# Verification of Argo CD Artifacts

## Prerequisites

- cosign `v2.0.0` or higher [installation instructions](https://docs.sigstore.dev/cosign/installation)
- slsa-verifier [installation instructions](https://github.com/slsa-framework/slsa-verifier#installation)
- crane [installation instructions](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) (for container verification only)

***

## Release Assets

| Asset                    | Description                       |
|--------------------------|-----------------------------------|
| argocd-darwin-amd64      | CLI Binary                        |
| argocd-darwin-arm64      | CLI Binary                        |
| argocd-linux_amd64       | CLI Binary                        |
| argocd-linux_arm64       | CLI Binary                        |
| argocd-linux_ppc64le     | CLI Binary                        |
| argocd-linux_s390x       | CLI Binary                        |
| argocd-windows_amd64     | CLI Binary                        |
| argocd-cli.intoto.jsonl  | Attestation of CLI binaries       |
| argocd-sbom.intoto.jsonl | Attestation of SBOM               |
| cli_checksums.txt        | Checksums of binaries             |
| sbom.tar.gz              | SBOM                              |
| sbom.tar.gz.pem          | Certificate used to sign the SBOM |
| sbom.tar.gz.sig          | Signature of the SBOM             |

***

## Verification of container images

Argo CD container images are signed by [cosign](https://github.com/sigstore/cosign) using identity-based ("keyless") signing and transparency. Run the following command to verify the signature of a container image:

```bash
cosign verify \
--certificate-identity-regexp https://github.com/argoproj/argo-cd/.github/workflows/image-reuse.yaml@refs/tags/v \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
--certificate-github-workflow-repository "argoproj/argo-cd" \
quay.io/argoproj/argocd:v2.11.3 | jq
```

The command should output the following if the container image was correctly verified:

```bash
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - Any certificates were verified against the Fulcio roots.
[
  {
    "critical": {
      "identity": {
        "docker-reference": "quay.io/argoproj/argo-cd"
      },
      "image": {
        "docker-manifest-digest": "sha256:63dc60481b1b2abf271e1f2b866be8a92962b0e53aaa728902caa8ac8d235277"
      },
      "type": "cosign container image signature"
    },
    "optional": {
      "1.3.6.1.4.1.57264.1.1": "https://token.actions.githubusercontent.com",
      "1.3.6.1.4.1.57264.1.2": "push",
      "1.3.6.1.4.1.57264.1.3": "a6ec84da0eaa519cbd91a8f016cf4050c03323b2",
      "1.3.6.1.4.1.57264.1.4": "Publish ArgoCD Release",
      "1.3.6.1.4.1.57264.1.5": "argoproj/argo-cd",
      "1.3.6.1.4.1.57264.1.6": "refs/tags/<version>",
      ...
```

***

## Verification of container image with SLSA attestations

A [SLSA](https://slsa.dev/) Level 3 provenance is generated using [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator).

The following command verifies the signature of an attestation and how it was issued. The output contains the payloadType, payload, and signature.

Run the following command as per the [slsa-verifier documentation](https://github.com/slsa-framework/slsa-verifier/tree/main#containers):

```bash
# Get the immutable container image to prevent TOCTOU attacks https://github.com/slsa-framework/slsa-verifier#toctou-attacks
IMAGE=quay.io/argoproj/argocd:v2.7.0
IMAGE="${IMAGE}@"$(crane digest "${IMAGE}")
# Verify provenance, including the tag to prevent rollback attacks.
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

If you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` flag, which performs semantic versioning verification:

```shell
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.
```

The attestation payload contains a non-forgeable provenance which is base64 encoded and can be viewed by passing the `--print-provenance` option to the commands above:

```bash
slsa-verifier verify-image "$IMAGE" \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0 \
  --print-provenance | jq
```

If you prefer using cosign, follow these [instructions](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#cosign).

!!! tip
    `cosign` or `slsa-verifier` can both be used to verify image attestations.
    Check the documentation of each binary for detailed instructions.

***

## Verification of CLI artifacts with SLSA attestations

A single attestation (`argocd-cli.intoto.jsonl`) from each release is provided. This can be used with [slsa-verifier](https://github.com/slsa-framework/slsa-verifier#verification-for-github-builders) to verify that a CLI binary was generated using Argo CD workflows on GitHub and ensures it was cryptographically signed.

```bash
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

If you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` flag, which performs semantic versioning verification:

```shell
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.
```

The payload is a non-forgeable provenance which is base64 encoded and can be viewed by passing the `--print-provenance` option to the commands above:

```bash
slsa-verifier verify-artifact argocd-linux-amd64 \
  --provenance-path argocd-cli.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0 \
  --print-provenance | jq
```

## Verification of SBOM

A single attestation (`argocd-sbom.intoto.jsonl`) from each release is provided along with the SBOM (`sbom.tar.gz`). This can be used with [slsa-verifier](https://github.com/slsa-framework/slsa-verifier#verification-for-github-builders) to verify that the SBOM was generated using Argo CD workflows on GitHub and ensures it was cryptographically signed.

```bash
slsa-verifier verify-artifact sbom.tar.gz \
  --provenance-path argocd-sbom.intoto.jsonl \
  --source-uri github.com/argoproj/argo-cd \
  --source-tag v2.7.0
```

***

## Verification on Kubernetes

### Policy controllers

!!! note
    We encourage all users to verify signatures and provenances with your admission/policy controller of choice. Doing so will verify that an image was built by us before it's deployed on your Kubernetes cluster.

Cosign signatures and SLSA provenances are compatible with several types of admission controllers. Please see the [cosign documentation](https://docs.sigstore.dev/cosign/overview/#kubernetes-integrations) and [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator/blob/main/internal/builders/container/README.md#verification) for supported controllers.
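***

## Verifying CLI binary checksums

The Release Assets table above also lists `cli_checksums.txt`. As a quick integrity check, independent of the SLSA attestation verification described earlier, you can compare a downloaded CLI binary against its published checksum. The sketch below assumes the standard GitHub release download URL and uses `v2.7.0` and `argocd-linux-amd64` as example values; substitute the release and asset name you actually downloaded:

```bash
VERSION=v2.7.0   # example release, substitute your own
curl -sSLO "https://github.com/argoproj/argo-cd/releases/download/${VERSION}/argocd-linux-amd64"
curl -sSLO "https://github.com/argoproj/argo-cd/releases/download/${VERSION}/cli_checksums.txt"

# Pick the checksum line for the downloaded binary and let sha256sum compare it
# against the file on disk. An "OK" result means the checksum matches.
grep " argocd-linux-amd64$" cli_checksums.txt | sha256sum --check
```

Note that this only checks the integrity of the download; the SLSA attestation verification above is what ties the binary back to the Argo CD release workflow.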
# TLS configuration

Argo CD provides three inbound TLS endpoints that can be configured:

* The user-facing endpoint of the `argocd-server` workload which serves the UI and the API
* The endpoint of the `argocd-repo-server`, which is accessed by `argocd-server` and `argocd-application-controller` workloads to request repository operations.
* The endpoint of the `argocd-dex-server`, which is accessed by `argocd-server` to handle OIDC authentication.

By default, and without further configuration, these endpoints will be set up to use an automatically generated, self-signed certificate. However, most users will want to explicitly configure the certificates for these TLS endpoints, possibly using automated means such as `cert-manager` or using their own dedicated Certificate Authority.

## Configuring TLS for argocd-server

### Inbound TLS options for argocd-server

You can configure certain TLS options for the `argocd-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--insecure`|`false`|Disables TLS completely|
|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|
|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|
|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon separated list of TLS cipher suites to be offered to clients|

### TLS certificates used by argocd-server

There are two ways to configure the TLS certificates used by `argocd-server`:

* Setting the `tls.crt` and `tls.key` keys in the `argocd-server-tls` secret to hold PEM data of the certificate and the corresponding private key. The `argocd-server-tls` secret may be of type `tls`, but does not have to be.
* Setting the `tls.crt` and `tls.key` keys in the `argocd-secret` secret to hold PEM data of the certificate and the corresponding private key. This method is considered deprecated and only exists for backwards compatibility; `argocd-secret` should no longer be used to override the TLS certificate.

Argo CD decides which TLS certificate to use for the endpoint of `argocd-server` as follows:

* If the `argocd-server-tls` secret exists and contains a valid key pair in the `tls.crt` and `tls.key` keys, this will be used for the certificate of the endpoint of `argocd-server`.
* Otherwise, if the `argocd-secret` secret contains a valid key pair in the `tls.crt` and `tls.key` keys, this will be used as certificate for the endpoint of `argocd-server`.
* If no `tls.crt` and `tls.key` keys are found in either of the two mentioned secrets, Argo CD will generate a self-signed certificate and persist it in the `argocd-secret` secret.

The `argocd-server-tls` secret contains only information for TLS configuration to be used by `argocd-server` and is safe to be managed via third-party tools such as `cert-manager` or `SealedSecrets`.

To create this secret manually from an existing key pair, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

Argo CD will pick up changes to the `argocd-server-tls` secret automatically and will not require a restart of the pods to use a renewed certificate.

## Configuring inbound TLS for argocd-repo-server

### Inbound TLS options for argocd-repo-server

You can configure certain TLS options for the `argocd-repo-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--disable-tls`|`false`|Disables TLS completely|
|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|
|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|
|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon separated list of TLS cipher suites to be offered to clients|

### Inbound TLS certificates used by argocd-repo-server

To configure the TLS certificate used by the `argocd-repo-server` workload, create a secret named `argocd-repo-server-tls` in the namespace where Argo CD is running, with the certificate's key pair stored in the `tls.crt` and `tls.key` keys. If this secret does not exist, `argocd-repo-server` will generate and use a self-signed certificate.

To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-repo-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, as opposed to `argocd-server`, the `argocd-repo-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-repo-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-repo-server`, containing at least the entries for `DNS:argocd-repo-server` and `DNS:argocd-repo-server.argo-cd.svc`, depending on how your workloads connect to the repository server.

## Configuring inbound TLS for argocd-dex-server

### Inbound TLS options for argocd-dex-server

You can configure certain TLS options for the `argocd-dex-server` workload by setting command line parameters. The following parameters are available:

|Parameter|Default|Description|
|---------|-------|-----------|
|`--disable-tls`|`false`|Disables TLS completely|

### Inbound TLS certificates used by argocd-dex-server

To configure the TLS certificate used by the `argocd-dex-server` workload, create a secret named `argocd-dex-server-tls` in the namespace where Argo CD is running, with the certificate's key pair stored in the `tls.crt` and `tls.key` keys. If this secret does not exist, `argocd-dex-server` will generate and use a self-signed certificate.

To create this secret, you can use `kubectl`:

```shell
kubectl create -n argocd secret tls argocd-dex-server-tls \
  --cert=/path/to/cert.pem \
  --key=/path/to/key.pem
```

If the certificate is self-signed, you will also need to add `ca.crt` to the secret with the contents of your CA certificate.

Please note that, as opposed to `argocd-server`, the `argocd-dex-server` is not able to pick up changes to this secret automatically. If you create (or update) this secret, the `argocd-dex-server` pods need to be restarted.

Also note that the certificate should be issued with the correct SAN entries for the `argocd-dex-server`, containing at least the entries for `DNS:argocd-dex-server` and `DNS:argocd-dex-server.argo-cd.svc`, depending on how your workloads connect to the Dex server.

## Configuring TLS between Argo CD components

### Configuring TLS to argocd-repo-server

Both `argocd-server` and `argocd-application-controller` communicate with the `argocd-repo-server` using a gRPC API over TLS. By default, `argocd-repo-server` generates a non-persistent, self-signed certificate to use for its gRPC endpoint on startup. Because the `argocd-repo-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. Both the `argocd-server` and the `argocd-application-controller` will therefore use a non-validating connection to the `argocd-repo-server`.

To change this behavior to be more secure by having the `argocd-server` and `argocd-application-controller` validate the TLS certificate of the `argocd-repo-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-repo-server`, as shown above
* Restart the `argocd-repo-server` pod(s)
* Modify the pod startup parameters for `argocd-server` and `argocd-application-controller` to include the `--repo-server-strict-tls` parameter.

The `argocd-server` and `argocd-application-controller` workloads will now validate the TLS certificate of the `argocd-repo-server` by using the certificate stored in the `argocd-repo-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper lifetime. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to properly work again.

### Configuring TLS to argocd-dex-server

`argocd-server` communicates with the `argocd-dex-server` using an HTTPS API over TLS. By default, `argocd-dex-server` generates a non-persistent, self-signed certificate to use for its HTTPS endpoint on startup. Because the `argocd-dex-server` has no means to connect to the K8s control plane API, this certificate is not available to outside consumers for verification. The `argocd-server` will therefore use a non-validating connection to the `argocd-dex-server`.

To change this behavior to be more secure by having the `argocd-server` validate the TLS certificate of the `argocd-dex-server` endpoint, the following steps need to be performed:

* Create a persistent TLS certificate to be used by `argocd-dex-server`, as shown above
* Restart the `argocd-dex-server` pod(s)
* Modify the pod startup parameters for `argocd-server` to include the `--dex-server-strict-tls` parameter.

The `argocd-server` workload will now validate the TLS certificate of the `argocd-dex-server` by using the certificate stored in the `argocd-dex-server-tls` secret.

!!!note "Certificate expiry"
    Please make sure that the certificate has a proper lifetime. Keep in mind that when you have to replace the certificate, all workloads have to be restarted in order to properly work again.

### Disabling TLS to argocd-repo-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connections between the `argocd-server` and `argocd-application-controller` to the `argocd-repo-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-repo-server` with TLS on the gRPC API disabled by specifying the `--disable-tls` parameter to the pod container's startup arguments. Also, consider restricting listening addresses to the loopback interface by specifying the `--listen 127.0.0.1` parameter, so that the insecure endpoint is not exposed on the pod's network interfaces, but is still available to the side-car container.
* Configure `argocd-server` and `argocd-application-controller` to not use TLS for connections to the `argocd-repo-server` by specifying the parameter `--repo-server-plaintext` to the pod container's startup arguments
* Configure `argocd-server` and `argocd-application-controller` to connect to the side-car instead of directly to the `argocd-repo-server` service by specifying its address via the `--repo-server <address>` parameter

After this change, the `argocd-server` and `argocd-application-controller` will use a plain text connection to their side-car proxy, which handles TLS to the `argocd-repo-server`'s side-car proxy.

### Disabling TLS to argocd-dex-server

In some scenarios where mTLS through side-car proxies is involved (e.g. in a service mesh), you may want to configure the connections between `argocd-server` and `argocd-dex-server` to not use TLS at all. In this case, you will need to:

* Configure `argocd-dex-server` with TLS on the HTTPS API disabled by specifying the `--disable-tls` parameter to the pod container's startup arguments
* Configure `argocd-server` to not use TLS for connections to the `argocd-dex-server` by specifying the parameter `--dex-server-plaintext` to the pod container's startup arguments
* Configure `argocd-server` to connect to the side-car instead of directly to the `argocd-dex-server` service by specifying its address via the `--dex-server <address>` parameter

After this change, the `argocd-server` will use a plain text connection to its side-car proxy, which handles TLS to the `argocd-dex-server`'s side-car proxy.
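## Example: creating a self-signed certificate for argocd-repo-server

For completeness, here is one way to produce a certificate with the SAN entries mentioned in the inbound TLS sections above and roll it out to the repository server. This is a hedged sketch rather than a recommendation: it assumes OpenSSL 1.1.1 or newer (for `-addext`), the default `argocd` namespace and manifests (where the repo server is a Deployment named `argocd-repo-server`), and a plain self-signed certificate; in production you would more likely use `cert-manager` or your own CA as described at the top of this page.

```bash
# Generate a self-signed certificate carrying the SAN entries the repo server clients expect.
# Adjust the namespace part of the service DNS name to where Argo CD actually runs.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout repo-server.key -out repo-server.crt \
  -subj "/CN=argocd-repo-server" \
  -addext "subjectAltName = DNS:argocd-repo-server, DNS:argocd-repo-server.argocd.svc"

# Store the key pair in the secret read by argocd-repo-server (and by the strict-TLS clients).
kubectl create -n argocd secret tls argocd-repo-server-tls \
  --cert=repo-server.crt \
  --key=repo-server.key

# The repo server does not watch this secret, so restart it to pick up the new certificate.
kubectl -n argocd rollout restart deployment argocd-repo-server
```

As noted above, a self-signed certificate may additionally require a `ca.crt` key in the secret; `kubectl create secret tls` only sets `tls.crt` and `tls.key`, so add the CA data separately if your setup needs it.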
# Resource Actions

## Overview

Argo CD allows operators to define custom actions which users can perform on specific resource types. This is used internally to provide actions like `restart` for a `DaemonSet`, or `retry` for an Argo Rollout.

Operators can add actions to custom resources in the form of a Lua script and expand those capabilities.

## Built-in Actions

The following are actions that are built-in to Argo CD. Each action name links to its Lua script definition:

{!docs/operator-manual/resource_actions_builtin.md!}

See the [RBAC documentation](rbac.md#the-action-action) for information on how to control access to these actions.

## Custom Resource Actions

Argo CD supports custom resource actions written in [Lua](https://www.lua.org/). This is useful if you:

* Have a custom resource for which Argo CD does not provide any built-in actions.
* Have a commonly performed manual task that might be error prone if executed by users via `kubectl`.

The resource actions act on a single object.

You can define your own custom resource actions in the `argocd-cm` ConfigMap.

### Custom Resource Action Types

#### An action that modifies the source resource

This action modifies and returns the source resource. This kind of action was the only one available until 2.8, and it is still supported.

#### An action that produces a list of new or modified resources

**An alpha feature, introduced in 2.8.**

This action returns a list of impacted resources, where each impacted resource has a K8S resource and an operation to perform on it.

Currently supported operations are "create" and "patch"; "patch" is only supported for the source resource.

Creating new resources is possible by specifying a "create" operation for each such resource in the returned list. One of the returned resources can be the modified source object, with a "patch" operation, if needed. See the definition examples below.

### Define a Custom Resource Action in `argocd-cm` ConfigMap

Custom resource actions can be defined in the `resource.customizations.actions.<group_kind>` field of `argocd-cm`. The following example demonstrates a set of custom actions for `CronJob` resources; each such action returns the modified CronJob. The customizations key is in the format of `resource.customizations.actions.<apiGroup_Kind>`.

```yaml
resource.customizations.actions.batch_CronJob: |
  discovery.lua: |
    actions = {}
    actions["suspend"] = {["disabled"] = true}
    actions["resume"] = {["disabled"] = true}

    local suspend = false
    if obj.spec.suspend ~= nil then
        suspend = obj.spec.suspend
    end
    if suspend then
        actions["resume"]["disabled"] = false
    else
        actions["suspend"]["disabled"] = false
    end
    return actions
  definitions:
  - name: suspend
    action.lua: |
      obj.spec.suspend = true
      return obj
  - name: resume
    action.lua: |
      if obj.spec.suspend ~= nil and obj.spec.suspend then
          obj.spec.suspend = false
      end
      return obj
```

The `discovery.lua` script must return a table where the key name represents the action name. You can optionally include logic to enable or disable certain actions based on the current object state.

Each action name must be represented in the list of `definitions` with an accompanying `action.lua` script to control the resource modifications. The `obj` is a global variable which contains the resource. Each action script returns an optionally modified version of the resource. In this example, we are simply setting `.spec.suspend` to either `true` or `false`.

By default, defining a resource action customization will override any built-in action for this resource kind.
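Once a customization like the CronJob example above is loaded into `argocd-cm`, users can discover and invoke the actions from the Argo CD CLI (or the UI). The following is a minimal sketch; the application name `my-app`, the CronJob name `hello`, and the `default` namespace are placeholders, and the exact flags can vary between CLI versions (check `argocd app actions --help`):

```bash
# List the actions that discovery.lua enabled for this application's resources.
argocd app actions list my-app --kind CronJob

# Run the "suspend" action against a specific CronJob belonging to the application.
argocd app actions run my-app suspend \
  --kind CronJob \
  --group batch \
  --resource-name hello \
  --namespace default
```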
As of Argo CD version 2.13.0, if you want to retain the built-in actions, you can set the `mergeBuiltinActions` key to `true`. Your custom actions will have precedence over the built-in actions.

```yaml
resource.customizations.actions.argoproj.io_Rollout: |
  mergeBuiltinActions: true
  discovery.lua: |
    actions = {}
    actions["do-things"] = {}
    return actions
  definitions:
  - name: do-things
    action.lua: |
      return obj
```

#### Creating new resources with a custom action

!!! important
    Creating resources via the Argo CD UI is an intentional, strategic departure from GitOps principles. We recommend that you use this feature sparingly and only for resources that are not part of the desired state of the application.

The resource the action is invoked on would be referred to as the `source resource`.

The new resource and all the resources implicitly created as a result must be permitted on the AppProject level, otherwise the creation will fail.

##### Creating child resources of the source resource with a custom action

If the new resource represents a k8s child of the source resource, the source resource `ownerReference` must be set on the new resource.

Here is an example Lua snippet that takes care of constructing a Job resource that is a child of a source CronJob resource - the `obj` is a global variable which contains the source resource:

```lua
-- ...
ownerRef = {}
ownerRef.apiVersion = obj.apiVersion
ownerRef.kind = obj.kind
ownerRef.name = obj.metadata.name
ownerRef.uid = obj.metadata.uid
job = {}
job.metadata = {}
job.metadata.ownerReferences = {}
job.metadata.ownerReferences[1] = ownerRef
-- ...
```

##### Creating independent child resources with a custom action

If the new resource is independent of the source resource, the default behavior of such a new resource is that it is not known by the App of the source resource (as it is not part of the desired state and does not have an `ownerReference`).

To make the App aware of the new resource, the `app.kubernetes.io/instance` label (or other Argo CD tracking label, if configured) must be set on the resource. It can be copied from the source resource, like this:

```lua
-- ...
newObj = {}
newObj.metadata = {}
newObj.metadata.labels = {}
newObj.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"]
-- ...
```

While the new resource will be part of the App with the tracking label in place, it will be immediately deleted if auto prune is set on the App. To keep the resource, set the `Prune=false` annotation on the resource, with this Lua snippet:

```lua
-- ...
newObj.metadata.annotations = {}
newObj.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false"
-- ...
```

(If setting the `Prune=false` behavior, the resource will not be deleted upon the deletion of the App and will require a manual cleanup.)

The resource and the App will now appear out of sync - which is the expected Argo CD behavior upon creating a resource that is not part of the desired state.

If you wish to treat such an App as a synced one, add the following resource annotation in Lua code:

```lua
-- ...
newObj.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous"
-- ...
```

#### An action that produces a list of resources - a complete example:

```yaml
resource.customizations.actions.ConfigMap: |
  discovery.lua: |
    actions = {}
    actions["do-things"] = {}
    return actions
  definitions:
  - name: do-things
    action.lua: |
      -- Create a new ConfigMap
      cm1 = {}
      cm1.apiVersion = "v1"
      cm1.kind = "ConfigMap"
      cm1.metadata = {}
      cm1.metadata.name = "cm1"
      cm1.metadata.namespace = obj.metadata.namespace
      cm1.metadata.labels = {}
      -- Copy ArgoCD tracking label so that the resource is recognized by the App
      cm1.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"]
      cm1.metadata.annotations = {}
      -- For Apps with auto-prune, set the prune false on the resource, so it does not get deleted
      cm1.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false"
      -- Keep the App synced even though it has a resource that is not in Git
      cm1.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous"
      cm1.data = {}
      cm1.data.myKey1 = "myValue1"
      impactedResource1 = {}
      impactedResource1.operation = "create"
      impactedResource1.resource = cm1

      -- Patch the original cm
      obj.metadata.labels["aKey"] = "aValue"
      impactedResource2 = {}
      impactedResource2.operation = "patch"
      impactedResource2.resource = obj

      result = {}
      result[1] = impactedResource1
      result[2] = impactedResource2

      return result
```
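When iterating on Lua actions like the ones above, it can help to exercise them outside the cluster first. Recent `argocd` CLI versions include an admin helper for this; the sketch below assumes the `resource-overrides run-action` subcommand is available in your CLI version (check `argocd admin settings resource-overrides --help`) and uses placeholder file paths:

```bash
# Dry-run the "do-things" action from a local copy of argocd-cm against a local
# ConfigMap manifest, printing the resulting objects without touching the cluster.
argocd admin settings resource-overrides run-action ./my-configmap.yaml do-things \
  --argocd-cm-path ./argocd-cm.yaml
```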
# Reconcile Optimization

By default, an Argo CD Application is refreshed every time a resource that belongs to it changes.

Kubernetes controllers often update the resources they watch periodically, causing continuous reconcile operation on the Application and a high CPU usage on the `argocd-application-controller`. Argo CD allows you to optionally ignore resource updates on specific fields for [tracked resources](../user-guide/resource_tracking.md). For untracked resources, you can [use the argocd.argoproj.io/ignore-resource-updates annotation](#ignoring-updates-for-untracked-resources).

When a resource update is ignored, if the resource's [health status](./health.md) does not change, the Application that this resource belongs to will not be reconciled.

## System-Level Configuration

By default, `resource.ignoreResourceUpdatesEnabled` is set to `true`, enabling Argo CD to ignore resource updates. This default setting ensures that Argo CD maintains sustainable performance by reducing unnecessary reconcile operations. If you need to alter this behavior, you can explicitly set `resource.ignoreResourceUpdatesEnabled` to `false` in the `argocd-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.ignoreResourceUpdatesEnabled: "false"
```

Argo CD allows ignoring resource updates at a specific JSON path, using [RFC6902 JSON patches](https://tools.ietf.org/html/rfc6902) and [JQ path expressions](https://stedolan.github.io/jq/manual/#path(path_expression)). It can be configured for a specified group and kind in the `resource.customizations` key of the `argocd-cm` ConfigMap.

Following is an example of a customization which ignores the `refreshTime` status field of an [`ExternalSecret`](https://external-secrets.io/main/api/externalsecret/) resource:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.external-secrets.io_ExternalSecret: |
    jsonPointers:
    - /status/refreshTime
    # JQ equivalent of the above:
    # jqPathExpressions:
    # - .status.refreshTime
```

It is possible to configure `ignoreResourceUpdates` to be applied to all tracked resources in every Application managed by an Argo CD instance. In order to do so, resource customizations can be configured like in the example below:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.all: |
    jsonPointers:
    - /status
```

### Using ignoreDifferences to ignore reconcile

It is possible to use existing system-level `ignoreDifferences` customizations to ignore resource updates as well. Instead of copying all configurations, the `ignoreDifferencesOnResourceUpdates` setting can be used to add all ignored differences as ignored resource updates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.compareoptions: |
    ignoreDifferencesOnResourceUpdates: true
```

## Default Configuration

By default, the metadata fields `generation`, `resourceVersion` and `managedFields` are always ignored for all resources.

## Finding Resources to Ignore

The application controller logs when a resource change triggers a refresh. You can use these logs to find high-churn resource kinds and then inspect those resources to find which fields to ignore.

To find these logs, search for `"Requesting app refresh caused by object update"`. The logs include structured fields for `api-version` and `kind`. Counting the number of refreshes triggered by api-version/kind should reveal the high-churn resource kinds.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.

Once you have identified some resources which change often, you can try to determine which fields are changing. Here is one approach:

```shell
kubectl get <resource> -o yaml > /tmp/before.yaml
# Wait a minute or two.
kubectl get <resource> -o yaml > /tmp/after.yaml
diff /tmp/before.yaml /tmp/after.yaml
```

The diff can give you a sense for which fields are changing and should perhaps be ignored.

## Checking Whether Resource Updates are Ignored

Whenever Argo CD skips a refresh due to an ignored resource update, the controller logs the following line: "Ignoring change of object because none of the watched resource fields have changed". Search the application-controller logs for this line to confirm that your resource ignore rules are being applied.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.

## Examples

### argoproj.io/Application

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.customizations.ignoreResourceUpdates.argoproj.io_Application: |
    jsonPointers:
      # Ignore when ownerReferences change, for example when a parent ApplicationSet changes often.
      - /metadata/ownerReferences
      # Ignore reconciledAt, since by itself it doesn't indicate any important change.
      - /status/reconciledAt
    jqPathExpressions:
      # Ignore lastTransitionTime for conditions; helpful when SharedResourceWarnings are being regularly updated but not
      # actually changing in content.
      - .status?.conditions[]?.lastTransitionTime
```

## Ignoring updates for untracked resources

Argo CD will only apply the `ignoreResourceUpdates` configuration to tracked resources of an application. This means dependent resources, such as a `ReplicaSet` and `Pod` created by a `Deployment`, will not ignore any updates and will trigger a reconcile of the application for any change.

If you want to apply the `ignoreResourceUpdates` configuration to an untracked resource, you can add the `argocd.argoproj.io/ignore-resource-updates=true` annotation in the dependent resource's manifest.

## Example

### CronJob

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test-cronjob
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      annotations:
        argocd.argoproj.io/ignore-resource-updates: "true"
    spec:
      template:
        metadata:
          annotations:
            argocd.argoproj.io/ignore-resource-updates: "true"
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```

The resource updates will be ignored based on the `ignoreResourceUpdates` configuration in the `argocd-cm` ConfigMap:

`argocd-cm`:

```yaml
resource.customizations.ignoreResourceUpdates.batch_Job: |
  jsonPointers:
    - /status
resource.customizations.ignoreResourceUpdates.Pod: |
  jsonPointers:
    - /status
```
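To make the "Finding Resources to Ignore" procedure above concrete, here is one way to count refresh triggers per kind directly from the controller logs. This is a sketch that assumes the default text log format and a default installation, where the application controller runs as the `argocd-application-controller` StatefulSet in the `argocd` namespace:

```bash
# Requires the application-controller log level to be set to debug (see above).
# Count refresh triggers per kind over the last hour; repeat with 'api-version='
# instead of 'kind=' if you also need the API group breakdown.
kubectl logs -n argocd statefulset/argocd-application-controller --since=1h \
  | grep "Requesting app refresh caused by object update" \
  | grep -o 'kind=[^ ]*' \
  | sort | uniq -c | sort -rn | head
```

The kinds at the top of this list are the best candidates for `ignoreResourceUpdates` customizations.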
# High Availability

Argo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.

A set of [HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.

> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity rules in the
> specs. Additionally, IPv6 only clusters are not supported.

## Scaling Up

### argocd-repo-server

**settings:**

The `argocd-repo-server` is responsible for cloning the Git repository, keeping it up to date and generating manifests using the appropriate tool.

* `argocd-repo-server` forks/execs config management tools to generate manifests. The fork can fail due to lack of memory or a limit on the number of OS threads.
The `--parallelismlimit` flag controls how many manifest generations are running concurrently and helps avoid OOM kills.

* the `argocd-repo-server` ensures that the repository is in a clean state during the manifest generation using config management tools such as Kustomize, Helm
or a custom plugin. As a result, Git repositories with multiple applications might affect repository server performance.
Read [Monorepo Scaling Considerations](#monorepo-scaling-considerations) for more information.

* `argocd-repo-server` clones the repository into `/tmp` (or the path specified in the `TMPDIR` env variable). The Pod might run out of disk space if it has too many repositories
or if the repositories have a lot of files. To avoid this problem mount a persistent volume.

* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens frequently
and might fail. To avoid failed syncs use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests (see the sketch at the end of this section).

* Every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches the generated manifests (for 24h by default). With Kustomize remote bases, or in case a Helm chart gets changed without bumping its version number, the expected manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`, and we'd suggest in low volume environments you try '1h'. Bear in mind that this will negate the benefits of caching if set too low.

* `argocd-repo-server` executes config management tools such as `helm` or `kustomize` and enforces a 90 second timeout. This timeout can be changed by using the `ARGOCD_EXEC_TIMEOUT` env variable. The value should be in the Go time duration string format, for example, `2m30s`.

**metrics:**

* `argocd_git_request_total` - Number of git requests. This metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`.
* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - Is an environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store!
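As a sketch (not the only way to set these), the environment variables mentioned in the settings list above can be added to the `argocd-repo-server` Deployment with a strategic merge patch; the values shown are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
      - name: argocd-repo-server
        env:
        # Retry flaky `git ls-remote` calls a few times before failing the sync.
        - name: ARGOCD_GIT_ATTEMPTS_COUNT
          value: "3"
        # Give slow config management tools more than the default 90 seconds.
        - name: ARGOCD_EXEC_TIMEOUT
          value: "3m"
```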
### argocd-application-controller

**settings:**

The `argocd-application-controller` uses `argocd-repo-server` to get generated manifests and the Kubernetes API server to get the actual cluster state.

* each controller replica uses two separate queues to process application reconciliation (milliseconds) and app syncing (seconds). The number of queue processors for each queue is controlled by
`--status-processors` (20 by default) and `--operation-processors` (10 by default) flags. Increase the number of processors if your Argo CD instance manages too many applications.
For 1000 applications we use 50 for `--status-processors` and 25 for `--operation-processors`.

* The manifest generation typically takes the most time during reconciliation. The duration of manifest generation is limited to make sure the controller refresh queue does not overflow.
The app reconciliation fails with a `Context deadline exceeded` error if the manifest generation is taking too much time. As a workaround increase the value of `--repo-server-timeout-seconds` and
consider scaling up the `argocd-repo-server` deployment.

* The controller uses Kubernetes watch APIs to maintain a lightweight Kubernetes cluster cache. This allows avoiding querying Kubernetes during app reconciliation and significantly improves
performance. For performance reasons the controller monitors and caches only the preferred versions of a resource. During reconciliation, the controller might have to convert cached resources from the
preferred version into a version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported then the controller falls back to a Kubernetes API query which slows down
reconciliation. In this case, we advise using the preferred resource version in Git.

* The controller polls Git every 3m by default. You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` settings in the `argocd-cm` ConfigMap. The value of the fields is a duration string, e.g. `60s`, `1m`, `1h` or `1d` (a ConfigMap sketch follows at the end of this settings list).

* If the controller is managing too many clusters and uses too much memory then you can shard clusters across multiple controller replicas. To enable sharding,
increase the number of replicas in the `argocd-application-controller` `StatefulSet` and repeat the number of replicas in the `ARGOCD_CONTROLLER_REPLICAS` environment variable. The strategic merge patch below
demonstrates changes required to configure two controller replicas.

* By default, the controller will update the cluster information every 10 seconds. If there is a problem with your cluster network environment that is causing the update time to take a long time,
you can try modifying the environment variable `ARGO_CD_UPDATE_CLUSTER_INFO_TIMEOUT` to increase the timeout (the unit is seconds).

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: argocd-application-controller
        env:
        - name: ARGOCD_CONTROLLER_REPLICAS
          value: "2"
```

* In order to manually set the cluster's shard number, specify the optional `shard` property when creating a cluster. If not specified, it will be calculated on the fly by the application controller.

* The shard distribution algorithm of the `argocd-application-controller` can be set by using the `--sharding-method` parameter. Supported sharding methods are: [legacy (default), round-robin, consistent-hashing]:
    - `legacy` mode uses a `uid` based distribution (non-uniform).
    - `round-robin` uses an equal distribution across all shards.
    - `consistent-hashing` uses the consistent hashing with bounded loads algorithm which tends to equal distribution and also reduces cluster or application reshuffling in case of additions or removals of shards or clusters.

    The `--sharding-method` parameter can also be overridden by setting the key `controller.sharding.algorithm` in the `argocd-cmd-params-cm` `configMap` (preferably) or by setting the `ARGOCD_CONTROLLER_SHARDING_ALGORITHM` environment variable and by specifying the same possible values.

!!! warning "Alpha Features"
    The `round-robin` shard distribution algorithm is an experimental feature. Reshuffling is known to occur in certain scenarios with cluster removal. If the cluster at rank-0 is removed, reshuffling all clusters across shards will occur and may temporarily have negative performance impacts.

    The `consistent-hashing` shard distribution algorithm is an experimental feature. Extensive benchmarks have been documented on the [CNOE blog](https://cnoe.io/blog/argo-cd-application-scalability) with encouraging results. Community feedback is highly appreciated before moving this feature to a production ready state.

* A cluster can be manually assigned and forced to a `shard` by patching the `shard` field in the cluster secret to contain the shard number, e.g.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  shard: 1
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store!

* `ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` - environment variable controlling the number of pages the controller buffers in memory when performing a list operation against the K8s api server while syncing the cluster cache. This is useful when the cluster contains a large number of resources and cluster sync times exceed the default etcd compaction interval timeout. In this scenario, when attempting to sync the cluster cache, the application controller may throw an error that the `continue parameter is too old to display a consistent list result`. Setting a higher value for this environment variable configures the controller with a larger buffer in which to store pre-fetched pages which are processed asynchronously, increasing the likelihood that all pages have been pulled before the etcd compaction interval timeout expires. In the most extreme case, operators can set this value such that `ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE * ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` exceeds the largest resource count (grouped by k8s api version, the granule of parallelism for list operations). In this case, all resources will be buffered in memory -- no api server request will be blocked by processing.

* `ARGOCD_APPLICATION_TREE_SHARD_SIZE` - environment variable controlling the max number of resources stored in one Redis key. Splitting the application tree into multiple keys helps to reduce the amount of traffic between the controller and Redis. The default value is 0, which means that the application tree is stored in a single Redis key. A reasonable value is 100.
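The Git polling interval and jitter mentioned in the list above are set in the `argocd-cm` ConfigMap. A minimal sketch with illustrative values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Poll Git every 5 minutes instead of the default 3m.
  timeout.reconciliation: 300s
  # Spread refreshes out by adding up to 60s of jitter.
  timeout.reconciliation.jitter: 60s
```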
**metrics**

* `argocd_app_reconcile` - reports application reconciliation duration in seconds. Can be used to build a reconciliation duration heat map to get a high-level reconciliation performance picture.
* `argocd_app_k8s_request_total` - number of k8s requests per application. The number of fallback Kubernetes API queries - useful to identify which application has a resource with a
non-preferred version and causes performance issues.
### argocd-server

The `argocd-server` is stateless and probably the least likely to cause issues. To ensure there is no downtime during upgrades, consider increasing the number of replicas to `3` or more and repeat the number
in the `ARGOCD_API_SERVER_REPLICAS` environment variable. The strategic merge patch below demonstrates this.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: argocd-server
        env:
        - name: ARGOCD_API_SERVER_REPLICAS
          value: "3"
```

**settings:**

* The `ARGOCD_API_SERVER_REPLICAS` environment variable is used to divide [the limit of concurrent login requests (`ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`)](./user-management/index.md#failed-logins-rate-limiting) between each replica.
* The `ARGOCD_GRPC_MAX_SIZE_MB` environment variable allows specifying the max size of the server response message in megabytes. The default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.

### argocd-dex-server, argocd-redis

The `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total redis servers/sentinels.

## Monorepo Scaling Considerations

Argo CD repo server maintains one repository clone locally and uses it for application manifest generation. If the manifest generation requires changing a file in the local repository clone then only one concurrent manifest generation per server instance is allowed. This limitation might significantly slow down Argo CD if you have a monorepo with multiple applications (50+).

### Enable Concurrent Processing

Argo CD determines if manifest generation might change local files in the local repository clone based on the config management tool and application settings.
If the manifest generation has no side effects then requests are processed in parallel without a performance penalty. The following are known cases that might cause slowness and their workarounds:

* **Multiple Helm based applications pointing to the same directory in one Git repository:** for historical reasons Argo CD generates Helm manifests sequentially. To enable parallel generation set `ARGOCD_HELM_ALLOW_CONCURRENCY=true` on the `argocd-repo-server` deployment or create a `.argocd-allow-concurrency` file (see the sketch after this list). Future versions of Argo CD will enable this by default.

* **Multiple Custom plugin based applications:** avoid creating temporary files during manifest generation and create a `.argocd-allow-concurrency` file in the app directory, or use the sidecar plugin option, which processes each application using a temporary copy of the repository.

* **Multiple Kustomize applications in same repository with [parameter overrides](../user-guide/parameters.md):** sorry, no workaround for now.
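For the Helm and custom plugin cases above, opting an application directory into concurrent processing is just a matter of committing the marker file. A minimal sketch, where `my-app` is a hypothetical application directory in your repository:

```bash
# Add the concurrency marker file to the application's directory in Git.
touch my-app/.argocd-allow-concurrency
git add my-app/.argocd-allow-concurrency
git commit -m "Allow concurrent manifest generation for my-app"
git push
```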
### Manifest Paths Annotation

Argo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key. A new commit to the Git repository invalidates the cache for all applications configured in the repository.
This can negatively affect repositories with multiple applications.

You can use [webhooks](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/webhook.md) and the `argocd.argoproj.io/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance.

The `argocd.argoproj.io/manifest-generate-paths` annotation contains a semicolon-separated list of paths within the Git repository that are used during manifest generation. It will use the paths specified in the annotation to compare the last cached revision to the latest commit. If no modified files match the paths specified in `argocd.argoproj.io/manifest-generate-paths`, then it will not trigger application reconciliation and the existing cache will be considered valid for the new commit.

Installations that use a different repository for each application are **not** subject to this behavior and will likely get no benefit from using these annotations.

Similarly, applications referencing an external Helm values file will not get the benefits of this feature when an unrelated change happens in the external source.

For webhooks, the comparison is done using the files specified in the webhook event payload instead.

!!! note
    Application manifest paths annotation support for webhooks depends on the git provider used for the Application. It is currently only supported for GitHub, GitLab, and Gogs based repos.

* **Relative path** The annotation might contain a relative path. In this case the path is considered relative to the path specified in the application source:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    # resolves to the 'guestbook' directory
    argocd.argoproj.io/manifest-generate-paths: .
spec:
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
# ...
```

* **Absolute path** The annotation value might be an absolute path starting with '/'. In this case path is considered as an absolute path within the Git repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  annotations:
    argocd.argoproj.io/manifest-generate-paths: /guestbook
spec:
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
# ...
```

* **Multiple paths** It is possible to put multiple paths into the annotation. Paths must be separated with a semicolon (`;`):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  annotations:
    # resolves to 'my-application' and 'shared'
    argocd.argoproj.io/manifest-generate-paths: .;../shared
spec:
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: my-application
# ...
```

* **Glob paths** The annotation might contain a glob pattern path, which can be any pattern supported by the [Go filepath Match function](https://pkg.go.dev/path/filepath#Match):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  annotations:
    # resolves to any file matching the pattern of *-secret.yaml in the top level shared folder
    argocd.argoproj.io/manifest-generate-paths: "/shared/*-secret.yaml"
spec:
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
# ...
```
!!! note
    If application manifest generation using the `argocd.argoproj.io/manifest-generate-paths` annotation feature is enabled, only the resources specified by this annotation will be sent to the CMP server for manifest generation, rather than the entire repository. To determine the appropriate resources, a common root path is calculated based on the paths provided in the annotation. The application path serves as the deepest path that can be selected as the root.

### Application Sync Timeout & Jitter

Argo CD has a timeout for application syncs. It will trigger a refresh for each application periodically when the timeout expires.
With a large number of applications, this will cause a spike in the refresh queue and can cause a spike to the repo-server component. To avoid this, you can set a jitter to the sync timeout which will spread out the refreshes and give time to the repo-server to catch up.

The jitter is the maximum duration that can be added to the sync timeout, so if the sync timeout is 5 minutes and the jitter is 1 minute, then the actual timeout will be between 5 and 6 minutes.

To configure the jitter you can set the following environment variables:

* `ARGOCD_RECONCILIATION_JITTER` - The jitter to apply to the sync timeout. Disabled when value is 0. Defaults to 0.

## Rate Limiting Application Reconciliations

To prevent high controller resource usage or sync loops caused either due to misbehaving apps or other environment specific factors,
we can configure rate limits on the workqueues used by the application controller. There are two types of rate limits that can be configured:

* Global rate limits
* Per item rate limits

The final rate limiter uses a combination of both and calculates the final backoff as `max(globalBackoff, perItemBackoff)`.

### Global rate limits

This is disabled by default. It is a simple bucket based rate limiter that limits the number of items that can be queued per second.
This is useful to prevent a large number of apps from being queued at the same time.

To configure the bucket limiter you can set the following environment variables:

* `WORKQUEUE_BUCKET_SIZE` - The number of items that can be queued in a single burst. Defaults to 500.
* `WORKQUEUE_BUCKET_QPS` - The number of items that can be queued per second. Defaults to MaxFloat64, which disables the limiter.
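A sketch of enabling the bucket limiter, assuming the variables are read from the application controller's environment and using illustrative values:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  template:
    spec:
      containers:
      - name: argocd-application-controller
        env:
        # Allow bursts of up to 500 queued items...
        - name: WORKQUEUE_BUCKET_SIZE
          value: "500"
        # ...but throttle sustained queueing to 50 items per second.
        - name: WORKQUEUE_BUCKET_QPS
          value: "50"
```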
### Per item rate limits

This by default returns a fixed base delay/backoff value but can be configured to return exponential values.
The per item rate limiter limits the number of times a particular item can be queued. This is based on exponential backoff where the backoff time for an item keeps increasing exponentially
if it is queued multiple times in a short period, but the backoff is reset automatically if a configured `cool down` period has elapsed since the last time the item was queued.

To configure the per item limiter you can set the following environment variables:

* `WORKQUEUE_FAILURE_COOLDOWN_NS` : The cool down period in nanoseconds; once the period has elapsed for an item the backoff is reset. Exponential backoff is disabled if set to 0 (default), e.g. value: 10 * 10^9 (=10s)
* `WORKQUEUE_BASE_DELAY_NS` : The base delay in nanoseconds; this is the initial backoff used in the exponential backoff formula. Defaults to 1000 (=1μs)
* `WORKQUEUE_MAX_DELAY_NS` : The max delay in nanoseconds; this is the max backoff limit. Defaults to 3 * 10^9 (=3s)
* `WORKQUEUE_BACKOFF_FACTOR` : The backoff factor; this is the factor by which the backoff is increased for each retry. Defaults to 1.5

The formula used to calculate the backoff time for an item, where `numRequeue` is the number of times the item has been queued
and `lastRequeueTime` is the time at which the item was last queued:

- When `WORKQUEUE_FAILURE_COOLDOWN_NS` != 0 :

```
backoff = time.Since(lastRequeueTime) >= WORKQUEUE_FAILURE_COOLDOWN_NS ?
          WORKQUEUE_BASE_DELAY_NS :
          min(
              WORKQUEUE_MAX_DELAY_NS,
              WORKQUEUE_BASE_DELAY_NS * WORKQUEUE_BACKOFF_FACTOR ^ (numRequeue)
          )
```

- When `WORKQUEUE_FAILURE_COOLDOWN_NS` = 0 :

```
backoff = WORKQUEUE_BASE_DELAY_NS
```

## HTTP Request Retry Strategy

In scenarios where network instability or transient server errors occur, the retry strategy ensures the robustness of HTTP communication by automatically resending failed requests. It uses a combination of maximum retries and backoff intervals to prevent overwhelming the server or thrashing the network.

### Configuring Retries

The retry logic can be fine-tuned with the following environment variables:

* `ARGOCD_K8SCLIENT_RETRY_MAX` - The maximum number of retries for each request. The request will be dropped after this count is reached. Defaults to 0 (no retries).
* `ARGOCD_K8SCLIENT_RETRY_BASE_BACKOFF` - The initial backoff delay on the first retry attempt in ms. Subsequent retries will double this backoff time up to a maximum threshold. Defaults to 100ms.

### Backoff Strategy

The backoff strategy employed is a simple exponential backoff without jitter. The backoff time increases exponentially with each retry attempt until a maximum backoff duration is reached.

The formula for calculating the backoff time is:

```
backoff = min(retryWaitMax, baseRetryBackoff * (2 ^ retryAttempt))
```

Where `retryAttempt` starts at 0 and increments by 1 for each subsequent retry.

### Maximum Wait Time

There is a cap on the backoff time to prevent excessive wait times between retries. This cap is defined by:

`retryWaitMax` - The maximum duration to wait before retrying. This ensures that retries happen within a reasonable timeframe. Defaults to 10 seconds.

### Non-Retriable Conditions

Not all HTTP responses are eligible for retries. The following conditions will not trigger a retry:

* Responses with a status code indicating client errors (4xx) except for 429 Too Many Requests.
* Responses with the status code 501 Not Implemented.

## CPU/Memory Profiling

Argo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of the Argo CD component.
The profiling endpoint is available on the metrics port of each component. See [metrics](./metrics.md) for more information about the port.

For security reasons the profiling endpoint is disabled by default. The endpoint can be enabled by setting the `server.profile.enabled`
or `controller.profile.enabled` key of the [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml) ConfigMap to `true`.

Once the endpoint is enabled you can use `go tool pprof` to collect the CPU and memory profiles. Example:

```bash
$ kubectl port-forward svc/argocd-metrics 8082:8082
$ go tool pprof http://localhost:8082/debug/pprof/heap
```
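The corresponding `argocd-cmd-params-cm` change to enable, for example, the application controller's profiling endpoint might look like this minimal sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Expose /debug/pprof on the controller's metrics port.
  controller.profile.enabled: "true"
```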
# Git Webhook Configuration

## Overview

Argo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate
this delay from polling, the API server can be configured to receive webhook events. Argo CD supports
Git webhook notifications from GitHub, GitLab, Bitbucket, Bitbucket Server, Azure DevOps and Gogs. The following explains how to configure
a Git webhook for GitHub, but the same process should be applicable to other providers.

!!! note
    The webhook handler does not differentiate between branch events and tag events where the branch and tag names are
    the same. A hook event for a push to branch `x` will trigger a refresh for an app pointing at the same repo with
    `targetRevision: refs/tags/x`.

## 1. Create The WebHook In The Git Provider

In your Git provider, navigate to the settings page where webhooks can be configured. The payload
URL configured in the Git provider should use the `/api/webhook` endpoint of your Argo CD instance
(e.g. `https://argocd.example.com/api/webhook`). If you wish to use a shared secret, input an
arbitrary value in the secret. This value will be used when configuring the webhook in the next step.

To prevent DDoS attacks with unauthenticated webhook events (the `/api/webhook` endpoint currently lacks rate limiting protection), it is recommended to limit the payload size. You can achieve this by configuring the `argocd-cm` ConfigMap with the `webhook.maxPayloadSizeMB` attribute. The default value is 1GB.
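For example, a minimal `argocd-cm` sketch limiting the payload size (the value shown is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Reject webhook payloads larger than 50MB.
  webhook.maxPayloadSizeMB: "50"
```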
Simply copy the shared webhook secret created in step 1, to the corresponding GitHub/GitLab/BitBucket key under the `stringData` field:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
type: Opaque
data:
...
stringData:
  # github webhook secret
  webhook.github.secret: shhhh! it's a GitHub secret

  # gitlab webhook secret
  webhook.gitlab.secret: shhhh! it's a GitLab secret

  # bitbucket webhook secret
  webhook.bitbucket.uuid: your-bitbucket-uuid

  # bitbucket server webhook secret
  webhook.bitbucketserver.secret: shhhh! it's a Bitbucket server secret

  # gogs server webhook secret
  webhook.gogs.secret: shhhh! it's a gogs server secret

  # azuredevops username and password
  webhook.azuredevops.username: admin
  webhook.azuredevops.password: secret-password
```

After saving, the changes should take effect automatically.

### Alternative

If you want to store webhook data in **another** Kubernetes `Secret` instead of `argocd-secret`, reference it with a value that starts with `$`, followed by the name of that Kubernetes `Secret` and a `:` (colon).

Syntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`

> NOTE: The referenced Secret must have the label `app.kubernetes.io/part-of: argocd`

For more information refer to the corresponding section in the [User Management Documentation](user-management/index.md#alternative).
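A minimal sketch of this pattern, assuming a hypothetical external secret named `github-webhook-secret`: the real value lives in your own labelled Secret, and the corresponding key in `argocd-secret` points at it using the syntax above.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-webhook-secret   # illustrative name
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd   # required label, see the note above
type: Opaque
stringData:
  webhook-secret: shhhh! it's a GitHub secret
---
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
type: Opaque
stringData:
  # resolved by Argo CD via the $<k8s_secret_name>:<key> syntax
  webhook.github.secret: $github-webhook-secret:webhook-secret
```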
# Declarative Setup Argo CD applications, projects and settings can be defined declaratively using Kubernetes manifests. These can be updated using `kubectl apply`, without needing to touch the `argocd` command-line tool. ## Quick Reference All resources, including `Application` and `AppProject` specs, have to be installed in the Argo CD namespace (by default `argocd`). ### Atomic configuration | Sample File | Resource Name | Kind | Description | |-----------------------------------------------------------------------|------------------------------------------------------------------------------------|-----------|--------------------------------------------------------------------------------------| | [`argocd-cm.yaml`](argocd-cm-yaml.md) | argocd-cm | ConfigMap | General Argo CD configuration | | [`argocd-repositories.yaml`](argocd-repositories-yaml.md) | my-private-repo / istio-helm-repo / private-helm-repo / private-repo | Secrets | Sample repository connection details | | [`argocd-repo-creds.yaml`](argocd-repo-creds-yaml.md) | argoproj-https-creds / argoproj-ssh-creds / github-creds / github-enterprise-creds | Secrets | Sample repository credential templates | | [`argocd-cmd-params-cm.yaml`](argocd-cmd-params-cm-yaml.md) | argocd-cmd-params-cm | ConfigMap | Argo CD env variables configuration | | [`argocd-secret.yaml`](argocd-secret-yaml.md) | argocd-secret | Secret | User Passwords, Certificates (deprecated), Signing Key, Dex secrets, Webhook secrets | | [`argocd-rbac-cm.yaml`](argocd-rbac-cm-yaml.md) | argocd-rbac-cm | ConfigMap | RBAC Configuration | | [`argocd-tls-certs-cm.yaml`](argocd-tls-certs-cm-yaml.md) | argocd-tls-certs-cm | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) | | [`argocd-ssh-known-hosts-cm.yaml`](argocd-ssh-known-hosts-cm-yaml.md) | argocd-ssh-known-hosts-cm | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) | For each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (as listed in the above table) - if you need to merge things you need to do it before creating them. !!!warning "A note about ConfigMap resources" Be sure to annotate your ConfigMap resources using the label `app.kubernetes.io/part-of: argocd`, otherwise Argo CD will not be able to use them. ### Multiple configuration objects | Sample File | Kind | Description | |------------------------------------------------------------------|-------------|--------------------------| | [`application.yaml`](../user-guide/application-specification.md) | Application | Example application spec | | [`project.yaml`](./project-specification.md) | AppProject | Example project spec | | [`argocd-repositories.yaml`](./argocd-repositories-yaml.md) | Secret | Repository credentials | For `Application` and `AppProject` resources, the name of the resource equals the name of the application or project within Argo CD. This also means that application and project names are unique within a given Argo CD installation - you cannot have the same application name for two different applications. ## Applications The Application CRD is the Kubernetes resource object representing a deployed application instance in an environment. It is defined by two key pieces of information: * `source` reference to the desired state in Git (repository, revision, path, environment) * `destination` reference to the target cluster and namespace. 
For the cluster, one of `server` or `name` can be used, but not both (specifying both results in an error). Under the hood, when `server` is missing it is looked up from `name` and used for any operations.

A minimal Application spec is as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
```

See [application.yaml](application.yaml) for additional fields. As long as you have completed the first step of [Getting Started](../getting_started.md#1-install-argo-cd), you can apply this with `kubectl apply -n argocd -f application.yaml` and Argo CD will start deploying the guestbook application.

!!! note
    The namespace must match the namespace of your Argo CD instance - typically this is `argocd`.

!!! note
    When creating an application from a Helm repository, the `chart` attribute must be specified instead of the `path` attribute within `spec.source`.

    ```yaml
    spec:
      source:
        repoURL: https://argoproj.github.io/argo-helm
        chart: argo
    ```

!!! warning
    Without the `resources-finalizer.argocd.argoproj.io` finalizer, deleting an application will not delete the resources it manages. To perform a cascading delete, you must add the finalizer. See [App Deletion](../user-guide/app_deletion.md#about-the-deletion-finalizer).

    ```yaml
    metadata:
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    ```

### App of Apps

You can create an app that creates other apps, which in turn can create other apps. This allows you to declaratively manage a group of apps that can be deployed and configured in concert. See [cluster bootstrapping](cluster-bootstrapping.md).

## Projects

The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications. It is defined by the following key pieces of information:

* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.

!!!warning "Projects which can deploy to the Argo CD namespace grant admin access"
    If a Project's `destinations` configuration allows deploying to the namespace in which Argo CD is installed, then Applications under that project have admin-level access. [RBAC access](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/) to admin-level Projects should be carefully restricted, and push access to allowed `sourceRepos` should be limited to only admins.
An example spec is as follows: ```yaml apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: my-project namespace: argocd # Finalizer that ensures that project is not deleted until it is not referenced by any application finalizers: - resources-finalizer.argocd.argoproj.io spec: description: Example Project # Allow manifests to deploy from any Git repos sourceRepos: - '*' # Only permit applications to deploy to the guestbook namespace in the same cluster destinations: - namespace: guestbook server: https://kubernetes.default.svc # Deny all cluster-scoped resources from being created, except for Namespace clusterResourceWhitelist: - group: '' kind: Namespace # Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy namespaceResourceBlacklist: - group: '' kind: ResourceQuota - group: '' kind: LimitRange - group: '' kind: NetworkPolicy # Deny all namespaced-scoped resources from being created, except for Deployment and StatefulSet namespaceResourceWhitelist: - group: 'apps' kind: Deployment - group: 'apps' kind: StatefulSet roles: # A role which provides read-only access to all applications in the project - name: read-only description: Read-only privileges to my-project policies: - p, proj:my-project:read-only, applications, get, my-project/*, allow groups: - my-oidc-group # A role which provides sync privileges to only the guestbook-dev application, e.g. to provide # sync privileges to a CI system - name: ci-role description: Sync privileges for guestbook-dev policies: - p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow # NOTE: JWT tokens can only be generated by the API server and the token is not persisted # anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list. jwtTokens: - iat: 1535390316 ``` ## Repositories !!!note Some Git hosters - notably GitLab and possibly on-premise GitLab instances as well - require you to specify the `.git` suffix in the repository URL, otherwise they will send a HTTP 301 redirect to the repository URL suffixed with `.git`. Argo CD will **not** follow these redirects, so you have to adjust your repository URL to be suffixed with `.git`. Repository details are stored in secrets. To configure a repo, create a secret which contains repository details. Consider using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) to store an encrypted secret definition as a Kubernetes manifest. Each repository must have a `url` field and, depending on whether you connect using HTTPS, SSH, or GitHub App, `username` and `password` (for HTTPS), `sshPrivateKey` (for SSH), or `githubAppPrivateKey` (for GitHub App). Credentials can be scoped to a project using the optional `project` field. When omitted, the credential will be used as the default for all projects without a scoped credential. 
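If you take the sealed-secrets suggestion above, the plain `Secret` manifest is encrypted with the `kubeseal` CLI before being committed to Git, and the sealed-secrets controller recreates the real `Secret` in-cluster. A rough sketch, with illustrative file names (flags may vary with your sealed-secrets installation):

```bash
# Encrypt the repository Secret into a SealedSecret that is safe to store in Git.
kubeseal --format yaml < private-repo.yaml > private-repo-sealed.yaml

# Commit private-repo-sealed.yaml to Git; applying it lets the controller
# decrypt it back into the Secret that Argo CD reads.
kubectl apply -n argocd -f private-repo-sealed.yaml
```

Keep in mind the warning below: the `argocd.argoproj.io/secret-type` label has to be re-added via the SealedSecret's template so the unsealed Secret is still recognized by Argo CD.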
!!!warning When using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) the labels will be removed and have to be readded as described here: https://github.com/bitnami-labs/sealed-secrets#sealedsecrets-as-templates-for-secrets Example for HTTPS: ```yaml apiVersion: v1 kind: Secret metadata: name: private-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://github.com/argoproj/private-repo password: my-password username: my-username project: my-project ``` Example for SSH: ```yaml apiVersion: v1 kind: Secret metadata: name: private-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: [email protected]:argoproj/my-private-repository.git sshPrivateKey: | -----BEGIN OPENSSH PRIVATE KEY----- ... -----END OPENSSH PRIVATE KEY----- ``` Example for GitHub App: ```yaml apiVersion: v1 kind: Secret metadata: name: github-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://github.com/argoproj/my-private-repository githubAppID: 1 githubAppInstallationID: 2 githubAppPrivateKey: | -----BEGIN OPENSSH PRIVATE KEY----- ... -----END OPENSSH PRIVATE KEY----- --- apiVersion: v1 kind: Secret metadata: name: github-enterprise-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://ghe.example.com/argoproj/my-private-repository githubAppID: 1 githubAppInstallationID: 2 githubAppEnterpriseBaseUrl: https://ghe.example.com/api/v3 githubAppPrivateKey: | -----BEGIN OPENSSH PRIVATE KEY----- ... -----END OPENSSH PRIVATE KEY----- ``` Example for Google Cloud Source repositories: ```yaml kind: Secret metadata: name: github-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://source.developers.google.com/p/my-google-project/r/my-repo gcpServiceAccountKey: | { "type": "service_account", "project_id": "my-google-project", "private_key_id": "REDACTED", "private_key": "-----BEGIN PRIVATE KEY-----\nREDACTED\n-----END PRIVATE KEY-----\n", "client_email": "[email protected]", "client_id": "REDACTED", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/argocd-service-account%40my-google-project.iam.gserviceaccount.com" } ``` !!! tip The Kubernetes documentation has [instructions for creating a secret containing a private key](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys). ### Repository Credentials If you want to use the same credentials for multiple repositories, you can configure credential templates. Credential templates can carry the same credentials information as repositories. 
```yaml apiVersion: v1 kind: Secret metadata: name: first-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://github.com/argoproj/private-repo --- apiVersion: v1 kind: Secret metadata: name: second-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://github.com/argoproj/other-private-repo --- apiVersion: v1 kind: Secret metadata: name: private-repo-creds namespace: argocd labels: argocd.argoproj.io/secret-type: repo-creds stringData: type: git url: https://github.com/argoproj password: my-password username: my-username ``` In the above example, every repository accessed via HTTPS whose URL is prefixed with `https://github.com/argoproj` would use a username stored in the key `username` and a password stored in the key `password` of the secret `private-repo-creds` for connecting to Git. In order for Argo CD to use a credential template for any given repository, the following conditions must be met: * The repository must either not be configured at all, or if configured, must not contain any credential information (i.e. contain none of `sshPrivateKey`, `username`, `password` ) * The URL configured for a credential template (e.g. `https://github.com/argoproj`) must match as prefix for the repository URL (e.g. `https://github.com/argoproj/argocd-example-apps`). !!! note Matching credential template URL prefixes is done on a _best match_ effort, so the longest (best) match will take precedence. The order of definition is not important, as opposed to pre v1.4 configuration. The following keys are valid to refer to credential secrets: #### SSH repositories * `sshPrivateKey` refers to the SSH private key for accessing the repositories #### HTTPS repositories * `username` and `password` refer to the username and/or password for accessing the repositories * `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key `tlsClientCertKey` are stored for accessing the repositories #### GitHub App repositories * `githubAppPrivateKey` refers to the GitHub App private key for accessing the repositories * `githubAppID` refers to the GitHub Application ID for the application you created. * `githubAppInstallationID` refers to the Installation ID of the GitHub app you created and installed. * `githubAppEnterpriseBaseUrl` refers to the base api URL for GitHub Enterprise (e.g. `https://ghe.example.com/api/v3`) * `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key `tlsClientCertKey` are stored for accessing GitHub Enterprise if custom certificates are used. ### Repositories using self-signed TLS certificates (or are signed by custom CA) You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named `argocd-tls-certs-cm`. The data section should contain a map, with the repository server's hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL `https://server.example.com/repos/my-repo`, you should use `server.example.com` as key. The certificate data should be either the server's certificate (in case of self-signed certificate) or the certificate of the CA that was used to sign the server's certificate. You can configure multiple certificates for each server, e.g. 
if you are having a certificate roll-over planned. If there are no dedicated certificates configured for a repository server, the system's default trust store is used for validating the server's repository. This should be good enough for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket as well as most privately hosted sites which use certificates from well-known CAs, including Let's Encrypt certificates. An example ConfigMap object: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-tls-certs-cm namespace: argocd labels: app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: server.example.com: | -----BEGIN CERTIFICATE----- MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq/v1eIEZPCNbOowDQYJKoZIhvcNAQEL BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0 c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5 NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc CNQm6diPREFwcDPFCe/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+/u P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G ZJIRn/OfSz7NzKylfDCat2z3EAutyeT/5oXZoWOmGg/8T7pn/pR588GoYYKRQnp+ YilqCPFX+az09EqqK/iHXnkdZ/Z2fCuU+9M/Zhrnlwlygl3RuVBI6xhm/ZsXtL2E Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J kKC1li/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV/4u kD1n4p/XMc9HYU/was/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7 bEH4Jatp/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86 r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH/ BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx CaRXFbherV1kTnZw4Y9/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii/StENAz2 XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT +TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z/mJDLlk23npecTouLg83TNSn3R6fYQr d/Y9eXuUJ8U7/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9/Ajlv5OtO OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so 6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8 9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W +LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S/qLR6ASWUS4ViWRhbRlNK XWyb96wrUlv+E8I= -----END CERTIFICATE----- ``` !!! note The `argocd-tls-certs-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/tls` in the pods of `argocd-server` and `argocd-repo-server`. It will create files for each data key in the mount path directory, so above example would leave the file `/app/config/tls/server.example.com`, which contains the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration. ### SSH known host public keys If you are configuring repositories to use SSH, Argo CD will need to know their SSH public keys. 
In order for Argo CD to connect via SSH the public key(s) for each repository server must be pre-configured in Argo CD (unlike TLS configuration), otherwise the connections to the repository will fail. You can manage the SSH known hosts data in the `argocd-ssh-known-hosts-cm` ConfigMap. This ConfigMap contains a single entry, `ssh_known_hosts`, with the public keys of the SSH servers as its value. The value can be filled in from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility (which is part of OpenSSH's client package). The basic format is `<server_name> <keytype> <base64-encoded_key>`, one entry per line. Here is an example of running `ssh-keyscan`: ```bash $ for host in bitbucket.org github.com gitlab.com ssh.dev.azure.com vs-ssh.visualstudio.com ; do ssh-keyscan $host 2> /dev/null ; done bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M= github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk= github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg= gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY= gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9 ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H vs-ssh.visualstudio.com ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H ``` Here is an example `ConfigMap` object using the output from `ssh-keyscan` above: ```yaml apiVersion: v1 kind: ConfigMap metadata: labels: app.kubernetes.io/name: argocd-ssh-known-hosts-cm app.kubernetes.io/part-of: argocd name: argocd-ssh-known-hosts-cm data: ssh_known_hosts: | # This file was automatically generated by hack/update-ssh-known-hosts.sh. DO NOT EDIT [ssh.github.com]:443 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg= [ssh.github.com]:443 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl [ssh.github.com]:443 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk= bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE= bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr/6mrui/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL/BvbZ/iRNhItLqNyieoQj/uh/7Iv4uyH/cV/0b4WDSd3DptigWq84lJubb9t/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf/97P5zauIhxcjX+xHv4M= github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg= github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk= gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY= gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf gitlab.com ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9 ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H ``` !!! note The `argocd-ssh-known-hosts-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/ssh` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file `ssh_known_hosts` in that directory, which contains the SSH known hosts data used by Argo CD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration. ### Configure repositories with proxy Proxy for your repository can be specified in the `proxy` field of the repository secret, along with a corresponding `noProxy` config. Argo CD uses this proxy/noProxy config to access the repository and do related helm/kustomize operations. Argo CD looks for the standard proxy environment variables in the repository server if the custom proxy config is absent. An example repository with proxy and noProxy: ```yaml apiVersion: v1 kind: Secret metadata: name: private-repo namespace: argocd labels: argocd.argoproj.io/secret-type: repository stringData: type: git url: https://github.com/argoproj/private-repo proxy: https://proxy-server-url:8888 noProxy: ".internal.example.com,company.org,10.123.0.0/16" password: my-password username: my-username ``` A note on noProxy: Argo CD uses exec to interact with different tools such as helm and kustomize. Not all of these tools support the same noProxy syntax as the [httpproxy go package](https://cs.opensource.google/go/x/net/+/internal-branch.go1.21-vendor:http/httpproxy/proxy.go;l=38-50) does. In case you run in trouble with noProxy not beeing respected you might want to try using the full domain instead of a wildcard pattern or IP range to find a common syntax that all tools support. ### Legacy behaviour In Argo CD version 2.0 and earlier, repositories were stored as part of the `argocd-cm` config map. For backward-compatibility, Argo CD will still honor repositories in the config map, but this style of repository configuration is deprecated and support for it will be removed in a future version. 
```yaml apiVersion: v1 kind: ConfigMap data: repositories: | - url: https://github.com/argoproj/my-private-repository passwordSecret: name: my-secret key: password usernameSecret: name: my-secret key: username repository.credentials: | - url: https://github.com/argoproj passwordSecret: name: my-secret key: password usernameSecret: name: my-secret key: username --- apiVersion: v1 kind: Secret metadata: name: my-secret namespace: argocd stringData: password: my-password username: my-username ``` ## Clusters Cluster credentials are stored in secrets same as repositories or repository credentials. Each secret must have label `argocd.argoproj.io/secret-type: cluster`. The secret data must include following fields: * `name` - cluster name * `server` - cluster api server url * `namespaces` - optional comma-separated list of namespaces which are accessible in that cluster. Cluster level resources would be ignored if namespace list is not empty. * `clusterResources` - optional boolean string (`"true"` or `"false"`) determining whether Argo CD can manage cluster-level resources on this cluster. This setting is used only if the list of managed namespaces is not empty. * `project` - optional string to designate this as a project-scoped cluster. * `config` - JSON representation of following data structure: ```yaml # Basic authentication settings username: string password: string # Bearer authentication settings bearerToken: string # IAM authentication configuration awsAuthConfig: clusterName: string roleARN: string profile: string # Configure external command to supply client credentials # See https://godoc.org/k8s.io/client-go/tools/clientcmd/api#ExecConfig execProviderConfig: command: string args: [ string ] env: { key: value } apiVersion: string installHint: string # Proxy URL for the kubernetes client to use when connecting to the cluster api server proxyUrl: string # Transport layer security configuration settings tlsClientConfig: # Base64 encoded PEM-encoded bytes (typically read from a client certificate file). caData: string # Base64 encoded PEM-encoded bytes (typically read from a client certificate file). certData: string # Server should be accessed without verifying the TLS certificate insecure: boolean # Base64 encoded PEM-encoded bytes (typically read from a client certificate key file). keyData: string # ServerName is passed to the server for SNI and is used in the client to check server # certificates against. If ServerName is empty, the hostname used to contact the # server is used. serverName: string # Disable automatic compression for requests to the cluster disableCompression: boolean ``` Note that if you specify a command to run under `execProviderConfig`, that command must be available in the Argo CD image. See [BYOI (Build Your Own Image)](custom_tools.md#byoi-build-your-own-image). 
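For the simple `bearerToken` flavour of the config shown above, the token and CA bundle are typically taken from a service account on the target cluster. One way to produce them is sketched below; the `argocd-manager` name and the broad `cluster-admin` binding are purely illustrative (this roughly mirrors what `argocd cluster add` sets up), so scope the permissions to what Argo CD actually needs:

```bash
# Run against the cluster being added (not the Argo CD cluster).
kubectl -n kube-system create serviceaccount argocd-manager
kubectl create clusterrolebinding argocd-manager \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:argocd-manager

# Value for the "bearerToken" field (Kubernetes v1.24+).
# Note: the API server may cap the requested duration.
kubectl -n kube-system create token argocd-manager --duration=8760h

# Base64-encoded CA bundle for the "caData" field.
kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
```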
Cluster secret example: ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: mycluster.example.com server: https://mycluster.example.com config: | { "bearerToken": "<authentication token>", "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` ### EKS EKS cluster secret example using argocd-k8s-auth and [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html): ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: "eks-cluster-name-for-argo" server: "https://xxxyyyzzz.xyz.some-region.eks.amazonaws.com" config: | { "awsAuthConfig": { "clusterName": "my-eks-cluster-name", "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>" }, "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` This setup requires: 1. [IRSA enabled](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) on your Argo CD EKS cluster 2. An IAM role ("management role") for your Argo CD EKS cluster that has an appropriate trust policy and permission policies (see below) 3. A role created for each cluster being added to Argo CD that is assumable by the Argo CD management role 4. An [Access Entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) within each EKS cluster added to Argo CD that gives the cluster's role (from point 3) RBAC permissions to perform actions within the cluster - Or, alternatively, an entry within the `aws-auth` ConfigMap within the cluster added to Argo CD ([depreciated by EKS](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html)) #### Argo CD Management Role The role created for Argo CD (the "management role") will need to have a trust policy suitable for assumption by certain Argo CD Service Accounts *and by itself*. The service accounts that need to assume this role are: - `argocd-application-controller`, - `argocd-applicationset-controller` - `argocd-server` If we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` for this purpose, the following is an example trust policy suitable for this need. Ensure that the Argo CD cluster has an [IAM OIDC provider configured](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html). 
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExplicitSelfRoleAssumption",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
        }
      }
    },
    {
      "Sid": "ServiceAccountRoleAssumption",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
            "system:serviceaccount:argocd:argocd-application-controller",
            "system:serviceaccount:argocd:argocd-applicationset-controller",
            "system:serviceaccount:argocd:argocd-server"
          ],
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

#### Argo CD Service Accounts

The 3 service accounts need to be modified to include an annotation with the Argo CD management role ARN.

Here are example service account configurations for `argocd-application-controller`, `argocd-applicationset-controller`, and `argocd-server`.

!!! warning
    Once the annotations have been set on the service accounts, the application controller and server pods need to be restarted.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-applicationset-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-server
```

#### IAM Permission Policy

The Argo CD management role (`arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` in our example) additionally needs to be allowed to assume a role for each cluster added to Argo CD.

If we create a role named `<IAM_CLUSTER_ROLE>` for an EKS cluster we are adding to Argo CD, we would update the permission policy of the Argo CD management role to include the following:

```json
{
  "Version" : "2012-10-17",
  "Statement" : {
    "Effect" : "Allow",
    "Action" : "sts:AssumeRole",
    "Resource" : [
      "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
    ]
  }
}
```

This allows the Argo CD management role to assume the cluster role. You can add permissions like above to the Argo CD management role for each cluster being managed by Argo CD (assuming you create a new role per cluster).

#### Cluster Role Trust Policies

As stated, each EKS cluster being added to Argo CD should have its own corresponding role. This role should not have any permission policies. Instead, it will be used to authenticate against the EKS cluster's API. The Argo CD management role assumes this role, and calls the AWS API to get an auth token via argocd-k8s-auth. That token is used when connecting to the added cluster's API endpoint.

If we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>` for a cluster being added to Argo CD, we should set its trust policy to give the Argo CD management role permission to assume it.
Note that we're granting the Argo CD management role permission to assume this role above, but we also need to permit that action via the cluster role's trust policy. A suitable trust policy allowing the `IAM_CLUSTER_ROLE` to be assumed by the `ARGO_CD_MANAGEMENT_IAM_ROLE_NAME` role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

#### Access Entries

Each cluster's role (e.g. `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>`) has no permission policy. Instead, we associate that role with an EKS permission policy, which grants that role the ability to generate authentication tokens to the cluster's API. This EKS permission policy decides what RBAC permissions are granted in that process.

An [access entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) (and the policy associated to the role) can be created using the following commands:

```bash
# For each cluster being added to Argo CD
aws eks create-access-entry \
  --cluster-name my-eks-cluster-name \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE> \
  --type STANDARD \
  --kubernetes-groups [] # No groups needed

aws eks associate-access-policy \
  --cluster-name my-eks-cluster-name \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>
```

The above role is granted cluster admin permissions via `AmazonEKSClusterAdminPolicy`. The Argo CD management role that assumes this role is therefore granted the same cluster admin permissions when it generates an API token while adding the associated EKS cluster.

**AWS Auth (Deprecated)**

Instead of using Access Entries, you may need to use the deprecated `aws-auth`. If so, the `roleARN` of each managed cluster needs to be added to each respective cluster's `aws-auth` config map (see [Enabling IAM principal access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)), as well as having an assume role policy which allows it to be assumed by the Argo CD pod role.

An example assume role policy for a cluster which is managed by Argo CD:

```json
{
  "Version" : "2012-10-17",
  "Statement" : {
    "Effect" : "Allow",
    "Action" : "sts:AssumeRole",
    "Principal" : {
      "AWS" : "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
    }
  }
}
```

Example kube-system/aws-auth configmap for your cluster managed by Argo CD:

```yaml
apiVersion: v1
data:
  # Other groups and accounts omitted for brevity. Ensure that no other rolearns and/or groups are inadvertently removed,
  # or you risk borking access to your cluster.
  #
  # The group name is a RoleBinding which you use to map to a [Cluster]Role. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples
  mapRoles: |
    - "groups":
      - "<GROUP-NAME-IN-K8S-RBAC>"
      "rolearn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
      "username": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
```

Use the role ARN for both `rolearn` and `username`.

#### Alternative EKS Authentication Methods

In some scenarios it may not be possible to use IRSA, such as when the Argo CD cluster is running on a different cloud provider's platform. In this case, there are two options: 1.
Use `execProviderConfig` to call the AWS authentication mechanism which enables the injection of environment variables to supply credentials 2. Leverage the new AWS profile option available in Argo CD release 2.10 Both of these options will require the steps involving IAM and the `aws-auth` config map (defined above) to provide the principal with access to the cluster. ##### Using execProviderConfig with Environment Variables ```yaml --- apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: mycluster server: https://mycluster.example.com namespaces: "my,managed,namespaces" clusterResources: "true" config: | { "execProviderConfig": { "command": "argocd-k8s-auth", "args": ["aws", "--cluster-name", "my-eks-cluster"], "apiVersion": "client.authentication.k8s.io/v1beta1", "env": { "AWS_REGION": "xx-east-1", "AWS_ACCESS_KEY_ID": "", "AWS_SECRET_ACCESS_KEY": "", "AWS_SESSION_TOKEN": "" } }, "tlsClientConfig": { "insecure": false, "caData": "" } } ``` This example assumes that the role being attached to the credentials that have been supplied, if this is not the case the role can be appended to the `args` section like so: ```yaml ... "args": ["aws", "--cluster-name", "my-eks-cluster", "--role-arn", "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"], ... ``` This construct can be used in conjunction with something like the External Secrets Operator to avoid storing the keys in plain text and additionally helps to provide a foundation for key rotation. ##### Using An AWS Profile For Authentication The option to use profiles, added in release 2.10, provides a method for supplying credentials while still using the standard Argo CD EKS cluster declaration with an additional command flag that points to an AWS credentials file: ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: "mycluster.com" server: "https://mycluster.com" config: | { "awsAuthConfig": { "clusterName": "my-eks-cluster-name", "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>", "profile": "/mount/path/to/my-profile-file" }, "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` This will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate to AWS. The profile must be mounted in both the `argocd-server` and `argocd-application-controller` components in order for this to work. For example, the following values can be defined in a Helm-based Argo CD deployment: ```yaml controller: extraVolumes: - name: my-profile-volume secret: secretName: my-aws-profile items: - key: my-profile-file path: my-profile-file extraVolumeMounts: - name: my-profile-mount mountPath: /mount/path/to readOnly: true server: extraVolumes: - name: my-profile-volume secret: secretName: my-aws-profile items: - key: my-profile-file path: my-profile-file extraVolumeMounts: - name: my-profile-mount mountPath: /mount/path/to readOnly: true ``` Where the secret is defined as follows: ```yaml apiVersion: v1 kind: Secret metadata: name: my-aws-profile type: Opaque stringData: my-profile-file: | [default] region = <aws_region> aws_access_key_id = <aws_access_key_id> aws_secret_access_key = <aws_secret_access_key> aws_session_token = <aws_session_token> ``` > ⚠️ Secret mounts are updated on an interval, not real time. 
If rotation is a requirement ensure the token lifetime outlives the mount update interval and the rotation process doesn't immediately invalidate the existing token ### GKE GKE cluster secret example using argocd-k8s-auth and [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity): ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: mycluster.example.com server: https://mycluster.example.com config: | { "execProviderConfig": { "command": "argocd-k8s-auth", "args": ["gcp"], "apiVersion": "client.authentication.k8s.io/v1beta1" }, "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` Note that you must enable Workload Identity on your GKE cluster, create GCP service account with appropriate IAM role and bind it to Kubernetes service account for argocd-application-controller and argocd-server (showing Pod logs on UI). See [Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) and [Authenticating to the Kubernetes API server](https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication). ### AKS Azure cluster secret example using argocd-k8s-auth and [kubelogin](https://github.com/Azure/kubelogin). The option *azure* to the argocd-k8s-auth execProviderConfig encapsulates the *get-token* command for kubelogin. Depending upon which authentication flow is desired (devicecode, spn, ropc, msi, azurecli, workloadidentity), set the environment variable AAD_LOGIN_METHOD with this value. Set other appropriate environment variables depending upon which authentication flow is desired. |Variable Name|Description| |-------------|-----------| |AAD_LOGIN_METHOD|One of devicecode, spn, ropc, msi, azurecli, or workloadidentity| |AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE|AAD client cert in pfx. Used in spn login| |AAD_SERVICE_PRINCIPAL_CLIENT_ID|AAD client application ID| |AAD_SERVICE_PRINCIPAL_CLIENT_SECRET|AAD client application secret| |AAD_USER_PRINCIPAL_NAME|Used in the ropc flow| |AAD_USER_PRINCIPAL_PASSWORD|Used in the ropc flow| |AZURE_TENANT_ID|The AAD tenant ID.| |AZURE_AUTHORITY_HOST|Used in the WorkloadIdentityLogin flow| |AZURE_FEDERATED_TOKEN_FILE|Used in the WorkloadIdentityLogin flow| |AZURE_CLIENT_ID|Used in the WorkloadIdentityLogin flow| In addition to the environment variables above, argocd-k8s-auth accepts two extra environment variables to set the AAD environment, and to set the AAD server application ID. The AAD server application ID will default to 6dae42f8-4368-4678-94ff-3960e28e3630 if not specified. See [here](https://github.com/azure/kubelogin#exec-plugin-format) for details. |Variable Name|Description| |-------------|-----------| |AAD_ENVIRONMENT_NAME|The azure environment to use, default of AzurePublicCloud| |AAD_SERVER_APPLICATION_ID|The optional AAD server application ID, defaults to 6dae42f8-4368-4678-94ff-3960e28e3630| This is an example of using the [federated workload login flow](https://github.com/Azure/kubelogin#azure-workload-federated-identity-non-interactive). The federated token file needs to be mounted as a secret into argoCD, so it can be used in the flow. The location of the token file needs to be set in the environment variable AZURE_FEDERATED_TOKEN_FILE. 
If your AKS cluster utilizes the [Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html) from the Azure Workload Identity project, follow these steps to enable the `argocd-application-controller` and `argocd-server` pods to use the federated identity: 1. **Label the Pods**: Add the `azure.workload.identity/use: "true"` label to the `argocd-application-controller` and `argocd-server` pods. 2. **Create Federated Identity Credential**: Generate an Azure federated identity credential for the `argocd-application-controller` and `argocd-server` service accounts. Refer to the [Federated Identity Credential](https://azure.github.io/azure-workload-identity/docs/topics/federated-identity-credential.html) documentation for detailed instructions. 3. **Add Annotations to Service Account** Add `"azure.workload.identity/client-id": "$CLIENT_ID"` and `"azure.workload.identity/tenant-id": "$TENANT_ID"` annotations to the `argocd-application-controller` and `argocd-server` service accounts using the details from the federated credential. 4. **Set the AZURE_CLIENT_ID**: Update the `AZURE_CLIENT_ID` in the cluster secret to match the client id of the newly created federated identity credential. ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: mycluster.example.com server: https://mycluster.example.com config: | { "execProviderConfig": { "command": "argocd-k8s-auth", "env": { "AAD_ENVIRONMENT_NAME": "AzurePublicCloud", "AZURE_CLIENT_ID": "fill in client id", "AZURE_TENANT_ID": "fill in tenant id", # optional, injected by workload identity mutating admission webhook if enabled "AZURE_FEDERATED_TOKEN_FILE": "/opt/path/to/federated_file.json", # optional, injected by workload identity mutating admission webhook if enabled "AZURE_AUTHORITY_HOST": "https://login.microsoftonline.com/", # optional, injected by workload identity mutating admission webhook if enabled "AAD_LOGIN_METHOD": "workloadidentity" }, "args": ["azure"], "apiVersion": "client.authentication.k8s.io/v1beta1" }, "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` This is an example of using the spn (service principal name) flow. ```yaml apiVersion: v1 kind: Secret metadata: name: mycluster-secret labels: argocd.argoproj.io/secret-type: cluster type: Opaque stringData: name: mycluster.example.com server: https://mycluster.example.com config: | { "execProviderConfig": { "command": "argocd-k8s-auth", "env": { "AAD_ENVIRONMENT_NAME": "AzurePublicCloud", "AAD_SERVICE_PRINCIPAL_CLIENT_SECRET": "fill in your service principal client secret", "AZURE_TENANT_ID": "fill in tenant id", "AAD_SERVICE_PRINCIPAL_CLIENT_ID": "fill in your service principal client id", "AAD_LOGIN_METHOD": "spn" }, "args": ["azure"], "apiVersion": "client.authentication.k8s.io/v1beta1" }, "tlsClientConfig": { "insecure": false, "caData": "<base64 encoded certificate>" } } ``` ## Helm Chart Repositories Non standard Helm Chart repositories have to be registered explicitly. Each repository must have `url`, `type` and `name` fields. For private Helm repos you may need to configure access credentials and HTTPS settings using `username`, `password`, `tlsClientCertData` and `tlsClientCertKey` fields. 
Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: istio.io
  url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
  type: helm
---
apiVersion: v1
kind: Secret
metadata:
  name: argo-helm
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: argo
  url: https://argoproj.github.io/argo-helm
  type: helm
  username: my-username
  password: my-password
  tlsClientCertData: ...
  tlsClientCertKey: ...
```

## Resource Exclusion/Inclusion

Resources can be excluded from discovery and sync so that Argo CD is unaware of them. For example, the apiGroup/kind `events.k8s.io/*`, `metrics.k8s.io/*` and `coordination.k8s.io/Lease` are always excluded. Use cases:

* You have temporary issues and you want to exclude problematic resources.
* There are many resources of a certain kind and they impact Argo CD's performance.
* You want to restrict Argo CD's access to certain kinds of resources, e.g. secrets. See [security.md#cluster-rbac](security.md#cluster-rbac).

To configure this, edit the `argocd-cm` config map:

```shell
kubectl edit configmap argocd-cm -n argocd
```

Add `resource.exclusions`, e.g.:

```yaml
apiVersion: v1
data:
  resource.exclusions: |
    - apiGroups:
      - "*"
      kinds:
      - "*"
      clusters:
      - https://192.168.0.20
kind: ConfigMap
```

The `resource.exclusions` node is a list of objects. Each object can have:

* `apiGroups` A list of globs to match the API group.
* `kinds` A list of kinds to match. Can be `"*"` to match all.
* `clusters` A list of globs to match the cluster.

If all three match, then the resource is ignored.

In addition to exclusions, you can configure the list of included resources using the `resource.inclusions` setting. By default, all resource group/kinds are included. The `resource.inclusions` setting allows customizing the list of included group/kinds:

```yaml
apiVersion: v1
data:
  resource.inclusions: |
    - apiGroups:
      - "*"
      kinds:
      - Deployment
      clusters:
      - https://192.168.0.20
kind: ConfigMap
```

The `resource.inclusions` and `resource.exclusions` settings can be used together. The final list of resources includes the group/kinds specified in `resource.inclusions` minus the group/kinds specified in the `resource.exclusions` setting.

Notes:

* Quote globs in your YAML to avoid parsing errors.
* Invalid globs result in the whole rule being ignored.
* If you add a rule that matches existing resources, these will appear in the interface as `OutOfSync`.

## Mask sensitive Annotations on Secrets

An optional comma-separated list of `metadata.annotations` keys can be configured with `resource.sensitive.mask.annotations` to mask their values in the UI/CLI on Secrets.

```yaml
resource.sensitive.mask.annotations: openshift.io/token-secret.value, api-key
```

## Auto respect RBAC for controller

The Argo CD controller can be restricted from discovering/syncing specific resources using only controller RBAC, without having to manually configure resource exclusions. This feature can be enabled by setting the `resource.respectRBAC` key in the `argocd-cm` ConfigMap; once it is set, the controller will automatically stop watching resources that it does not have permission to list/access. Possible values for `resource.respectRBAC` are:

- `strict` : This setting checks whether the list call made by the controller is forbidden/unauthorized and, if it is, cross-checks the permission by making a `SelfSubjectAccessReview` call for the resource.
- `normal` : This will only check whether the list call response is forbidden/unauthorized and skip the `SelfSubjectAccessReview` call, to minimize extra api-server calls.
- unset/empty (default) : This will disable the feature and the controller will continue to monitor all resources.

Users who are comfortable with an increase in kube api-server calls can opt for the `strict` option, while users who are concerned about the higher number of api calls and are willing to compromise on accuracy can opt for the `normal` option.

Notes:

* When set to `strict` mode, the controller must have RBAC permission to `create` a `SelfSubjectAccessReview` resource.
* The `SelfSubjectAccessReview` request will only be made for the `list` verb; it is assumed that if `list` is allowed for a resource then all other permissions are also available to the controller.

Example `argocd-cm` ConfigMap with `resource.respectRBAC` set to `strict`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.respectRBAC: "strict"
```

## Resource Custom Labels

Custom Labels configured with `resource.customLabels` (comma-separated string) will be displayed in the UI (for any resource that defines them).

## Labels on Application Events

An optional comma-separated list of `metadata.labels` keys can be configured with `resource.includeEventLabelKeys` to add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event and the application, allowing for filtering using labels. In case of conflict between labels on the Application and AppProject, the Application label values are prioritized and added to the event.

```yaml
resource.includeEventLabelKeys: team,env*
```

To exclude certain labels from events, use the `resource.excludeEventLabelKeys` key, which takes a comma-separated list of `metadata.labels` keys.

```yaml
resource.excludeEventLabelKeys: environment,bu
```

Both `resource.includeEventLabelKeys` and `resource.excludeEventLabelKeys` support wildcards.

## SSO & RBAC

* SSO configuration details: [SSO](./user-management/index.md)
* RBAC configuration details: [RBAC](./rbac.md)

## Manage Argo CD Using Argo CD

Argo CD is able to manage itself, since all settings are represented by Kubernetes manifests. The suggested way is to create a [Kustomize](https://github.com/kubernetes-sigs/kustomize)-based application which uses the base Argo CD manifests from [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd/tree/stable/manifests) and applies the required changes on top.

Example of `kustomization.yaml`:

```yaml
# additional resources like ingress rules, cluster and repository secrets.
resources:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
- clusters-secrets.yaml
- repos-secrets.yaml

# changes to config maps
patches:
- path: overlays/argo-cd-cm.yaml
```

The live example of self-managed Argo CD config is available at [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io), with its configuration stored at [argoproj/argoproj-deployments](https://github.com/argoproj/argoproj-deployments/tree/master/argocd).

!!! note
    You will need to sign in using your GitHub account to get access to [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io)
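To round this out, the Kustomize application above can itself be registered declaratively through an Application manifest. The following is a minimal sketch, assuming the `kustomization.yaml` shown above lives at the root of a hypothetical Git repository; adjust `repoURL` and `path` to point at your own configuration repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    # hypothetical repository containing the kustomization.yaml shown above
    repoURL: https://github.com/example-org/argocd-config.git
    targetRevision: HEAD
    path: .
  destination:
    # Argo CD deploys this application into the cluster and namespace it is installed in
    server: https://kubernetes.default.svc
    namespace: argocd
```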
argocd
Declarative Setup Argo CD applications projects and settings can be defined declaratively using Kubernetes manifests These can be updated using kubectl apply without needing to touch the argocd command line tool Quick Reference All resources including Application and AppProject specs have to be installed in the Argo CD namespace by default argocd Atomic configuration Sample File Resource Name Kind Description argocd cm yaml argocd cm yaml md argocd cm ConfigMap General Argo CD configuration argocd repositories yaml argocd repositories yaml md my private repo istio helm repo private helm repo private repo Secrets Sample repository connection details argocd repo creds yaml argocd repo creds yaml md argoproj https creds argoproj ssh creds github creds github enterprise creds Secrets Sample repository credential templates argocd cmd params cm yaml argocd cmd params cm yaml md argocd cmd params cm ConfigMap Argo CD env variables configuration argocd secret yaml argocd secret yaml md argocd secret Secret User Passwords Certificates deprecated Signing Key Dex secrets Webhook secrets argocd rbac cm yaml argocd rbac cm yaml md argocd rbac cm ConfigMap RBAC Configuration argocd tls certs cm yaml argocd tls certs cm yaml md argocd tls certs cm ConfigMap Custom TLS certificates for connecting Git repositories via HTTPS v1 2 and later argocd ssh known hosts cm yaml argocd ssh known hosts cm yaml md argocd ssh known hosts cm ConfigMap SSH known hosts data for connecting Git repositories via SSH v1 2 and later For each specific kind of ConfigMap and Secret resource there is only a single supported resource name as listed in the above table if you need to merge things you need to do it before creating them warning A note about ConfigMap resources Be sure to annotate your ConfigMap resources using the label app kubernetes io part of argocd otherwise Argo CD will not be able to use them Multiple configuration objects Sample File Kind Description application yaml user guide application specification md Application Example application spec project yaml project specification md AppProject Example project spec argocd repositories yaml argocd repositories yaml md Secret Repository credentials For Application and AppProject resources the name of the resource equals the name of the application or project within Argo CD This also means that application and project names are unique within a given Argo CD installation you cannot have the same application name for two different applications Applications The Application CRD is the Kubernetes resource object representing a deployed application instance in an environment It is defined by two key pieces of information source reference to the desired state in Git repository revision path environment destination reference to the target cluster and namespace For the cluster one of server or name can be used but not both which will result in an error Under the hood when the server is missing it is calculated based on the name and used for any operations A minimal Application spec is as follows yaml apiVersion argoproj io v1alpha1 kind Application metadata name guestbook namespace argocd spec project default source repoURL https github com argoproj argocd example apps git targetRevision HEAD path guestbook destination server https kubernetes default svc namespace guestbook See application yaml application yaml for additional fields As long as you have completed the first step of Getting Started getting started md 1 install argo cd you can apply this with kubectl apply n argocd 
f application yaml and Argo CD will start deploying the guestbook application note The namespace must match the namespace of your Argo CD instance typically this is argocd note When creating an application from a Helm repository the chart attribute must be specified instead of the path attribute within spec source yaml spec source repoURL https argoproj github io argo helm chart argo warning Without the resources finalizer argocd argoproj io finalizer deleting an application will not delete the resources it manages To perform a cascading delete you must add the finalizer See App Deletion user guide app deletion md about the deletion finalizer yaml metadata finalizers resources finalizer argocd argoproj io App of Apps You can create an app that creates other apps which in turn can create other apps This allows you to declaratively manage a group of apps that can be deployed and configured in concert See cluster bootstrapping cluster bootstrapping md Projects The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications It is defined by the following key pieces of information sourceRepos reference to the repositories that applications within the project can pull manifests from destinations reference to clusters and namespaces that applications within the project can deploy into roles list of entities with definitions of their access to resources within the project warning Projects which can deploy to the Argo CD namespace grant admin access If a Project s destinations configuration allows deploying to the namespace in which Argo CD is installed then Applications under that project have admin level access RBAC access https argo cd readthedocs io en stable operator manual rbac to admin level Projects should be carefully restricted and push access to allowed sourceRepos should be limited to only admins An example spec is as follows yaml apiVersion argoproj io v1alpha1 kind AppProject metadata name my project namespace argocd Finalizer that ensures that project is not deleted until it is not referenced by any application finalizers resources finalizer argocd argoproj io spec description Example Project Allow manifests to deploy from any Git repos sourceRepos Only permit applications to deploy to the guestbook namespace in the same cluster destinations namespace guestbook server https kubernetes default svc Deny all cluster scoped resources from being created except for Namespace clusterResourceWhitelist group kind Namespace Allow all namespaced scoped resources to be created except for ResourceQuota LimitRange NetworkPolicy namespaceResourceBlacklist group kind ResourceQuota group kind LimitRange group kind NetworkPolicy Deny all namespaced scoped resources from being created except for Deployment and StatefulSet namespaceResourceWhitelist group apps kind Deployment group apps kind StatefulSet roles A role which provides read only access to all applications in the project name read only description Read only privileges to my project policies p proj my project read only applications get my project allow groups my oidc group A role which provides sync privileges to only the guestbook dev application e g to provide sync privileges to a CI system name ci role description Sync privileges for guestbook dev policies p proj my project ci role applications sync my project guestbook dev allow NOTE JWT tokens can only be generated by the API server and the token is not persisted anywhere by Argo CD It can be prematurely revoked by removing the entry from this list 
jwtTokens iat 1535390316 Repositories note Some Git hosters notably GitLab and possibly on premise GitLab instances as well require you to specify the git suffix in the repository URL otherwise they will send a HTTP 301 redirect to the repository URL suffixed with git Argo CD will not follow these redirects so you have to adjust your repository URL to be suffixed with git Repository details are stored in secrets To configure a repo create a secret which contains repository details Consider using bitnami labs sealed secrets https github com bitnami labs sealed secrets to store an encrypted secret definition as a Kubernetes manifest Each repository must have a url field and depending on whether you connect using HTTPS SSH or GitHub App username and password for HTTPS sshPrivateKey for SSH or githubAppPrivateKey for GitHub App Credentials can be scoped to a project using the optional project field When omitted the credential will be used as the default for all projects without a scoped credential warning When using bitnami labs sealed secrets https github com bitnami labs sealed secrets the labels will be removed and have to be readded as described here https github com bitnami labs sealed secrets sealedsecrets as templates for secrets Example for HTTPS yaml apiVersion v1 kind Secret metadata name private repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https github com argoproj private repo password my password username my username project my project Example for SSH yaml apiVersion v1 kind Secret metadata name private repo namespace argocd labels argocd argoproj io secret type repository stringData type git url git github com argoproj my private repository git sshPrivateKey BEGIN OPENSSH PRIVATE KEY END OPENSSH PRIVATE KEY Example for GitHub App yaml apiVersion v1 kind Secret metadata name github repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https github com argoproj my private repository githubAppID 1 githubAppInstallationID 2 githubAppPrivateKey BEGIN OPENSSH PRIVATE KEY END OPENSSH PRIVATE KEY apiVersion v1 kind Secret metadata name github enterprise repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https ghe example com argoproj my private repository githubAppID 1 githubAppInstallationID 2 githubAppEnterpriseBaseUrl https ghe example com api v3 githubAppPrivateKey BEGIN OPENSSH PRIVATE KEY END OPENSSH PRIVATE KEY Example for Google Cloud Source repositories yaml kind Secret metadata name github repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https source developers google com p my google project r my repo gcpServiceAccountKey type service account project id my google project private key id REDACTED private key BEGIN PRIVATE KEY nREDACTED n END PRIVATE KEY n client email argocd service account my google project iam gserviceaccount com client id REDACTED auth uri https accounts google com o oauth2 auth token uri https oauth2 googleapis com token auth provider x509 cert url https www googleapis com oauth2 v1 certs client x509 cert url https www googleapis com robot v1 metadata x509 argocd service account 40my google project iam gserviceaccount com tip The Kubernetes documentation has instructions for creating a secret containing a private key https kubernetes io docs concepts configuration secret use case pod with ssh keys Repository Credentials If you want to use the same credentials for multiple 
repositories you can configure credential templates Credential templates can carry the same credentials information as repositories yaml apiVersion v1 kind Secret metadata name first repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https github com argoproj private repo apiVersion v1 kind Secret metadata name second repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https github com argoproj other private repo apiVersion v1 kind Secret metadata name private repo creds namespace argocd labels argocd argoproj io secret type repo creds stringData type git url https github com argoproj password my password username my username In the above example every repository accessed via HTTPS whose URL is prefixed with https github com argoproj would use a username stored in the key username and a password stored in the key password of the secret private repo creds for connecting to Git In order for Argo CD to use a credential template for any given repository the following conditions must be met The repository must either not be configured at all or if configured must not contain any credential information i e contain none of sshPrivateKey username password The URL configured for a credential template e g https github com argoproj must match as prefix for the repository URL e g https github com argoproj argocd example apps note Matching credential template URL prefixes is done on a best match effort so the longest best match will take precedence The order of definition is not important as opposed to pre v1 4 configuration The following keys are valid to refer to credential secrets SSH repositories sshPrivateKey refers to the SSH private key for accessing the repositories HTTPS repositories username and password refer to the username and or password for accessing the repositories tlsClientCertData and tlsClientCertKey refer to secrets where a TLS client certificate tlsClientCertData and the corresponding private key tlsClientCertKey are stored for accessing the repositories GitHub App repositories githubAppPrivateKey refers to the GitHub App private key for accessing the repositories githubAppID refers to the GitHub Application ID for the application you created githubAppInstallationID refers to the Installation ID of the GitHub app you created and installed githubAppEnterpriseBaseUrl refers to the base api URL for GitHub Enterprise e g https ghe example com api v3 tlsClientCertData and tlsClientCertKey refer to secrets where a TLS client certificate tlsClientCertData and the corresponding private key tlsClientCertKey are stored for accessing GitHub Enterprise if custom certificates are used Repositories using self signed TLS certificates or are signed by custom CA You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named argocd tls certs cm The data section should contain a map with the repository server s hostname part not the complete URL as key and the certificate s in PEM format as data So if you connect to a repository with the URL https server example com repos my repo you should use server example com as key The certificate data should be either the server s certificate in case of self signed certificate or the certificate of the CA that was used to sign the server s certificate You can configure multiple certificates for each server e g if you are having a certificate roll over planned If there are no dedicated certificates configured for a 
repository server the system s default trust store is used for validating the server s repository This should be good enough for most if not all public Git repository services such as GitLab GitHub and Bitbucket as well as most privately hosted sites which use certificates from well known CAs including Let s Encrypt certificates An example ConfigMap object yaml apiVersion v1 kind ConfigMap metadata name argocd tls certs cm namespace argocd labels app kubernetes io name argocd cm app kubernetes io part of argocd data server example com BEGIN CERTIFICATE MIIF1zCCA7 gAwIBAgIUQdTcSHY2Sxd3Tq v1eIEZPCNbOowDQYJKoZIhvcNAQEL BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0 c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5 NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts WSAFA6oANvbzlu3JrEQ2CHPc CNQm6diPREFwcDPFCe eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR jtvx u P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G ZJIRn OfSz7NzKylfDCat2z3EAutyeT 5oXZoWOmGg 8T7pn pR588GoYYKRQnp YilqCPFX az09EqqK iHXnkdZ Z2fCuU 9M Zhrnlwlygl3RuVBI6xhm ZsXtL2E Gxa61lNy6pyx5 hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko Ml1L4zCU xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD kMpNm6J kKC1li Iy RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV 4u kD1n4p XMc9HYU was CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO gxhHxxzRWxd24uLyk9Px 9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7 bEH4Jatp 10zkco UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86 r6 x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6 x1ewwDwYDVR0TAQH BAUwAwEB zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy Z0xuHSxhTd0jfn Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ XjgxnkiRACXh7K9hsG2r ageRWGevyvx CaRXFbherV1kTnZw4Y9 pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii StENAz2 XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT TuXwZgqnTOq6piVgr FTSa24qSvaXb5z mJDLlk23npecTouLg83TNSn3R6fYQr d Y9eXuUJ8U7 qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9 Ajlv5OtO OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla v HZcFW2AO9so 6ZdVHHQnIbJa7yQJKZ hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY PYYd7bdRIjN7WNZLFsgplnS8 9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck FTv L7DAmg6l37W LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME r4nXtT1S qLR6ASWUS4ViWRhbRlNK XWyb96wrUlv E8I END CERTIFICATE note The argocd tls certs cm ConfigMap will be mounted as a volume at the mount path app config tls in the pods of argocd server and argocd repo server It will create files for each data key in the mount path directory so above example would leave the file app config tls server example com which contains the certificate data It might take a while for changes in the ConfigMap to be reflected in your pods depending on your Kubernetes configuration SSH known host public keys If you are configuring repositories to use SSH Argo CD will need to know their SSH public keys In order for Argo CD to connect via SSH the public key s for each repository server must be pre configured in Argo CD unlike TLS configuration otherwise the connections to the repository will fail You can manage the SSH known hosts data in the argocd ssh known hosts cm ConfigMap This ConfigMap contains a single entry ssh known hosts with the public keys of the SSH servers as its 
value The value can be filled in from any existing ssh known hosts file or from the output of the ssh keyscan utility which is part of OpenSSH s client package The basic format is server name keytype base64 encoded key one entry per line Here is an example of running ssh keyscan bash for host in bitbucket org github com gitlab com ssh dev azure com vs ssh visualstudio com do ssh keyscan host 2 dev null done bitbucket org ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk aySyboD5QF61I 1WeTwu deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr 6mrui oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc 5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3 30LVlORZkxOh LKL BvbZ iRNhItLqNyieoQj uh 7Iv4uyH cV 0b4WDSd3DptigWq84lJubb9t DnZlrJazxyDCulTmKdOR7vs9gMTo uoIrPSb8ScTtvw65 odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf 97P5zauIhxcjX xHv4M github com ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl github com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6 PKCUXaDbC7qtbW8gIkhL7aGCsOr C56SJMy BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9 hWCqBywINIR 5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL 38TGxkxCflmO 5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk S4dhPeAUC5y bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn EjqoTwvqNj4kqx5QUCI0ThS YkOxJCXmPUWZbhjpCg56i 2aB6CmK2JGhn57K5mj0MNdBXA4 WnwH6XoPWJzK5Nyu2zB3nAZp S5hpQs p1vN1 wsjk github com ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT y6v0mKV0U2w0WZ2YB Tpockg gitlab com ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY gitlab com ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn nOeHHE5UOzRdf gitlab com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ jqCMRgBqB98u3z J1sKlXHWfM9dyhSevkMwSbhoR8XIq U0tCNyokEi ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT ia1NEKjunUqu1xOB StKDHMoX4 OKyIzuS0q T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl siMkPGbO5xR En4iEY6K2XPASUEMaieWVNTRCtJ4S8H 9 ssh dev azure com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0 QUfTTqeu tm22gOsv VrVTMk6vwRU75gY y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3 QpyNLHbWDdzwtrlS ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21 nZcmCTISQBtdcyPaEno7fFQMDD26 s0lfKob4Kw8H vs ssh visualstudio com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0 QUfTTqeu tm22gOsv VrVTMk6vwRU75gY y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3 QpyNLHbWDdzwtrlS ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21 nZcmCTISQBtdcyPaEno7fFQMDD26 s0lfKob4Kw8H Here is an example ConfigMap object using the output from ssh keyscan above yaml apiVersion v1 kind ConfigMap metadata labels app kubernetes io name argocd ssh known hosts cm app kubernetes io part of argocd name argocd ssh known hosts cm data ssh known hosts This file was automatically 
generated by hack update ssh known hosts sh DO NOT EDIT ssh github com 443 ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT y6v0mKV0U2w0WZ2YB Tpockg ssh github com 443 ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl ssh github com 443 ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6 PKCUXaDbC7qtbW8gIkhL7aGCsOr C56SJMy BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9 hWCqBywINIR 5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL 38TGxkxCflmO 5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk S4dhPeAUC5y bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn EjqoTwvqNj4kqx5QUCI0ThS YkOxJCXmPUWZbhjpCg56i 2aB6CmK2JGhn57K5mj0MNdBXA4 WnwH6XoPWJzK5Nyu2zB3nAZp S5hpQs p1vN1 wsjk bitbucket org ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4 a2sjSSpBK0iqitSQ 5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE bitbucket org ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu UUO bitbucket org ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk aySyboD5QF61I 1WeTwu deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr 6mrui oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc 5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3 30LVlORZkxOh LKL BvbZ iRNhItLqNyieoQj uh 7Iv4uyH cV 0b4WDSd3DptigWq84lJubb9t DnZlrJazxyDCulTmKdOR7vs9gMTo uoIrPSb8ScTtvw65 odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf 97P5zauIhxcjX xHv4M github com ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT y6v0mKV0U2w0WZ2YB Tpockg github com ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl github com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6 PKCUXaDbC7qtbW8gIkhL7aGCsOr C56SJMy BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9 hWCqBywINIR 5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL 38TGxkxCflmO 5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk S4dhPeAUC5y bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn EjqoTwvqNj4kqx5QUCI0ThS YkOxJCXmPUWZbhjpCg56i 2aB6CmK2JGhn57K5mj0MNdBXA4 WnwH6XoPWJzK5Nyu2zB3nAZp S5hpQs p1vN1 wsjk gitlab com ecdsa sha2 nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY gitlab com ssh ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn nOeHHE5UOzRdf gitlab com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ jqCMRgBqB98u3z J1sKlXHWfM9dyhSevkMwSbhoR8XIq U0tCNyokEi ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT ia1NEKjunUqu1xOB StKDHMoX4 OKyIzuS0q T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl siMkPGbO5xR En4iEY6K2XPASUEMaieWVNTRCtJ4S8H 9 ssh dev azure com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0 QUfTTqeu tm22gOsv VrVTMk6vwRU75gY y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3 QpyNLHbWDdzwtrlS 
ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21 nZcmCTISQBtdcyPaEno7fFQMDD26 s0lfKob4Kw8H vs ssh visualstudio com ssh rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0 QUfTTqeu tm22gOsv VrVTMk6vwRU75gY y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3 QpyNLHbWDdzwtrlS ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21 nZcmCTISQBtdcyPaEno7fFQMDD26 s0lfKob4Kw8H note The argocd ssh known hosts cm ConfigMap will be mounted as a volume at the mount path app config ssh in the pods of argocd server and argocd repo server It will create a file ssh known hosts in that directory which contains the SSH known hosts data used by Argo CD for connecting to Git repositories via SSH It might take a while for changes in the ConfigMap to be reflected in your pods depending on your Kubernetes configuration Configure repositories with proxy Proxy for your repository can be specified in the proxy field of the repository secret along with a corresponding noProxy config Argo CD uses this proxy noProxy config to access the repository and do related helm kustomize operations Argo CD looks for the standard proxy environment variables in the repository server if the custom proxy config is absent An example repository with proxy and noProxy yaml apiVersion v1 kind Secret metadata name private repo namespace argocd labels argocd argoproj io secret type repository stringData type git url https github com argoproj private repo proxy https proxy server url 8888 noProxy internal example com company org 10 123 0 0 16 password my password username my username A note on noProxy Argo CD uses exec to interact with different tools such as helm and kustomize Not all of these tools support the same noProxy syntax as the httpproxy go package https cs opensource google go x net internal branch go1 21 vendor http httpproxy proxy go l 38 50 does In case you run in trouble with noProxy not beeing respected you might want to try using the full domain instead of a wildcard pattern or IP range to find a common syntax that all tools support Legacy behaviour In Argo CD version 2 0 and earlier repositories were stored as part of the argocd cm config map For backward compatibility Argo CD will still honor repositories in the config map but this style of repository configuration is deprecated and support for it will be removed in a future version yaml apiVersion v1 kind ConfigMap data repositories url https github com argoproj my private repository passwordSecret name my secret key password usernameSecret name my secret key username repository credentials url https github com argoproj passwordSecret name my secret key password usernameSecret name my secret key username apiVersion v1 kind Secret metadata name my secret namespace argocd stringData password my password username my username Clusters Cluster credentials are stored in secrets same as repositories or repository credentials Each secret must have label argocd argoproj io secret type cluster The secret data must include following fields name cluster name server cluster api server url namespaces optional comma separated list of namespaces which are accessible in that cluster Cluster level resources would be ignored if namespace list is not empty clusterResources optional boolean string true or false determining whether Argo CD can manage cluster level resources on this cluster This setting is used only if the list of managed namespaces is not 
empty project optional string to designate this as a project scoped cluster config JSON representation of following data structure yaml Basic authentication settings username string password string Bearer authentication settings bearerToken string IAM authentication configuration awsAuthConfig clusterName string roleARN string profile string Configure external command to supply client credentials See https godoc org k8s io client go tools clientcmd api ExecConfig execProviderConfig command string args string env key value apiVersion string installHint string Proxy URL for the kubernetes client to use when connecting to the cluster api server proxyUrl string Transport layer security configuration settings tlsClientConfig Base64 encoded PEM encoded bytes typically read from a client certificate file caData string Base64 encoded PEM encoded bytes typically read from a client certificate file certData string Server should be accessed without verifying the TLS certificate insecure boolean Base64 encoded PEM encoded bytes typically read from a client certificate key file keyData string ServerName is passed to the server for SNI and is used in the client to check server certificates against If ServerName is empty the hostname used to contact the server is used serverName string Disable automatic compression for requests to the cluster disableCompression boolean Note that if you specify a command to run under execProviderConfig that command must be available in the Argo CD image See BYOI Build Your Own Image custom tools md byoi build your own image Cluster secret example yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster example com server https mycluster example com config bearerToken authentication token tlsClientConfig insecure false caData base64 encoded certificate EKS EKS cluster secret example using argocd k8s auth and IRSA https docs aws amazon com eks latest userguide iam roles for service accounts html yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name eks cluster name for argo server https xxxyyyzzz xyz some region eks amazonaws com config awsAuthConfig clusterName my eks cluster name roleARN arn aws iam AWS ACCOUNT ID role IAM ROLE NAME tlsClientConfig insecure false caData base64 encoded certificate This setup requires 1 IRSA enabled https docs aws amazon com eks latest userguide enable iam roles for service accounts html on your Argo CD EKS cluster 2 An IAM role management role for your Argo CD EKS cluster that has an appropriate trust policy and permission policies see below 3 A role created for each cluster being added to Argo CD that is assumable by the Argo CD management role 4 An Access Entry https docs aws amazon com eks latest userguide access entries html within each EKS cluster added to Argo CD that gives the cluster s role from point 3 RBAC permissions to perform actions within the cluster Or alternatively an entry within the aws auth ConfigMap within the cluster added to Argo CD depreciated by EKS https docs aws amazon com eks latest userguide auth configmap html Argo CD Management Role The role created for Argo CD the management role will need to have a trust policy suitable for assumption by certain Argo CD Service Accounts and by itself The service accounts that need to assume this role are argocd application controller argocd applicationset controller argocd server If we create role arn aws 
iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME for this purpose the following is an example trust policy suitable for this need Ensure that the Argo CD cluster has an IAM OIDC provider configured https docs aws amazon com eks latest userguide enable iam roles for service accounts html json Version 2012 10 17 Statement Sid ExplicitSelfRoleAssumption Effect Allow Principal AWS Action sts AssumeRole Condition ArnLike aws PrincipalArn arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME Sid ServiceAccountRoleAssumption Effect Allow Principal Federated arn aws iam AWS ACCOUNT ID oidc provider oidc eks AWS REGION amazonaws com id EXAMPLED539D4633E53DE1B71EXAMPLE Action sts AssumeRoleWithWebIdentity Condition StringEquals oidc eks AWS REGION amazonaws com id EXAMPLED539D4633E53DE1B71EXAMPLE sub system serviceaccount argocd argocd application controller system serviceaccount argocd argocd applicationset controller system serviceaccount argocd argocd server oidc eks AWS REGION amazonaws com id EXAMPLED539D4633E53DE1B71EXAMPLE aud sts amazonaws com Argo CD Service Accounts The 3 service accounts need to be modified to include an annotation with the Argo CD management role ARN Here s an example service account configurations for argocd application controller argocd applicationset controller and argocd server warning Once the annotations has been set on the service accounts the application controller and server pods need to be restarted yaml apiVersion v1 kind ServiceAccount metadata annotations eks amazonaws com role arn arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME name argocd application controller apiVersion v1 kind ServiceAccount metadata annotations eks amazonaws com role arn arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME name argocd applicationset controller apiVersion v1 kind ServiceAccount metadata annotations eks amazonaws com role arn arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME name argocd server IAM Permission Policy The Argo CD management role arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME in our example additionally needs to be allowed to assume a role for each cluster added to Argo CD If we create a role named IAM CLUSTER ROLE for an EKS cluster we are adding to Argo CD we would update the permission policy of the Argo CD management role to include the following json Version 2012 10 17 Statement Effect Allow Action sts AssumeRole Resource arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE This allows the Argo CD management role to assume the cluster role You can add permissions like above to the Argo CD management role for each cluster being managed by Argo CD assuming you create a new role per cluster Cluster Role Trust Policies As stated each EKS cluster being added to Argo CD should have its own corresponding role This role should not have any permission policies Instead it will be used to authenticate against the EKS cluster s API The Argo CD management role assumes this role and calls the AWS API to get an auth token via argocd k8s auth That token is used when connecting to the added cluster s API endpoint If we create role arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE for a cluster being added to Argo CD we should set its trust policy to give the Argo CD management role permission to assume it Note that we re granting the Argo CD management role permission to assume this role above but we also need to permit that action via the cluster role s trust policy A suitable trust policy 
allowing the IAM CLUSTER ROLE to be assumed by the ARGO CD MANAGEMENT IAM ROLE NAME role looks like this json Version 2012 10 17 Statement Effect Allow Principal AWS arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME Action sts AssumeRole Access Entries Each cluster s role e g arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE has no permission policy Instead we associate that role with an EKS permission policy which grants that role the ability to generate authentication tokens to the cluster s API This EKS permission policy decides what RBAC permissions are granted in that process An access entry https docs aws amazon com eks latest userguide access entries html and the policy associated to the role can be created using the following commands bash For each cluster being added to Argo CD aws eks create access entry cluster name my eks cluster name principal arn arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE type STANDARD kubernetes groups No groups needed aws eks associate access policy cluster name my eks cluster name policy arn arn aws eks aws cluster access policy AmazonEKSClusterAdminPolicy access scope type cluster principal arn arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE The above role is granted cluster admin permissions via AmazonEKSClusterAdminPolicy The Argo CD management role that assume this role is therefore granted the same cluster admin permissions when it generates an API token when adding the associated EKS cluster AWS Auth Depreciated Instead of using Access Entries you may need to use the depreciated aws auth If so the roleARN of each managed cluster needs to be added to each respective cluster s aws auth config map see Enabling IAM principal access to your cluster https docs aws amazon com eks latest userguide add user role html as well as having an assume role policy which allows it to be assumed by the Argo CD pod role An example assume role policy for a cluster which is managed by Argo CD json Version 2012 10 17 Statement Effect Allow Action sts AssumeRole Principal AWS arn aws iam AWS ACCOUNT ID role ARGO CD MANAGEMENT IAM ROLE NAME Example kube system aws auth configmap for your cluster managed by Argo CD yaml apiVersion v1 data Other groups and accounts omitted for brevity Ensure that no other rolearns and or groups are inadvertently removed or you risk borking access to your cluster The group name is a RoleBinding which you use to map to a Cluster Role See https kubernetes io docs reference access authn authz rbac role binding examples mapRoles groups GROUP NAME IN K8S RBAC rolearn arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE username arn aws iam AWS ACCOUNT ID role IAM CLUSTER ROLE Use the role ARN for both rolearn and username Alternative EKS Authentication Methods In some scenarios it may not be possible to use IRSA such as when the Argo CD cluster is running on a different cloud provider s platform In this case there are two options 1 Use execProviderConfig to call the AWS authentication mechanism which enables the injection of environment variables to supply credentials 2 Leverage the new AWS profile option available in Argo CD release 2 10 Both of these options will require the steps involving IAM and the aws auth config map defined above to provide the principal with access to the cluster Using execProviderConfig with Environment Variables yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster server https mycluster example com namespaces my 
managed namespaces clusterResources true config execProviderConfig command argocd k8s auth args aws cluster name my eks cluster apiVersion client authentication k8s io v1beta1 env AWS REGION xx east 1 AWS ACCESS KEY ID AWS SECRET ACCESS KEY AWS SESSION TOKEN tlsClientConfig insecure false caData This example assumes that the role being attached to the credentials that have been supplied if this is not the case the role can be appended to the args section like so yaml args aws cluster name my eks cluster role arn arn aws iam AWS ACCOUNT ID role IAM ROLE NAME This construct can be used in conjunction with something like the External Secrets Operator to avoid storing the keys in plain text and additionally helps to provide a foundation for key rotation Using An AWS Profile For Authentication The option to use profiles added in release 2 10 provides a method for supplying credentials while still using the standard Argo CD EKS cluster declaration with an additional command flag that points to an AWS credentials file yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster com server https mycluster com config awsAuthConfig clusterName my eks cluster name roleARN arn aws iam AWS ACCOUNT ID role IAM ROLE NAME profile mount path to my profile file tlsClientConfig insecure false caData base64 encoded certificate This will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate to AWS The profile must be mounted in both the argocd server and argocd application controller components in order for this to work For example the following values can be defined in a Helm based Argo CD deployment yaml controller extraVolumes name my profile volume secret secretName my aws profile items key my profile file path my profile file extraVolumeMounts name my profile mount mountPath mount path to readOnly true server extraVolumes name my profile volume secret secretName my aws profile items key my profile file path my profile file extraVolumeMounts name my profile mount mountPath mount path to readOnly true Where the secret is defined as follows yaml apiVersion v1 kind Secret metadata name my aws profile type Opaque stringData my profile file default region aws region aws access key id aws access key id aws secret access key aws secret access key aws session token aws session token Secret mounts are updated on an interval not real time If rotation is a requirement ensure the token lifetime outlives the mount update interval and the rotation process doesn t immediately invalidate the existing token GKE GKE cluster secret example using argocd k8s auth and Workload Identity https cloud google com kubernetes engine docs how to workload identity yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster example com server https mycluster example com config execProviderConfig command argocd k8s auth args gcp apiVersion client authentication k8s io v1beta1 tlsClientConfig insecure false caData base64 encoded certificate Note that you must enable Workload Identity on your GKE cluster create GCP service account with appropriate IAM role and bind it to Kubernetes service account for argocd application controller and argocd server showing Pod logs on UI See Use Workload Identity https cloud google com kubernetes engine docs how to workload identity and Authenticating to the Kubernetes API server 
https cloud google com kubernetes engine docs how to api server authentication AKS Azure cluster secret example using argocd k8s auth and kubelogin https github com Azure kubelogin The option azure to the argocd k8s auth execProviderConfig encapsulates the get token command for kubelogin Depending upon which authentication flow is desired devicecode spn ropc msi azurecli workloadidentity set the environment variable AAD LOGIN METHOD with this value Set other appropriate environment variables depending upon which authentication flow is desired Variable Name Description AAD LOGIN METHOD One of devicecode spn ropc msi azurecli or workloadidentity AAD SERVICE PRINCIPAL CLIENT CERTIFICATE AAD client cert in pfx Used in spn login AAD SERVICE PRINCIPAL CLIENT ID AAD client application ID AAD SERVICE PRINCIPAL CLIENT SECRET AAD client application secret AAD USER PRINCIPAL NAME Used in the ropc flow AAD USER PRINCIPAL PASSWORD Used in the ropc flow AZURE TENANT ID The AAD tenant ID AZURE AUTHORITY HOST Used in the WorkloadIdentityLogin flow AZURE FEDERATED TOKEN FILE Used in the WorkloadIdentityLogin flow AZURE CLIENT ID Used in the WorkloadIdentityLogin flow In addition to the environment variables above argocd k8s auth accepts two extra environment variables to set the AAD environment and to set the AAD server application ID The AAD server application ID will default to 6dae42f8 4368 4678 94ff 3960e28e3630 if not specified See here https github com azure kubelogin exec plugin format for details Variable Name Description AAD ENVIRONMENT NAME The azure environment to use default of AzurePublicCloud AAD SERVER APPLICATION ID The optional AAD server application ID defaults to 6dae42f8 4368 4678 94ff 3960e28e3630 This is an example of using the federated workload login flow https github com Azure kubelogin azure workload federated identity non interactive The federated token file needs to be mounted as a secret into argoCD so it can be used in the flow The location of the token file needs to be set in the environment variable AZURE FEDERATED TOKEN FILE If your AKS cluster utilizes the Mutating Admission Webhook https azure github io azure workload identity docs installation mutating admission webhook html from the Azure Workload Identity project follow these steps to enable the argocd application controller and argocd server pods to use the federated identity 1 Label the Pods Add the azure workload identity use true label to the argocd application controller and argocd server pods 2 Create Federated Identity Credential Generate an Azure federated identity credential for the argocd application controller and argocd server service accounts Refer to the Federated Identity Credential https azure github io azure workload identity docs topics federated identity credential html documentation for detailed instructions 3 Add Annotations to Service Account Add azure workload identity client id CLIENT ID and azure workload identity tenant id TENANT ID annotations to the argocd application controller and argocd server service accounts using the details from the federated credential 4 Set the AZURE CLIENT ID Update the AZURE CLIENT ID in the cluster secret to match the client id of the newly created federated identity credential yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster example com server https mycluster example com config execProviderConfig command argocd k8s auth env AAD ENVIRONMENT NAME AzurePublicCloud AZURE 
CLIENT ID fill in client id AZURE TENANT ID fill in tenant id optional injected by workload identity mutating admission webhook if enabled AZURE FEDERATED TOKEN FILE opt path to federated file json optional injected by workload identity mutating admission webhook if enabled AZURE AUTHORITY HOST https login microsoftonline com optional injected by workload identity mutating admission webhook if enabled AAD LOGIN METHOD workloadidentity args azure apiVersion client authentication k8s io v1beta1 tlsClientConfig insecure false caData base64 encoded certificate This is an example of using the spn service principal name flow yaml apiVersion v1 kind Secret metadata name mycluster secret labels argocd argoproj io secret type cluster type Opaque stringData name mycluster example com server https mycluster example com config execProviderConfig command argocd k8s auth env AAD ENVIRONMENT NAME AzurePublicCloud AAD SERVICE PRINCIPAL CLIENT SECRET fill in your service principal client secret AZURE TENANT ID fill in tenant id AAD SERVICE PRINCIPAL CLIENT ID fill in your service principal client id AAD LOGIN METHOD spn args azure apiVersion client authentication k8s io v1beta1 tlsClientConfig insecure false caData base64 encoded certificate Helm Chart Repositories Non standard Helm Chart repositories have to be registered explicitly Each repository must have url type and name fields For private Helm repos you may need to configure access credentials and HTTPS settings using username password tlsClientCertData and tlsClientCertKey fields Example yaml apiVersion v1 kind Secret metadata name istio namespace argocd labels argocd argoproj io secret type repository stringData name istio io url https storage googleapis com istio prerelease daily build master latest daily charts type helm apiVersion v1 kind Secret metadata name argo helm namespace argocd labels argocd argoproj io secret type repository stringData name argo url https argoproj github io argo helm type helm username my username password my password tlsClientCertData tlsClientCertKey Resource Exclusion Inclusion Resources can be excluded from discovery and sync so that Argo CD is unaware of them For example the apiGroup kind events k8s io metrics k8s io and coordination k8s io Lease are always excluded Use cases You have temporal issues and you want to exclude problematic resources There are many of a kind of resources that impacts Argo CD s performance Restrict Argo CD s access to certain kinds of resources e g secrets See security md cluster rbac security md cluster rbac To configure this edit the argocd cm config map shell kubectl edit configmap argocd cm n argocd Add resource exclusions e g yaml apiVersion v1 data resource exclusions apiGroups kinds clusters https 192 168 0 20 kind ConfigMap The resource exclusions node is a list of objects Each object can have apiGroups A list of globs to match the API group kinds A list of kinds to match Can be to match all clusters A list of globs to match the cluster If all three match then the resource is ignored In addition to exclusions you might configure the list of included resources using the resource inclusions setting By default all resource group kinds are included The resource inclusions setting allows customizing the list of included group kinds yaml apiVersion v1 data resource inclusions apiGroups kinds Deployment clusters https 192 168 0 20 kind ConfigMap The resource inclusions and resource exclusions might be used together The final list of resources includes group kinds specified in resource 
inclusions minus the group/kinds specified in the resource exclusions setting.

Notes:

* Quote globs in your YAML to avoid parsing errors.
* Invalid globs result in the whole rule being ignored.
* If you add a rule that matches existing resources, these will appear in the interface as `OutOfSync`.

## Mask sensitive Annotations on Secrets

An optional comma-separated list of `metadata.annotations` keys can be configured with `resource.sensitive.mask.annotations` to mask their values in UI/CLI on Secrets.

```yaml
  resource.sensitive.mask.annotations: openshift.io/token-secret.value, api_key
```

## Auto respect RBAC for controller

The Argo CD controller can be restricted from discovering/syncing specific resources using just controller RBAC, without having to manually configure resource exclusions. This feature can be enabled by setting the `resource.respectRBAC` key in `argocd-cm`; once it is set, the controller will automatically stop watching resources that it does not have the permission to list/access. Possible values for `resource.respectRBAC` are:

- `strict` : This setting checks whether the list call made by the controller is forbidden/unauthorized, and if it is, it will cross-check the permission by making a `SelfSubjectAccessReview` call for the resource.
- `normal` : This will only check whether the list call response is forbidden/unauthorized and skip the `SelfSubjectAccessReview` call, to minimize any extra api-server calls.
- unset/empty (default) : This will disable the feature and the controller will continue to monitor all resources.

Users who are comfortable with an increase in kube api-server calls can opt for the `strict` option, while users who are concerned with higher api calls and are willing to compromise on the accuracy can opt for the `normal` option.

Notes:

* When set to use `strict` mode, the controller must have RBAC permission to `create` a `SelfSubjectAccessReview` resource.
* The `SelfSubjectAccessReview` request will only be made for the `list` verb; it is assumed that if `list` is allowed for a resource then all other permissions are also available to the controller.

Example `argocd-cm` with `resource.respectRBAC` set to `strict`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.respectRBAC: "strict"
```

## Resource Custom Labels

Custom Labels configured with `resource.customLabels` (comma separated string) will be displayed in the UI (for any resource that defines them).

## Labels on Application Events

An optional comma-separated list of `metadata.labels` keys can be configured with `resource.includeEventLabelKeys` to add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event and the application, allowing for filtering using labels. In case of conflict between labels on the Application and AppProject, the Application label values are prioritized and added to the event.

```yaml
  resource.includeEventLabelKeys: team,env
```

To exclude certain labels from events, use the `resource.excludeEventLabelKeys` key, which takes a comma-separated list of `metadata.labels` keys.

```yaml
  resource.excludeEventLabelKeys: environment,bu
```

Both `resource.includeEventLabelKeys` and `resource.excludeEventLabelKeys` support wildcards.

## SSO & RBAC

* SSO configuration details: [SSO](./user-management/index.md)
* RBAC configuration details: [RBAC](./rbac.md)

## Manage Argo CD Using Argo CD

Argo CD is able to manage itself since all settings are represented by Kubernetes manifests. The suggested way is to create a [Kustomize](https://github.com/kubernetes-sigs/kustomize)-based application which uses base Argo CD manifests from [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd/tree/stable/manifests) and apply required changes on top.

Example of `kustomization.yaml`:

```yaml
# additional resources like ingress rules, cluster and repository secrets
resources:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
- clusters-secrets.yaml
- repos-secrets.yaml

# changes to config maps
patches:
- path: overlays/argo-cd-cm.yaml
```

The live example of self-managed Argo CD config is available at [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io) and with configuration stored at [argoproj/argoproj-deployments](https://github.com/argoproj/argoproj-deployments/tree/master/argocd).

!!! note
    You will need to sign in using your GitHub account to get access to [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io)
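A kustomization like the one above is typically deployed through an Application that points Argo CD back at its own configuration repository. A minimal sketch of such an Application, where the repository URL, path, and application name are placeholders for your own config repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd                     # hypothetical name for the self-managing app
  namespace: argocd
spec:
  project: default
  source:
    # Placeholder repository containing the kustomization.yaml shown above.
    repoURL: https://github.com/example/argocd-config.git
    targetRevision: HEAD
    path: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```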
# Config Management Plugins

Argo CD's "native" config management tools are Helm, Jsonnet, and Kustomize. If you want to use a different config management tool, or if Argo CD's native tool support does not include a feature you need, you might need to turn to a Config Management Plugin (CMP).

The Argo CD "repo server" component is in charge of building Kubernetes manifests based on some source files from a Helm, OCI, or git repository. When a config management plugin is correctly configured, the repo server may delegate the task of building manifests to the plugin.

The following sections will describe how to create, install, and use plugins. Check out the [example plugins](https://github.com/argoproj/argo-cd/tree/master/examples/plugins) for additional guidance.

!!! warning
    Plugins are granted a level of trust in the Argo CD system, so it is important to implement plugins securely. Argo CD administrators should only install plugins from trusted sources, and they should audit plugins to weigh their particular risks and benefits.

## Installing a config management plugin

### Sidecar plugin

An operator can configure a plugin tool via a sidecar to repo-server. The following changes are required to configure a new plugin:

#### Write the plugin configuration file

Plugins will be configured via a ConfigManagementPlugin manifest located inside the plugin container.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  # The name of the plugin must be unique within a given Argo CD instance.
  name: my-plugin
spec:
  # The version of your plugin. Optional. If specified, the Application's spec.source.plugin.name field
  # must be <plugin name>-<plugin version>.
  version: v1.0
  # The init command runs in the Application source directory at the beginning of each manifest generation. The init
  # command can output anything. A non-zero status code will fail manifest generation.
  init:
    # Init always happens immediately before generate, but its output is not treated as manifests.
    # This is a good place to, for example, download chart dependencies.
    command: [sh]
    args: [-c, 'echo "Initializing..."']
  # The generate command runs in the Application source directory each time manifests are generated. Standard output
  # must be ONLY valid Kubernetes Objects in either YAML or JSON. A non-zero exit code will fail manifest generation.
  # To write log messages from the command, write them to stderr; they will always be displayed.
  # Error output will be sent to the UI, so avoid printing sensitive information (such as secrets).
  generate:
    command: [sh, -c]
    args:
      - |
        echo "{\"kind\": \"ConfigMap\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"$ARGOCD_APP_NAME\", \"namespace\": \"$ARGOCD_APP_NAMESPACE\", \"annotations\": {\"Foo\": \"$ARGOCD_ENV_FOO\", \"KubeVersion\": \"$KUBE_VERSION\", \"KubeApiVersion\": \"$KUBE_API_VERSIONS\",\"Bar\": \"baz\"}}}"
  # The discovery config is applied to a repository. If every configured discovery tool matches, then the plugin may be
  # used to generate manifests for Applications using the repository. If the discovery config is omitted then the plugin
  # will not match any application but can still be invoked explicitly by specifying the plugin name in the app spec.
  # Only one of fileName, find.glob, or find.command should be specified. If multiple are specified then only the
  # first (in that order) is evaluated.
  discover:
    # fileName is a glob pattern (https://pkg.go.dev/path/filepath#Glob) that is applied to the Application's source
    # directory. If there is a match, this plugin may be used for the Application.
    fileName: "./subdir/s*.yaml"
    find:
      # This does the same thing as fileName, but it supports double-star (nested directory) glob patterns.
      glob: "**/Chart.yaml"
      # The find command runs in the repository's root directory. To match, it must exit with status code 0 _and_
      # produce non-empty output to standard out.
      command: [sh, -c, find . -name env.yaml]
  # The parameters config describes what parameters the UI should display for an Application. It is up to the user to
  # actually set parameters in the Application manifest (in spec.source.plugin.parameters). The announcements _only_
  # inform the "Parameters" tab in the App Details page of the UI.
  parameters:
    # Static parameter announcements are sent to the UI for _all_ Applications handled by this plugin.
    # Think of the `string`, `array`, and `map` values set here as "defaults". It is up to the plugin author to make
    # sure that these default values actually reflect the plugin's behavior if the user doesn't explicitly set different
    # values for those parameters.
    static:
      - name: string-param
        title: Description of the string param
        tooltip: Tooltip shown when the user hovers the parameter's title
        # If this field is set, the UI will indicate to the user that they must set the value.
        required: false
        # itemType tells the UI how to present the parameter's value (or, for arrays and maps, values). Default is
        # "string". Examples of other types which may be supported in the future are "boolean" or "number".
        # Even if the itemType is not "string", the parameter value from the Application spec will be sent to the plugin
        # as a string. It's up to the plugin to do the appropriate conversion.
        itemType: ""
        # collectionType describes what type of value this parameter accepts (string, array, or map) and allows the UI
        # to present a form to match that type. Default is "string". This field must be present for non-string types.
        # It will not be inferred from the presence of an `array` or `map` field.
        collectionType: ""
        # This field communicates the parameter's default value to the UI. Setting this field is optional.
        string: default-string-value
      # All the fields above besides "string" apply to both the array and map type parameter announcements.
      - name: array-param
        # This field communicates the parameter's default value to the UI. Setting this field is optional.
        array: [default, items]
        collectionType: array
      - name: map-param
        # This field communicates the parameter's default value to the UI. Setting this field is optional.
        map:
          some: value
        collectionType: map
    # Dynamic parameter announcements are announcements specific to an Application handled by this plugin. For example,
    # the values for a Helm chart's values.yaml file could be sent as parameter announcements.
    dynamic:
      # The command is run in an Application's source directory. Standard output must be JSON matching the schema of the
      # static parameter announcements list.
      command: [echo, '[{"name": "example-param", "string": "default-string-value"}]']
  # If set to `true` then the plugin receives repository files with original file mode. Dangerous since the repository
  # might have executable files. Set to true only if you trust the CMP plugin authors.
  preserveFileMode: false
  # If set to `true` then the plugin can retrieve git credentials from the reposerver during generate. Plugin authors
  # should ensure these credentials are appropriately protected during execution.
  provideGitCreds: false
```

!!! note
    While the ConfigManagementPlugin _looks like_ a Kubernetes object, it is not actually a custom resource. It only follows kubernetes-style spec conventions.

The `generate` command must print a valid Kubernetes YAML or JSON object stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.

The `discover.fileName` is used as a [glob](https://pkg.go.dev/path/filepath#Glob) pattern to determine whether an application repository is supported by the plugin or not.

```yaml
  discover:
    find:
      command: [sh, -c, find . -name env.yaml]
```

If `discover.fileName` is not provided, the `discover.find.command` is executed in order to determine whether an application repository is supported by the plugin or not. The `find` command should return a non-error exit code and produce output to stdout when the application source type is supported.

#### Place the plugin configuration file in the sidecar

Argo CD expects the plugin configuration file to be located at `/home/argocd/cmp-server/config/plugin.yaml` in the sidecar. If you use a custom image for the sidecar, you can add the file directly to that image.

```dockerfile
WORKDIR /home/argocd/cmp-server/config/
COPY plugin.yaml ./
```

If you use a stock image for the sidecar or would rather maintain the plugin configuration in a ConfigMap, just nest the plugin config file in a ConfigMap under the `plugin.yaml` key and mount the ConfigMap in the sidecar (see next section).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-plugin-config
data:
  plugin.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: my-plugin
    spec:
      version: v1.0
      init:
        command: [sh, -c, 'echo "Initializing..."']
      generate:
        command: [sh, -c, 'echo "{\"kind\": \"ConfigMap\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"$ARGOCD_APP_NAME\", \"namespace\": \"$ARGOCD_APP_NAMESPACE\", \"annotations\": {\"Foo\": \"$ARGOCD_ENV_FOO\", \"KubeVersion\": \"$KUBE_VERSION\", \"KubeApiVersion\": \"$KUBE_API_VERSIONS\",\"Bar\": \"baz\"}}}"']
      discover:
        fileName: "./subdir/s*.yaml"
```

#### Register the plugin sidecar

To install a plugin, patch argocd-repo-server to run the plugin container as a sidecar, with argocd-cmp-server as its entrypoint. You can use either an off-the-shelf or a custom-built plugin image as the sidecar image. For example:

```yaml
containers:
- name: my-plugin
  command: [/var/run/argocd/argocd-cmp-server] # Entrypoint should be Argo CD lightweight CMP server i.e. argocd-cmp-server
  image: ubuntu # This can be off-the-shelf or custom-built image
  securityContext:
    runAsNonRoot: true
    runAsUser: 999
  volumeMounts:
    - mountPath: /var/run/argocd
      name: var-files
    - mountPath: /home/argocd/cmp-server/plugins
      name: plugins
    # Remove this volumeMount if you've chosen to bake the config file into the sidecar image.
    - mountPath: /home/argocd/cmp-server/config/plugin.yaml
      subPath: plugin.yaml
      name: my-plugin-config
    # Starting with v2.4, do NOT mount the same tmp volume as the repo-server container. The filesystem separation helps
    # mitigate path traversal attacks.
    - mountPath: /tmp
      name: cmp-tmp
volumes:
- configMap:
    name: my-plugin-config
  name: my-plugin-config
- emptyDir: {}
  name: cmp-tmp
```

!!! important "Double-check these items"
    1. Make sure to use `/var/run/argocd/argocd-cmp-server` as an entrypoint. The `argocd-cmp-server` is a lightweight GRPC service that allows Argo CD to interact with the plugin.
    2. Make sure that the sidecar container is running as user 999.
    3. Make sure that the plugin configuration file is present at `/home/argocd/cmp-server/config/plugin.yaml`. It can either be volume mapped via a ConfigMap or baked into the image.

### Using environment variables in your plugin

Plugin commands have access to

1. The system environment variables of the sidecar
2. [Standard build environment variables](../user-guide/build-environment.md)
3. Variables in the Application spec (References to system and build variables will get interpolated in the variables' values):

        apiVersion: argoproj.io/v1alpha1
        kind: Application
        spec:
          source:
            plugin:
              env:
                - name: FOO
                  value: bar
                - name: REV
                  value: test-$ARGOCD_APP_REVISION

    Before reaching the `init.command`, `generate.command`, and `discover.find.command` commands, Argo CD prefixes all user-supplied environment variables (#3 above) with `ARGOCD_ENV_`. This prevents users from directly setting potentially-sensitive environment variables.

4. Parameters in the Application spec:

        apiVersion: argoproj.io/v1alpha1
        kind: Application
        spec:
          source:
            plugin:
              parameters:
                - name: values-files
                  array: [values-dev.yaml]
                - name: helm-parameters
                  map:
                    image.tag: v1.2.3

    The parameters are available as JSON in the `ARGOCD_APP_PARAMETERS` environment variable. The example above would produce this JSON:

        [{"name": "values-files", "array": ["values-dev.yaml"]}, {"name": "helm-parameters", "map": {"image.tag": "v1.2.3"}}]

    !!! note
        Parameter announcements, even if they specify defaults, are _not_ sent to the plugin in `ARGOCD_APP_PARAMETERS`. Only parameters explicitly set in the Application spec are sent to the plugin. It is up to the plugin to apply the same defaults as the ones announced to the UI.

    The same parameters are also available as individual environment variables. The names of the environment variables follow this convention:

        - name: some-string-param
          string: some-string-value
        # PARAM_SOME_STRING_PARAM=some-string-value

        - name: some-array-param
          value: [item1, item2]
        # PARAM_SOME_ARRAY_PARAM_0=item1
        # PARAM_SOME_ARRAY_PARAM_1=item2

        - name: some-map-param
          map:
            image.tag: v1.2.3
        # PARAM_SOME_MAP_PARAM_IMAGE_TAG=v1.2.3

!!! warning "Sanitize/escape user input"
    As part of Argo CD's manifest generation system, config management plugins are treated with a level of trust. Be sure to escape user input in your plugin to prevent malicious input from causing unwanted behavior.

## Using a config management plugin with an Application

You may leave the `name` field empty in the `plugin` section for the plugin to be automatically matched with the Application based on its discovery rules. If you do mention the name, make sure it is either `<metadata.name>-<spec.version>` if version is mentioned in the `ConfigManagementPlugin` spec, or else just `<metadata.name>`. When the name is explicitly specified, only that particular plugin will be used, and only if its discovery pattern/command matches the provided application repo.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
    plugin:
      env:
        - name: FOO
          value: bar
```

If you don't need to set any environment variables, you can set an empty plugin section.

```yaml
    plugin: {}
```

!!! important
    If your CMP command runs too long, the command will be killed, and the UI will show an error. The CMP server respects the timeouts set by the `server.repo.server.timeout.seconds` and `controller.repo.server.timeout.seconds` items in `argocd-cm`.
    Increase their values from the default of 60s. Each CMP command will also independently timeout on the `ARGOCD_EXEC_TIMEOUT` set for the CMP sidecar. The default is 90s. So if you increase the repo server timeout beyond 90s, be sure to set `ARGOCD_EXEC_TIMEOUT` on the sidecar.

!!! note
    Each Application can only have one config management plugin configured at a time. If you're converting an existing plugin configured through the `argocd-cm` ConfigMap to a sidecar, make sure to update the plugin name to either `<metadata.name>-<spec.version>` if version was mentioned in the `ConfigManagementPlugin` spec or else just use `<metadata.name>`. You can also remove the name altogether and let automatic discovery identify the plugin.

!!! note
    If a CMP renders blank manifests, and `prune` is set to `true`, Argo CD will automatically remove resources. CMP plugin authors should ensure errors are part of the exit code. Commonly something like `kustomize build . | cat` won't pass errors because of the pipe. Consider setting `set -o pipefail` so anything piped will pass errors on failure.

## Debugging a CMP

If you are actively developing a sidecar-installed CMP, keep a few things in mind:

1. If you are mounting plugin.yaml from a ConfigMap, you will have to restart the repo-server Pod so the plugin will pick up the changes.
2. If you have baked plugin.yaml into your image, you will have to build, push, and force a re-pull of that image on the repo-server Pod so the plugin will pick up the changes. If you are using `:latest`, the Pod will always pull the new image. If you're using a different, static tag, set `imagePullPolicy: Always` on the CMP's sidecar container.
3. CMP errors are cached by the repo-server in Redis. Restarting the repo-server Pod will not clear the cache. Always do a "Hard Refresh" when actively developing a CMP so you have the latest output.
4. Verify your sidecar has started properly by viewing the Pod and seeing that two containers are running: `kubectl get pod -l app.kubernetes.io/component=repo-server -n argocd`
5. Write log messages to stderr and set the `--loglevel=info` flag in the sidecar. This will print everything written to stderr, even on successful command execution.

### Other Common Errors

| Error Message | Cause |
| -- | -- |
| `no matches for kind "ConfigManagementPlugin" in version "argoproj.io/v1alpha1"` | The `ConfigManagementPlugin` CRD was deprecated in Argo CD 2.4 and removed in 2.8. This error means you've tried to put the configuration for your plugin directly into Kubernetes as a CRD. Refer to this [section of documentation](#write-the-plugin-configuration-file) for how to write the plugin configuration file and place it properly in the sidecar. |

## Plugin tar stream exclusions

In order to increase the speed of manifest generation, certain files and folders can be excluded from being sent to your plugin. We recommend excluding your `.git` folder if it isn't necessary. Use Go's [filepath.Match](https://pkg.go.dev/path/filepath#Match) syntax. For example, use `.git/*` to exclude the `.git` folder.

You can set it in one of three ways:

1. The `--plugin-tar-exclude` argument on the repo server.
2. The `reposerver.plugin.tar.exclusions` key if you are using `argocd-cmd-params-cm`
3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_TAR_EXCLUSIONS` environment variable on the repo server.

For option 1, the flag can be repeated multiple times. For options 2 and 3, you can specify multiple globs by separating them with semicolons (see the sketch below).
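For example, option 2 above could be configured through the `argocd-cmd-params-cm` ConfigMap. A minimal sketch, with the glob values chosen purely for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Exclude the .git folder and packaged charts from the tar stream sent to CMP sidecars.
  reposerver.plugin.tar.exclusions: ".git/*;charts/*.tgz"
```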
## Application manifests generation using argocd.argoproj.io/manifest-generate-paths

To enhance the application manifests generation process, you can enable the use of the `argocd.argoproj.io/manifest-generate-paths` annotation. When this flag is enabled, the resources specified by this annotation will be passed to the CMP server for generating application manifests, rather than sending the entire repository. This can be particularly useful for monorepos.

You can set it in one of three ways:

1. The `--plugin-use-manifest-generate-paths` argument on the repo server.
2. The `reposerver.plugin.use.manifest.generate.paths` key if you are using `argocd-cmd-params-cm`
3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS` environment variable on the repo server to `true`.

## Migrating from argocd-cm plugins

Installing plugins by modifying the argocd-cm ConfigMap is deprecated as of v2.4 and has been completely removed starting in v2.8.

CMP plugins work by adding a sidecar to `argocd-repo-server` along with a configuration in that sidecar located at `/home/argocd/cmp-server/config/plugin.yaml`. An argocd-cm plugin can be easily converted with the following steps.

### Convert the ConfigMap entry into a config file

First, copy the plugin's configuration into its own YAML file. Take for example the following ConfigMap entry:

```yaml
data:
  configManagementPlugins: |
    - name: pluginName
      init:                          # Optional command to initialize application source directory
        command: ["sample command"]
        args: ["sample args"]
      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
        command: ["sample command"]
        args: ["sample args"]
      lockRepo: true                 # Defaults to false. See below.
```

The `pluginName` item would be converted to a config file like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:                          # Optional command to initialize application source directory
    command: ["sample command"]
    args: ["sample args"]
  generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
    command: ["sample command"]
    args: ["sample args"]
```

!!! note
    The `lockRepo` key is not relevant for sidecar plugins, because sidecar plugins do not share a single source repo directory when generating manifests.

Next, we need to decide how this yaml is going to be added to the sidecar. We can either bake the yaml directly into the image, or we can mount it from a ConfigMap.

If using a ConfigMap, our example would look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pluginName
  namespace: argocd
data:
  pluginName.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: pluginName
    spec:
      init:                          # Optional command to initialize application source directory
        command: ["sample command"]
        args: ["sample args"]
      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
        command: ["sample command"]
        args: ["sample args"]
```

Then this would be mounted in our plugin sidecar.

### Write discovery rules for your plugin

Sidecar plugins can use either discovery rules or a plugin name to match Applications to plugins. If the discovery rule is omitted, then you have to explicitly specify the plugin by name in the app spec, or else that particular plugin will not match any app.

If you want to use discovery instead of the plugin name to match applications to your plugin, write rules applicable to your plugin [using the instructions above](#write-the-plugin-configuration-file) and add them to your configuration file, as in the sketch below.
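For instance, a discovery section added to the converted plugin configuration might look like the following sketch, where the glob is purely illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  # init/generate as shown above...
  discover:
    find:
      # Hypothetical rule: match any repository that contains an env.yaml file somewhere in its tree.
      glob: "**/env.yaml"
```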
To use the name instead of discovery, update the name in your application manifest to `<metadata.name>-<spec.version>` if version was mentioned in the `ConfigManagementPlugin` spec, or else just use `<metadata.name>`. For example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
spec:
  source:
    plugin:
      name: pluginName # Delete this for auto-discovery (and set `plugin: {}` if `name` was the only value) or use proper sidecar plugin name
```

### Make sure the plugin has access to the tools it needs

Plugins configured with argocd-cm ran on the Argo CD image. This gave them access to all the tools installed on that image by default (see the [Dockerfile](https://github.com/argoproj/argo-cd/blob/master/Dockerfile) for the base image and installed tools).

You can either use a stock image (like ubuntu, busybox, or alpine/k8s) or design your own base image with the tools your plugin needs. For security, avoid using images with more binaries installed than what your plugin actually needs.

### Test the plugin

After installing the plugin as a sidecar [according to the directions above](#installing-a-config-management-plugin), test it out on a few Applications before migrating all of them to the sidecar plugin.

Once tests have checked out, remove the plugin entry from your argocd-cm ConfigMap.

### Additional Settings

#### Preserve repository files mode

By default, the config management plugin receives source repository files with reset file mode. This is done for security reasons. If you want to preserve the original file mode, you can set `preserveFileMode` to `true` in the plugin spec:

!!! warning
    Make sure you trust the plugin you are using. If you set `preserveFileMode` to `true` then the plugin might receive files with executable permissions, which can be a security risk.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:
    command: ["sample command"]
    args: ["sample args"]
  generate:
    command: ["sample command"]
    args: ["sample args"]
  preserveFileMode: true
```

#### Provide Git Credentials

By default, the config management plugin is responsible for providing its own credentials to additional Git repositories that may need to be accessed during manifest generation. The reposerver has these credentials available in its git creds store.

When credential sharing is allowed, the git credentials used by the reposerver to clone the repository contents are shared for the lifetime of the execution of the config management plugin, utilizing git's `ASKPASS` method to make a call from the config management sidecar container to the reposerver to retrieve the initialized git credentials. Utilizing `ASKPASS` means that credentials are not proactively shared, but rather only provided when an operation requires them.

To allow the plugin to access the reposerver git credentials, you can set `provideGitCreds` to `true` in the plugin spec:

!!! warning
    Make sure you trust the plugin you are using. If you set `provideGitCreds` to `true` then the plugin will receive credentials used to clone the source Git repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:
    command: ["sample command"]
    args: ["sample args"]
  generate:
    command: ["sample command"]
    args: ["sample args"]
  provideGitCreds: true
```
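If you manage Argo CD itself with Kustomize, the sidecar registration described above can be carried as a patch in the same kustomization. A minimal sketch using a strategic-merge patch, where the sidecar name and image are placeholders for your own plugin and the plugin.yaml is assumed to be baked into that image:

```yaml
# kustomization.yaml (sketch)
resources:
  - github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: argocd-repo-server
      spec:
        template:
          spec:
            containers:
              - name: my-plugin                              # hypothetical sidecar name
                command: [/var/run/argocd/argocd-cmp-server]
                image: ghcr.io/example/my-plugin:v1.0        # placeholder plugin image with plugin.yaml baked in
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 999
                volumeMounts:
                  - mountPath: /var/run/argocd
                    name: var-files
                  - mountPath: /home/argocd/cmp-server/plugins
                    name: plugins
                  - mountPath: /tmp
                    name: cmp-tmp
            volumes:
              - emptyDir: {}
                name: cmp-tmp
```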
# Metrics

Argo CD exposes different sets of Prometheus metrics per server.

## Application Controller Metrics

Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_app_info` | gauge | Information about Applications. It contains labels such as `sync_status` and `health_status` that reflect the application state in Argo CD. |
| `argocd_app_condition` | gauge | Reports Application conditions. It contains the conditions currently present in the application status. |
| `argocd_app_k8s_request_total` | counter | Number of Kubernetes requests executed during application reconciliation |
| `argocd_app_labels` | gauge | Argo Application labels converted to Prometheus labels. Disabled by default. See the section below about how to enable it. |
| `argocd_app_orphaned_resources_count` | gauge | Number of orphaned resources per application. |
| `argocd_app_reconcile` | histogram | Application reconciliation performance in seconds. |
| `argocd_app_sync_total` | counter | Counter for application sync history |
| `argocd_cluster_api_resource_objects` | gauge | Number of k8s resource objects in the cache. |
| `argocd_cluster_api_resources` | gauge | Number of monitored Kubernetes API resources. |
| `argocd_cluster_cache_age_seconds` | gauge | Cluster cache age in seconds. |
| `argocd_cluster_connection_status` | gauge | The k8s cluster current connection status. |
| `argocd_cluster_events_total` | counter | Number of processed k8s resource events. |
| `argocd_cluster_info` | gauge | Information about cluster. |
| `argocd_kubectl_exec_pending` | gauge | Number of pending kubectl executions |
| `argocd_kubectl_exec_total` | counter | Number of kubectl executions |
| `argocd_redis_request_duration` | histogram | Redis requests duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation |

If your Argo CD instance creates and deletes many applications and projects, the metrics endpoint will keep their history in its cache. If the resulting metrics cardinality due to deleted resources causes issues, you can schedule a metrics reset to clean the history with an application controller flag. Example: `--metrics-cache-expiration="24h0m0s"`.

### Exposing Application labels as Prometheus metrics

There are use-cases where Argo CD Applications contain labels that are desired to be exposed as Prometheus metrics. Some examples are:

* Having the team name as a label to allow routing alerts to specific receivers
* Creating dashboards broken down by business units

As the Application labels are specific to each company, this feature is disabled by default. To enable it, add the `--metrics-application-labels` flag to the Argo CD application controller.
The example below will expose the Argo CD Application labels `team-name` and `business-unit` to Prometheus:

```yaml
containers:
- command:
  - argocd-application-controller
  - --metrics-application-labels
  - team-name
  - --metrics-application-labels
  - business-unit
```

In this case, the metric would look like:

```
# TYPE argocd_app_labels gauge
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-1",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-2",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-2",label_team_name="another-team",name="my-app-3",namespace="argocd",project="important-project"} 1
```

### Exposing Application conditions as Prometheus metrics

There are use-cases where Argo CD Applications contain conditions that are desired to be exposed as Prometheus metrics. Some examples are:

* Hunting orphaned resources across all deployed applications
* Knowing which resources are excluded from ArgoCD

As the Application conditions are specific to each company, this feature is disabled by default. To enable it, add the `--metrics-application-conditions` flag to the Argo CD application controller.

The example below will expose the Argo CD Application conditions `OrphanedResourceWarning` and `ExcludedResourceWarning` to Prometheus:

```yaml
containers:
- command:
  - argocd-application-controller
  - --metrics-application-conditions
  - OrphanedResourceWarning
  - --metrics-application-conditions
  - ExcludedResourceWarning
```

## Application Set Controller metrics

The Application Set controller exposes the following metrics for application sets.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_appset_info` | gauge | Information about Application Sets. It contains labels for the name and namespace of an application set as well as `Resource_update_status` that reflects the `ResourcesUpToDate` property |
| `argocd_appset_reconcile` | histogram | Application reconciliation performance in seconds. It contains labels for the name and namespace of an applicationset |
| `argocd_appset_labels` | gauge | Applicationset labels translated to Prometheus labels. Disabled by default |
| `argocd_appset_owned_applications` | gauge | Number of applications owned by the applicationset. It contains labels for the name and namespace of an applicationset. |

Similar to the corresponding application controller metric (`argocd_app_labels`), the metric `argocd_appset_labels` is disabled by default. You can enable it by providing the `--metrics-applicationset-labels` argument to the applicationset controller. Once enabled, it works exactly the same as the application controller metrics (a `label_` prefix is added to the normalized label name). Available labels include the name and namespace of the applicationset, plus all labels enabled by the command-line options and their values (exactly like the application controller metrics described in the previous section).

## API Server Metrics

Metrics about API Server API request and response activity (request totals, response codes, etc...). Scraped at the `argocd-server-metrics:8083/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_redis_request_duration` | histogram | Redis requests duration. |
| `argocd_redis_request_total` | counter | Number of Kubernetes requests executed during application reconciliation. |
| `grpc_server_handled_total` | counter | Total number of RPCs completed on the server, regardless of success or failure. |
| `grpc_server_msg_sent_total` | counter | Total number of gRPC stream messages sent by the server. |
| `argocd_proxy_extension_request_total` | counter | Number of requests sent to the configured proxy extensions. |
| `argocd_proxy_extension_request_duration_seconds` | histogram | Request duration in seconds between the Argo CD API server and the proxy extension backend. |

## Repo Server Metrics

Metrics about the Repo Server. Scraped at the `argocd-repo-server:8084/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_git_request_duration_seconds` | histogram | Git requests duration seconds. |
| `argocd_git_request_total` | counter | Number of git requests performed by repo server |
| `argocd_git_fetch_fail_total` | counter | Number of git fetch requests failures by repo server |
| `argocd_redis_request_duration_seconds` | histogram | Redis requests duration seconds. |
| `argocd_redis_request_total` | counter | Number of Kubernetes requests executed during application reconciliation. |
| `argocd_repo_pending_request_total` | gauge | Number of pending requests requiring repository lock |

## Prometheus Operator

If using Prometheus Operator, the following ServiceMonitor example manifests can be used. Add the namespace where Argo CD is installed, and change `metadata.labels.release` to the label selected by your Prometheus instance.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-server-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-repo-server-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-applicationset-controller-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-applicationset-controller
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-dex-server
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-dex-server
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-redis-haproxy-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-redis-ha-haproxy
  endpoints:
  - port: http-exporter-port
```

For the notifications controller, you additionally need to add the following:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-notifications-controller
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-notifications-controller-metrics
  endpoints:
  - port: metrics
```

## Dashboards

You can find an example Grafana dashboard [here](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard.json) or check the demo instance [dashboard](https://grafana.apps.argoproj.io).

![dashboard](../assets/dashboard.jpg)
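The same metrics can also drive alerting. A minimal sketch of a PrometheusRule that fires when an application stays `OutOfSync`, assuming the Prometheus Operator setup above; the alert name, threshold, and labels are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  labels:
    release: prometheus-operator
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoAppOutOfSync
          # argocd_app_info carries a sync_status label; see the Application Controller Metrics table above.
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "Application {{ $labels.name }} has been OutOfSync for more than 15 minutes"
```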
# Argo CD Core

## Introduction

Argo CD Core is a different installation that runs Argo CD in headless mode. With this installation, you will have a fully functional GitOps engine capable of getting the desired state from Git repositories and applying it in Kubernetes.

The following groups of features won't be available in this installation:

- Argo CD RBAC model
- Argo CD API
- Argo CD Notification Controller
- OIDC based authentication

The following features will be partially available (see the [usage](#using) section below for more details):

- Argo CD Web UI
- Argo CD CLI
- Multi-tenancy (strictly GitOps based on git push permissions)

A few use-cases that justify running Argo CD Core are:

- As a cluster admin, I want to rely on Kubernetes RBAC only.
- As a devops engineer, I don't want to learn a new API or depend on another CLI to automate my deployments. I want to rely on the Kubernetes API only.
- As a cluster admin, I don't want to provide Argo CD UI or Argo CD CLI to developers.

## Architecture

Because Argo CD is designed with a component based architecture in mind, it is possible to have a more minimalist installation. In this case, fewer components are installed, yet the main GitOps functionality remains operational.

In the diagram below, the Core box shows the components that will be installed while opting for Argo CD Core:

![Argo CD Core](../assets/argocd-core-components.png)

Note that even though the Argo CD controller can run without Redis, it isn't recommended. The Argo CD controller uses Redis as an important caching mechanism, reducing the load on the Kubernetes API and on Git. For this reason, Redis is also included in this installation method.

## Installing

Argo CD Core can be installed by applying a single manifest file that contains all the required resources.

Example:

```
export ARGOCD_VERSION=<desired argo cd release version (e.g. v2.7.0)>
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/$ARGOCD_VERSION/manifests/core-install.yaml
```

## Using

Once Argo CD Core is installed, users will be able to interact with it by relying on GitOps. The available Kubernetes resources will be the `Application` and the `ApplicationSet` CRDs. By using those resources, users will be able to deploy and manage applications in Kubernetes.

It is still possible to use the Argo CD CLI even when running Argo CD Core. In this case, the CLI will spawn a local API server process that will be used to handle the CLI command. Once the command concludes, the local API server process will also be terminated. This happens transparently for the user with no additional command required. Note that Argo CD Core will rely only on Kubernetes RBAC, and the user (or the process) invoking the CLI needs to have access to the Argo CD namespace with the proper permissions on the `Application` and `ApplicationSet` resources for executing a given command.

To use the Argo CD CLI in core mode, it is required to pass the `--core` flag with the `login` subcommand.

Example:

```bash
kubectl config set-context --current --namespace=argocd # change current kube context to argocd namespace
argocd login --core
```

Similarly, users can also run the Web UI locally if they prefer to interact with Argo CD using this method. The Web UI can be started locally by running the following command:

```
argocd admin dashboard -n argocd
```

The Argo CD Web UI will be available at `http://localhost:8080`.
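To make the GitOps-only workflow described in the Using section concrete, here is a minimal sketch of an `Application` manifest; the guestbook app from the public argocd-example-apps repository is used purely for illustration. Applying it with `kubectl apply -n argocd -f guestbook.yaml` is all that is needed, since no Argo CD API or CLI is involved:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    # Public example repository, used here only for illustration
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true
```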
# Applications in any namespace

!!! warning
    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.

## Introduction

As of version 2.5, Argo CD supports managing `Application` resources in namespaces other than the control plane's namespace (which is usually `argocd`), but this feature has to be explicitly enabled and configured appropriately.

Argo CD administrators can define a certain set of namespaces where `Application` resources may be created, updated and reconciled in. However, applications in these additional namespaces will only be allowed to use certain `AppProjects`, as configured by the Argo CD administrators. This allows ordinary Argo CD users (e.g. application teams) to use patterns like declarative management of `Application` resources, implementing app-of-apps and others without the risk of a privilege escalation through usage of other `AppProjects` that would exceed the permissions granted to the application teams.

Some manual steps will need to be performed by the Argo CD administrator in order to enable this feature.

One additional advantage of adopting applications in any namespace is to allow end-users to configure notifications for their Argo CD application in the namespace where the Argo CD application is running. See the notifications [namespace based configuration](notifications/index.md#namespace-based-configuration) page for more information.

## Prerequisites

### Cluster-scoped Argo CD installation

This feature can only be enabled and used when your Argo CD is installed as a cluster-wide instance, so it has permissions to list and manipulate resources on a cluster scope. It will not work with an Argo CD installed in namespace-scoped mode.

### Switch resource tracking method

Also, while technically not necessary, it is strongly suggested that you switch the application tracking method from the default `label` setting to either `annotation` or `annotation+label`. The reasoning for this is that application names will be a composite of the namespace's name and the name of the `Application`, and this can easily exceed the 63-character length limit imposed on label values. Annotations have a notably greater length limit.

To enable annotation based resource tracking, refer to the documentation about [resource tracking methods](../../user-guide/resource_tracking/).

## Implementation details

### Overview

In order for an application to be managed and reconciled outside Argo CD's control plane namespace, two prerequisites must be met:

1. The `Application`'s namespace must be explicitly enabled using the `--application-namespaces` parameter for the `argocd-application-controller` and `argocd-server` workloads. This parameter controls the list of namespaces that Argo CD will be allowed to source `Application` resources from globally. Any namespace not configured here cannot be used from any `AppProject`.
1. The `AppProject` referenced by the `.spec.project` field of the `Application` must have the namespace listed in its `.spec.sourceNamespaces` field. This setting will determine whether an `Application` may use a certain `AppProject`. If an `Application` specifies an `AppProject` that is not allowed, Argo CD refuses to process this `Application`. As stated above, any namespace configured in the `.spec.sourceNamespaces` field must also be enabled globally.
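The following sections describe both settings in detail. For orientation, a compact (incomplete) sketch that satisfies both prerequisites for a hypothetical `app-team-one` namespace could look like this:

```yaml
# Prerequisite 1: enable the namespace globally (picked up by argocd-server
# and argocd-application-controller via the --application-namespaces parameter)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  application.namespaces: app-team-one
---
# Prerequisite 2: allow Applications from that namespace to use this AppProject
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: app-team-one
  namespace: argocd
spec:
  sourceNamespaces:
  - app-team-one
```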
`Applications` in different namespaces can be created and managed just like any other `Application` in the `argocd` namespace, either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, or the REST API).

### Reconfigure Argo CD to allow certain namespaces

#### Change workload startup parameters

In order to enable this feature, the Argo CD administrator must reconfigure the `argocd-server` and `argocd-application-controller` workloads to add the `--application-namespaces` parameter to the container's startup command.

The `--application-namespaces` parameter takes a comma-separated list of namespaces where `Applications` are to be allowed in. Each entry of the list supports:

- shell-style wildcards such as `*`, so for example the entry `app-team-*` would match `app-team-one` and `app-team-two`. To enable all namespaces on the cluster where Argo CD is running, you can just specify `*`, i.e. `--application-namespaces=*`.
- regex, which requires wrapping the string in `/`, for example to allow all namespaces except a particular one: `/^((?!not-allowed).)*$/`.

The startup parameters for both the `argocd-server` and the `argocd-application-controller` can also be conveniently set up and kept in sync by specifying the `application.namespaces` settings in the `argocd-cmd-params-cm` ConfigMap _instead_ of changing the manifests for the respective workloads. For example:

```yaml
data:
  application.namespaces: app-team-one, app-team-two
```

would allow the `app-team-one` and `app-team-two` namespaces for managing `Application` resources. After a change to the `argocd-cmd-params-cm` ConfigMap, the appropriate workloads need to be restarted:

```bash
kubectl rollout restart -n argocd deployment argocd-server
kubectl rollout restart -n argocd statefulset argocd-application-controller
```

#### Adapt Kubernetes RBAC

We decided to not extend the Kubernetes RBAC for the `argocd-server` workload by default for the time being. If you want `Applications` in other namespaces to be managed by the Argo CD API (i.e. the CLI and UI), you need to extend the Kubernetes permissions for the `argocd-server` ServiceAccount.

We supply a `ClusterRole` and `ClusterRoleBinding` suitable for this purpose in the `examples/k8s-rbac/argocd-server-applications` directory. For a default Argo CD installation (i.e. installed to the `argocd` namespace), you can just apply them as-is:

```shell
kubectl apply -k examples/k8s-rbac/argocd-server-applications/
```

`argocd-notifications-controller-rbac-clusterrole.yaml` and `argocd-notifications-controller-rbac-clusterrolebinding.yaml` are used to allow the notifications controller to notify apps in all namespaces.

!!! note
    At some later point in time, we may make this cluster role part of the default installation manifests.

### Allowing additional namespaces in an AppProject

Any user with Kubernetes access to the Argo CD control plane's namespace (`argocd`), especially those with permissions to create or update `Applications` in a declarative way, is to be considered an Argo CD admin.

This prevented unprivileged Argo CD users from declaratively creating or managing `Applications` in the past. Those users were constrained to using the API instead, subject to Argo CD RBAC which ensured that only `Applications` in allowed `AppProjects` were created.
For an `Application` to be created outside the `argocd` namespace, the `AppProject` referred to in the `Application`'s `.spec.project` field must include the `Application`'s namespace in its `.spec.sourceNamespaces` field.

For example, consider the two following (incomplete) `AppProject` specs:

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: project-one
  namespace: argocd
spec:
  sourceNamespaces:
  - namespace-one
```

and

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: project-two
  namespace: argocd
spec:
  sourceNamespaces:
  - namespace-two
```

In order for an Application to set `.spec.project` to `project-one`, it would have to be created in either namespace `namespace-one` or `argocd`. Likewise, in order for an Application to set `.spec.project` to `project-two`, it would have to be created in either namespace `namespace-two` or `argocd`.

If an Application in `namespace-two` set its `.spec.project` to `project-one`, or an Application in `namespace-one` set its `.spec.project` to `project-two`, Argo CD would consider this a permission violation and refuse to reconcile the Application. Also, the Argo CD API will enforce these constraints, regardless of the Argo CD RBAC permissions.

The `.spec.sourceNamespaces` field of the `AppProject` is a list that can contain an arbitrary number of namespaces, and each entry supports shell-style wildcards, so that you can allow namespaces with patterns like `team-one-*`.

!!! warning
    Do not add user-controlled namespaces in the `.spec.sourceNamespaces` field of any privileged AppProject like the `default` project. Always make sure that the AppProject follows the principle of granting least required privileges. Never grant access to the `argocd` namespace within the AppProject.

!!! note
    For backwards compatibility, Applications in the Argo CD control plane's namespace (`argocd`) are allowed to set their `.spec.project` field to reference any AppProject, regardless of the restrictions placed by the AppProject's `.spec.sourceNamespaces` field.

### Application names

For the CLI and UI, applications are now referred to and displayed in the format `<namespace>/<name>`.

For backwards compatibility, if the namespace of the Application is the control plane's namespace (i.e. `argocd`), the `<namespace>` can be omitted from the application name when referring to it. For example, the application names `argocd/someapp` and `someapp` are semantically the same and refer to the same application in the CLI and the UI.

### Application RBAC

The RBAC syntax for Application objects has been changed from `<project>/<application>` to `<project>/<namespace>/<application>` to accommodate the need to restrict access based on the source namespace of the Application to be managed.

For backwards compatibility, Applications in the `argocd` namespace can still be referred to as `<project>/<application>` in the RBAC policy rules.

Wildcards do not make any distinction between project and application namespaces yet.
For example, the following RBAC rule would match any application belonging to project `foo`, regardless of the namespace it is created in:

```
p, somerole, applications, get, foo/*, allow
```

If you want to restrict access to be granted only to `Applications` in project `foo` within namespace `bar`, the rule would need to be adapted as follows:

```
p, somerole, applications, get, foo/bar/*, allow
```

## Managing applications in other namespaces

### Declaratively

For declarative management of Applications, just create the Application from a YAML or JSON manifest in the desired namespace. Make sure that the `.spec.project` field refers to an AppProject that allows this namespace. For example, the following (incomplete) Application manifest creates an Application in the namespace `some-namespace`:

```yaml
kind: Application
apiVersion: argoproj.io/v1alpha1
metadata:
  name: some-app
  namespace: some-namespace
spec:
  project: some-project
  # ...
```

The project `some-project` will then need to specify `some-namespace` in the list of allowed source namespaces, e.g.

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: some-project
  namespace: argocd
spec:
  sourceNamespaces:
  - some-namespace
```

### Using the CLI

You can use all existing Argo CD CLI commands for managing applications in other namespaces, exactly as you would use the CLI to manage applications in the control plane's namespace.

For example, to retrieve the `Application` named `bar` in the namespace `foo`, you can use the following CLI command:

```shell
argocd app get foo/bar
```

Likewise, to manage this application, keep referring to it as `foo/bar`:

```bash
# Create an application
argocd app create foo/bar ...
# Sync the application
argocd app sync foo/bar
# Delete the application
argocd app delete foo/bar
# Retrieve application's manifest
argocd app manifests foo/bar
```

As stated previously, for applications in Argo CD's control plane namespace, you can omit the namespace from the application name.

### Using the UI

Similar to the CLI, you can refer to the application in the UI as `foo/bar`.

For example, to create an application named `bar` in the namespace `foo` in the web UI, set the application name in the creation dialogue's _Application Name_ field to `foo/bar`. If the namespace is omitted, the control plane's namespace will be used.

### Using the REST API

If you are using the REST API, the namespace of the `Application` cannot be specified as part of the application name; it needs to be passed using the optional `appNamespace` query parameter. For example, to work with the `Application` resource named `foo` in the namespace `bar`, the request would look as follows:

```bash
GET /api/v1/applications/foo?appNamespace=bar
```

For other operations such as `POST` and `PUT`, the `appNamespace` parameter must be part of the request's payload.

For `Application` resources in the control plane namespace, this parameter can be omitted.
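As a hedged illustration of the GET request above (the server address and token variables below are placeholders for your own installation, not fixed names), the same call could be made with `curl`:

```bash
# Assumes ARGOCD_SERVER points at your Argo CD API server and ARGOCD_TOKEN
# holds a valid API token for it (both are placeholders).
curl -s -H "Authorization: Bearer ${ARGOCD_TOKEN}" \
  "https://${ARGOCD_SERVER}/api/v1/applications/foo?appNamespace=bar"
```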
# Application Sync using impersonation

!!! warning "Alpha Feature (Since 2.13.0)"
    This is an experimental, [alpha-quality](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#alpha) feature that allows you to control the service account used for the sync operation. The configured service account could have lesser privileges required for creating resources compared to the highly privileged access required for the control plane operations.

!!! warning
    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.

## Introduction

Argo CD supports syncing `Application` resources using a service account that is different from the one used for its control plane operations. This feature enables users to decouple the service account used for application sync from the service account used for control plane operations.

By default, application syncs in Argo CD have the same privileges as the Argo CD control plane. As a consequence, in a multi-tenant setup, the Argo CD control plane privileges need to match the tenant that needs the highest privileges. As an example, if an Argo CD instance has 10 Applications and only one of them requires admin privileges, then the Argo CD control plane must have admin privileges in order to be able to sync that one Application. This provides an opportunity for malicious tenants to gain admin level access. Argo CD provides a multi-tenancy model to restrict what each `Application` is authorized to do using `AppProjects`; however, it is not secure enough, and if Argo CD is compromised, attackers will easily gain `cluster-admin` access to the cluster.

Some manual steps will need to be performed by the Argo CD administrator in order to enable this feature, as it is disabled by default.

!!! note
    This feature is considered alpha as of now. Some of the implementation details may change over the course of time until it is promoted to a stable status. We will be happy if early adopters use this feature and provide us with bug reports and feedback.

### What is Impersonation

Impersonation is a Kubernetes feature, also available in the `kubectl` CLI client, that allows a user to act as another user through impersonation headers. For example, an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied.

Impersonation requests first authenticate as the requesting user, then switch to the impersonated user info.

## Prerequisites

In a multi-team/multi-tenant environment, a team/tenant is typically granted access to a target namespace to self-manage their Kubernetes resources in a declarative way. A typical tenant onboarding process looks like the following (a sketch of the corresponding manifests is shown after this list):

1. The platform admin creates a tenant namespace, and the service account to be used for creating the resources is also created in the same tenant namespace.
2. The platform admin creates one or more Role(s) to manage Kubernetes resources in the tenant namespace.
3. The platform admin creates one or more RoleBinding(s) to map the service account to the role(s) created in the previous steps.
4. The platform admin can choose to either use the [apps-in-any-namespace](./app-any-namespace.md) feature or provide tenants access to create applications in the Argo CD control plane namespace.
5. If the platform admin chooses the apps-in-any-namespace feature, tenants can self-service their Argo CD applications in their respective tenant namespaces, and no additional access needs to be provided for the control plane namespace.
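As a minimal sketch of steps 1 to 3 above (the namespace `tenant1-ns` and service account `guestbook-deployer` match the example names used later on this page; the resources and verbs granted are illustrative assumptions and should be adjusted to what the tenant actually deploys):

```yaml
# Step 1: tenant namespace and the service account used for syncing
apiVersion: v1
kind: Namespace
metadata:
  name: tenant1-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: guestbook-deployer
  namespace: tenant1-ns
---
# Step 2: a Role limited to what the tenant's manifests actually need (illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: guestbook-deployer
  namespace: tenant1-ns
rules:
- apiGroups: ["", "apps"]
  resources: ["services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Step 3: bind the Role to the sync service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: guestbook-deployer
  namespace: tenant1-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: guestbook-deployer
subjects:
- kind: ServiceAccount
  name: guestbook-deployer
  namespace: tenant1-ns
```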
## Implementation details

### Overview

In order for an application to use a different service account for the application sync operation, the following steps need to be performed:

1. The impersonation feature flag should be enabled. Please refer to the steps provided in [Enable application sync with impersonation feature](#enable-application-sync-with-impersonation-feature).

2. The `AppProject` referenced by the `.spec.project` field of the `Application` must have the `DestinationServiceAccounts` mapping the destination server and namespace to a service account to be used for the sync operation. Please refer to the steps provided in [Configuring destination service accounts](#configuring-destination-service-accounts).

### Enable application sync with impersonation feature

In order to enable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:

```yaml
data:
  application.sync.impersonation.enabled: "true"
```

### Disable application sync with impersonation feature

In order to disable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:

```yaml
data:
  application.sync.impersonation.enabled: "false"
```

!!! note
    This feature is disabled by default.

!!! note
    This feature can be enabled/disabled only at the system level and, once enabled/disabled, it applies to all Applications managed by Argo CD.

## Configuring destination service accounts

Destination service accounts can be added to the `AppProject` under `.spec.destinationServiceAccounts`. Specify the target destination `server` and `namespace` and provide the service account to be used for the sync operation using the `defaultServiceAccount` field. Applications that refer to this `AppProject` will use the corresponding service account configured for their destination.

During the application sync operation, the controller loops through the available `destinationServiceAccounts` in the mapped `AppProject` and tries to find a matching candidate. If there are multiple matches for a destination server and namespace combination, then the first valid match will be considered. If there are no matches, then an error is reported during the sync operation. In order to avoid such sync errors, it is highly recommended to configure a valid service account as a catch-all for all target destinations and to keep it at the lowest order of priority.

It is possible to specify a service account along with its namespace, e.g. `tenant1-ns:guestbook-deployer`. If no namespace is provided for the service account, then the Application's `spec.destination.namespace` will be used. If no namespace is provided for the service account and the optional `spec.destination.namespace` field is also not provided in the `Application`, then the Application's namespace will be used.

`DestinationServiceAccounts` associated with an `AppProject` can be created and managed either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, the REST API, etc).

### Using declarative YAML

For declaratively configuring destination service accounts, create a YAML file for the `AppProject` as below and apply the changes using the `kubectl apply` command.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
spec:
  description: Example Project
  # Allow manifests to deploy from any Git repos
  sourceRepos:
  - '*'
  destinations:
  - server: '*'
    namespace: '*'
  destinationServiceAccounts:
  - server: https://kubernetes.default.svc
    namespace: guestbook
    defaultServiceAccount: guestbook-deployer
  - server: https://kubernetes.default.svc
    namespace: guestbook-dev
    defaultServiceAccount: guestbook-dev-deployer
  - server: https://kubernetes.default.svc
    namespace: guestbook-stage
    defaultServiceAccount: guestbook-stage-deployer
  - server: https://kubernetes.default.svc # catch-all configuration
    namespace: '*'
    defaultServiceAccount: default
```

### Using the CLI

Destination service accounts can be added to an `AppProject` using the Argo CD CLI.

For example, to add a destination service account for the `in-cluster` destination and the `guestbook` namespace, you can use the following CLI command:

```shell
argocd proj add-destination-service-account my-project https://kubernetes.default.svc guestbook guestbook-sa
```

Likewise, to remove the destination service account from an `AppProject`, you can use the following CLI command:

```shell
argocd proj remove-destination-service-account my-project https://kubernetes.default.svc guestbook
```

### Using the UI

Similar to the CLI, you can add a destination service account when creating or updating an `AppProject` from the UI.
argocd
# RBAC Configuration

The RBAC feature enables restrictions of access to Argo CD resources. Argo CD does not have its own user management system and has only one built-in user, `admin`. The `admin` user is a superuser and it has unrestricted access to the system. RBAC requires [SSO configuration](user-management/index.md) or [one or more local users setup](user-management/index.md). Once SSO or local users are configured, additional RBAC roles can be defined, and SSO groups or local users can then be mapped to roles.

There are two main components where RBAC configuration can be defined:

- The global RBAC config map (see [argocd-rbac-cm.yaml](argocd-rbac-cm-yaml.md))
- The [AppProject's roles](../user-guide/projects.md#project-roles)

## Basic Built-in Roles

Argo CD has two pre-defined roles, but RBAC configuration allows defining roles and groups (see below).

- `role:readonly`: read-only access to all resources
- `role:admin`: unrestricted access to all resources

These default built-in role definitions can be seen in [builtin-policy.csv](https://github.com/argoproj/argo-cd/blob/master/assets/builtin-policy.csv)

## Default Policy for Authenticated Users

When a user is authenticated in Argo CD, they will be granted the role specified in `policy.default`.

!!! warning "Restricting Default Permissions"

    **All authenticated users get _at least_ the permissions granted by the default policies. This access cannot be blocked by a `deny` rule.**

    It is recommended to create a new `role:authenticated` with the minimum set of permissions possible, then grant permissions to individual roles as needed.

## Anonymous Access

Enabling anonymous access to the Argo CD instance allows users to assume the default role permissions specified by `policy.default` **without being authenticated**.

Anonymous access to Argo CD can be enabled using the `users.anonymous.enabled` field in `argocd-cm` (see [argocd-cm.yaml](argocd-cm-yaml.md)).

!!! warning

    When enabling anonymous access, consider creating a new default role and assigning it to the default policies with `policy.default: role:unauthenticated`.

## RBAC Model Structure

The model syntax is based on [Casbin](https://casbin.org/docs/overview). There are two different types of syntax: one for assigning policies, and another one for assigning users to internal roles.

**Group**: Allows assigning authenticated users/groups to internal roles.

Syntax: `g, <user/group>, <role>`

- `<user/group>`: The entity to whom the role will be assigned. It can be a local user or a user authenticated with SSO. When SSO is used, the `user` will be based on the `sub` claim, while the group is one of the values returned by the `scopes` configuration.
- `<role>`: The internal role to which the entity will be assigned.

**Policy**: Allows assigning permissions to an entity.

Syntax: `p, <role/user/group>, <resource>, <action>, <object>, <effect>`

- `<role/user/group>`: The entity to whom the policy will be assigned
- `<resource>`: The type of resource on which the action is performed.
- `<action>`: The operation that is being performed on the resource.
- `<object>`: The object identifier representing the resource on which the action is performed. Depending on the resource, the object's format will vary.
- `<effect>`: Whether this policy should grant or restrict the operation on the target object. One of `allow` or `deny`.

Below is a table that summarizes all possible resources and which actions are valid for each of them.
| Resource\Action     | get | create | update | delete | sync | action | override | invoke |
| :------------------ | :-: | :----: | :----: | :----: | :--: | :----: | :------: | :----: |
| **applications**    | ✅  |   ✅   |   ✅   |   ✅   |  ✅  |   ✅   |    ✅    |   ❌   |
| **applicationsets** | ✅  |   ✅   |   ✅   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **clusters**        | ✅  |   ✅   |   ✅   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **projects**        | ✅  |   ✅   |   ✅   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **repositories**    | ✅  |   ✅   |   ✅   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **accounts**        | ✅  |   ❌   |   ✅   |   ❌   |  ❌  |   ❌   |    ❌    |   ❌   |
| **certificates**    | ✅  |   ✅   |   ❌   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **gpgkeys**         | ✅  |   ✅   |   ❌   |   ✅   |  ❌  |   ❌   |    ❌    |   ❌   |
| **logs**            | ✅  |   ❌   |   ❌   |   ❌   |  ❌  |   ❌   |    ❌    |   ❌   |
| **exec**            | ❌  |   ✅   |   ❌   |   ❌   |  ❌  |   ❌   |    ❌    |   ❌   |
| **extensions**      | ❌  |   ❌   |   ❌   |   ❌   |  ❌  |   ❌   |    ❌    |   ✅   |

### Application-Specific Policy

Some policies only have meaning within an application. This is the case for the following resources:

- `applications`
- `applicationsets`
- `logs`
- `exec`

While they can be set in the global configuration, they can also be configured in [AppProject's roles](../user-guide/projects.md#project-roles).

The expected `<object>` value in the policy structure is replaced by `<app-project>/<app-name>`.

For instance, these policies would grant `example-user` access to get any application, but only to see logs of the `my-app` application that is part of the `example-project` project.

```csv
p, example-user, applications, get, *, allow
p, example-user, logs, get, example-project/my-app, allow
```

#### Application in Any Namespaces

When [application in any namespace](app-any-namespace.md) is enabled, the expected `<object>` value in the policy structure is replaced by `<app-project>/<app-ns>/<app-name>`. Since multiple applications could have the same name in the same project, the policy below makes sure to restrict access only to `app-namespace`.

```csv
p, example-user, applications, get, */app-namespace/*, allow
p, example-user, logs, get, example-project/app-namespace/my-app, allow
```

### The `applications` resource

The `applications` resource is an [Application-Specific Policy](#application-specific-policy).

#### Fine-grained Permissions for `update`/`delete` action

The `update` and `delete` actions, when granted on an application, will allow the user to perform the operation on the application itself **and** all of its resources.

It can be desirable to only allow `update` or `delete` on specific resources within an application. To do so, when the action is performed on an application's resource, the `<action>` will have the `<action>/<group>/<kind>/<ns>/<name>` format.

For instance, to grant access to `example-user` to only delete Pods in the `prod-app` Application, the policy could be:

```csv
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
```

!!!warning "Understand glob pattern behavior"
    Argo CD RBAC does not use `/` as a separator when evaluating glob patterns. So the pattern `delete/*/kind/*` will match `delete/<group>/kind/<namespace>/<name>` but also `delete/<group>/<kind>/kind/<name>`. The fact that both of these match will generally not be a problem, because resource kinds generally contain capital letters, and namespaces cannot contain capital letters. However, it is possible for a resource kind to be lowercase. So it is better to just always include all the parts of the resource in the pattern (in other words, always use four slashes).
If we want to grant access to the user to update all resources of an application, but not the application itself:

```csv
p, example-user, applications, update/*, default/prod-app, allow
```

If we want to explicitly deny delete of the application, but allow the user to delete Pods:

```csv
p, example-user, applications, delete, default/prod-app, deny
p, example-user, applications, delete/*/Pod/*/*, default/prod-app, allow
```

!!! note
    It is not possible to deny fine-grained permissions for a sub-resource if the action was **explicitly allowed on the application**. For instance, the following policies will **allow** a user to delete the Pod and any other resources in the application:

    ```csv
    p, example-user, applications, delete, default/prod-app, allow
    p, example-user, applications, delete/*/Pod/*/*, default/prod-app, deny
    ```

#### The `action` action

The `action` action corresponds to either built-in resource customizations defined [in the Argo CD repository](https://github.com/argoproj/argo-cd/tree/master/resource_customizations), or to [custom resource actions](resource_actions.md#custom-resource-actions) defined by you. See the [resource actions documentation](resource_actions.md#built-in-actions) for a list of built-in actions.

The `<action>` has the `action/<group>/<kind>/<action-name>` format. For example, a resource customization path `resource_customizations/extensions/DaemonSet/actions/restart/action.lua` corresponds to the `action` path `action/extensions/DaemonSet/restart`. If the resource is not under a group (for example, Pods or ConfigMaps), then the path will be `action//Pod/action-name`.

The following policies allow the user to perform any action on the DaemonSet resources, as well as the `maintenance-off` action on a Pod:

```csv
p, example-user, applications, action//Pod/maintenance-off, default/*, allow
p, example-user, applications, action/extensions/DaemonSet/*, default/*, allow
```

To allow the user to perform any action:

```csv
p, example-user, applications, action/*, default/*, allow
```

#### The `override` action

When granted along with the `sync` action, the override action will allow a user to synchronize local manifests to the Application. These manifests will be used instead of the configured source, until the next sync is performed.

### The `applicationsets` resource

The `applicationsets` resource is an [Application-Specific policy](#application-specific-policy).

[ApplicationSets](applicationset/index.md) provide a declarative way to automatically create/update/delete Applications.

Allowing the `create` action on the resource effectively grants the ability to create Applications. While it doesn't allow the user to create Applications directly, they can create Applications via an ApplicationSet.

!!! note
    In v2.5, it is not possible to create an ApplicationSet with a templated Project field (e.g. `project: `) via the API (or, by extension, the CLI). Disallowing templated projects makes project restrictions via RBAC safe:

With the resource being application-specific, the `<object>` of the applicationsets policy will have the format `<app-project>/<app-name>`. However, since an ApplicationSet does not belong to any project, the `<app-project>` value represents the projects in which the ApplicationSet will be able to create Applications.

With the following policy, a `dev-group` user will be unable to create an ApplicationSet capable of creating Applications outside the `dev-project` project.
```csv
p, dev-group, applicationsets, *, dev-project/*, allow
```

### The `logs` resource

The `logs` resource is an [Application-Specific Policy](#application-specific-policy).

When granted with the `get` action, this policy allows a user to see Pod's logs of an application via the Argo CD UI. The functionality is similar to `kubectl logs`.

### The `exec` resource

The `exec` resource is an [Application-Specific Policy](#application-specific-policy).

When granted with the `create` action, this policy allows a user to `exec` into Pods of an application via the Argo CD UI. The functionality is similar to `kubectl exec`. See [Web-based Terminal](web_based_terminal.md) for more info.

### The `extensions` resource

With the `extensions` resource, it is possible to configure permissions to invoke [proxy extensions](../developer-guide/extensions/proxy-extensions.md). The `extensions` RBAC validation works in conjunction with the `applications` resource. A user **needs to have read permission on the application** where the request is originated from.

Consider the example below: it will allow the `example-user` to invoke the `httpbin` extension in all applications under the `default` project.

```csv
p, example-user, applications, get, default/*, allow
p, example-user, extensions, invoke, httpbin, allow
```

### The `deny` effect

When `deny` is used as an effect in a policy, it will be effective if the policy matches. Even if more specific policies with the `allow` effect match as well, the `deny` will have priority. The order in which the policies appear in the policy file configuration has no impact, and the result is deterministic.

## Policies Evaluation and Matching

The evaluation of access is done in two parts: validating against the default policy configuration, then validating against the policies for the current user.

**If an action is allowed or denied by the default policies, then this effect will be effective without further evaluation**. When the effect is undefined, the evaluation will continue with subject-specific policies.

The access will be evaluated for the user, then for each configured group that the user is part of.

The matching engine, configured in `policy.matchMode`, can use two different match modes to compare the values of tokens:

- `glob`: based on the [`glob` package](https://pkg.go.dev/github.com/gobwas/glob).
- `regex`: based on the [`regexp` package](https://pkg.go.dev/regexp) (see the sketch after the glob example below).

When all tokens match during the evaluation, the effect will be returned. The evaluation will continue until all matching policies are evaluated, or until a policy with the `deny` effect matches. After all policies are evaluated, if there was at least one `allow` effect and no `deny`, access will be granted.

### Glob matching

When `glob` is used, the policy tokens are treated as single terms, without separators.

Consider the following policy:

```
p, example-user, applications, action/extensions/*, default/*, allow
```

When the `example-user` executes the `extensions/DaemonSet/test` action, the following `glob` matches will happen:

1. The current user `example-user` matches the token `example-user`.
2. The value `applications` matches the token `applications`.
3. The value `action/extensions/DaemonSet/test` matches `action/extensions/*`. Note that `/` is not treated as a separator and the use of `**` is not necessary.
4. The value `default/my-app` matches `default/*`.
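### Regex matching

When `policy.matchMode` is set to `regex`, each token is evaluated as a regular expression instead of a glob. The ConfigMap below is a minimal sketch (the policy line itself is illustrative, not a recommendation) showing how the match mode and a regex-based policy could be configured together:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  # Switch the matching engine from the default glob mode to regex.
  policy.matchMode: 'regex'
  policy.csv: |
    # Illustrative only: allow example-user any action on DaemonSets in projects matching "team-.*".
    p, example-user, applications, action/.*/DaemonSet/.*, team-.*/.*, allow
```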
## Using SSO Users/Groups

The `scopes` field controls which OIDC scopes to examine during RBAC enforcement (in addition to the `sub` scope). If omitted, it defaults to `'[groups]'`. The scope value can be a string, or a list of strings.

For more information on `scopes` please review the [User Management Documentation](user-management/index.md).

The following example shows targeting `email` as well as `groups` from your OIDC provider.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-rbac-cm
    app.kubernetes.io/part-of: argocd
data:
  policy.csv: |
    p, my-org:team-alpha, applications, sync, my-project/*, allow
    g, my-org:team-beta, role:admin
    g, [email protected], role:admin
  policy.default: role:readonly
  scopes: '[groups, email]'
```

This can be useful to associate users' emails and groups directly in an AppProject.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-beta-project
  namespace: argocd
spec:
  roles:
    - name: admin
      description: Admin privileges to team-beta
      policies:
        - p, proj:team-beta-project:admin, applications, *, *, allow
      groups:
        - [email protected] # Value from the email scope
        - my-org:team-beta # Value from the groups scope
```

## Local Users/Accounts

[Local users](user-management/index.md#local-usersaccounts) are assigned access by either grouping them with a role or by assigning policies directly to them.

The example below shows how to assign a policy directly to a local user.

```yaml
p, my-local-user, applications, sync, my-project/*, allow
```

This example shows how to assign a role to a local user.

```yaml
g, my-local-user, role:admin
```

!!! warning "Ambiguous Group Assignments"
    If you have [enabled SSO](user-management/index.md#sso), any SSO user with a scope that matches a local user will be added to the same roles as the local user. For example, if local user `sally` is assigned to `role:admin`, and if an SSO user has a scope which happens to be named `sally`, that SSO user will also be assigned to `role:admin`.

    An example of where this may be a problem is if your SSO provider is an SCM, and org members are automatically granted scopes named after the orgs. If a user can create or add themselves to an org in the SCM, they can gain the permissions of the local user with the same name.

    To avoid ambiguity, if you are using local users and SSO, it is recommended to assign policies directly to local users, and not to assign roles to local users. In other words, instead of using `g, my-local-user, role:admin`, you should explicitly assign policies to `my-local-user`:

    ```yaml
    p, my-local-user, *, *, *, allow
    ```

## Policy CSV Composition

It is possible to provide additional entries in the `argocd-rbac-cm` configmap to compose the final policy csv. In this case, the key must follow the pattern `policy.<any string>.csv`. Argo CD will concatenate all additional policies it finds with this pattern below the main one ('policy.csv'). The order of the additional policies is determined by the key string. Example: if two additional policies are provided with keys `policy.A.csv` and `policy.B.csv`, it will first concatenate `policy.A.csv` and then `policy.B.csv`.

This is useful to allow composing policies in config management tools like Kustomize, Helm, etc.

The example below shows how a Kustomize patch can be provided in an overlay to add additional configuration to an existing RBAC ConfigMap.
```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm namespace: argocd data: policy.tester-overlay.csv: | p, role:tester, applications, *, */*, allow p, role:tester, projects, *, *, allow g, my-org:team-qa, role:tester ``` ## Validating and testing your RBAC policies If you want to ensure that your RBAC policies are working as expected, you can use the [`argocd admin settings rbac` command](../user-guide/commands/argocd_admin_settings_rbac.md) to validate them. This tool allows you to test whether a certain role or subject can perform the requested action with a policy that's not live yet in the system, i.e. from a local file or config map. Additionally, it can be used against the live RBAC configuration in the cluster your Argo CD is running in. ### Validating a policy To check whether your new policy configuration is valid and understood by Argo CD's RBAC implementation, you can use the [`argocd admin settings rbac validate` command](../user-guide/commands/argocd_admin_settings_rbac_validate.md). ### Testing a policy To test whether a role or subject (group or local user) has sufficient permissions to execute certain actions on certain resources, you can use the [`argocd admin settings rbac can` command](../user-guide/commands/argocd_admin_settings_rbac_can.md).
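Both commands accept a policy that is not yet live. As a sketch, assuming a draft policy saved locally as `policy.csv` (the file name and the `role:tester` subject are hypothetical), the two commands could be used together like this:

```bash
# Check that the draft policy file is well-formed.
argocd admin settings rbac validate --policy-file policy.csv

# Check whether role:tester would be allowed to sync applications in the default project,
# evaluating against the local policy file rather than the live configuration.
argocd admin settings rbac can role:tester sync applications 'default/*' --policy-file policy.csv
```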
# Custom Styles

Argo CD imports the majority of its UI stylesheets from the [argo-ui](https://github.com/argoproj/argo-ui) project. Sometimes, it may be desired to customize certain components of the UI for branding purposes or to help distinguish between multiple instances of Argo CD running in different environments.

Such custom styling can be applied either by supplying a URL to a remotely hosted CSS file, or by loading a CSS file directly onto the argocd-server container. Both mechanisms are driven by modifying the argocd-cm configMap.

## Adding Styles Via Remote URL

The first method simply requires the addition of the remote URL to the argocd-cm configMap:

### argocd-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-cm
data:
  ui.cssurl: "https://www.example.com/my-styles.css"
```

## Adding Styles Via Volume Mounts

The second method requires mounting the CSS file directly onto the argocd-server container and then providing the argocd-cm with the properly configured path to that file. In the following example, the CSS file is actually defined inside of a separate configMap (the same effect could be achieved by generating or downloading a CSS file in an initContainer; a sketch of that approach appears at the end of this page):

### argocd-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-cm
data:
  ui.cssurl: "./custom/my-styles.css"
```

Note that the `cssurl` should be specified relative to the "/shared/app" directory, not as an absolute path.

### argocd-styles-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-styles-cm
data:
  my-styles.css: |
    .sidebar {
      background: linear-gradient(to bottom, #999, #777, #333, #222, #111);
    }
```

### argocd-server
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  ...
spec:
  template:
    ...
    spec:
      containers:
      - command:
        ...
        volumeMounts:
        ...
        - mountPath: /shared/app/custom
          name: styles
      ...
      volumes:
      ...
      - configMap:
          name: argocd-styles-cm
        name: styles
```

Note that the CSS file should be mounted within a subdirectory of the "/shared/app" directory (e.g. "/shared/app/custom"). Otherwise, the file will likely fail to be imported by the browser with an "incorrect MIME type" error. The subdirectory can be changed using the `server.staticassets` key of the [argocd-cmd-params-cm.yaml](./argocd-cmd-params-cm.yaml) ConfigMap.

## Developing Style Overlays

The styles specified in the injected CSS file should be specific to components and classes defined in [argo-ui](https://github.com/argoproj/argo-ui). It is recommended to test out the styles you wish to apply first by making use of your browser's built-in developer tools. For a more full-featured experience, you may wish to build a separate project using the [Argo CD UI dev server](https://webpack.js.org/configuration/dev-server/).

## Banners

Argo CD can optionally display a banner that can be used to notify your users of upcoming maintenance and operational changes. This feature can be enabled by specifying the banner message using the `ui.bannercontent` field in the `argocd-cm` ConfigMap and Argo CD will display this message at the top of every UI page. You can optionally add a link to this message by setting `ui.bannerurl`. You can also make the banner sticky (permanent) by setting `ui.bannerpermanent` to true and change its position to "both" or "bottom" by using `ui.bannerposition: "both"`, allowing the banner to display on both the top and bottom, or `ui.bannerposition: "bottom"` to display it exclusively at the bottom.
### argocd-cm ```yaml --- apiVersion: v1 kind: ConfigMap metadata: ... name: argocd-cm data: ui.bannercontent: "Banner message linked to a URL" ui.bannerurl: "www.bannerlink.com" ui.bannerpermanent: "true" ui.bannerposition: "bottom" ``` ![banner with link](../assets/banner.png)
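As mentioned in the "Adding Styles Via Volume Mounts" section above, the stylesheet can also be produced by an initContainer instead of a separate configMap. The following is a rough sketch of that approach; the image, URL, and file name are placeholders, and the manifest is abbreviated to the relevant fields only.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      initContainers:
        # Hypothetical init container that downloads the stylesheet at startup.
        - name: fetch-styles
          image: curlimages/curl:latest
          command: ["curl", "-Lo", "/custom/my-styles.css", "https://www.example.com/my-styles.css"]
          volumeMounts:
            - mountPath: /custom
              name: styles
      containers:
        - name: argocd-server
          volumeMounts:
            # Must stay a subdirectory of /shared/app, matching ui.cssurl: "./custom/my-styles.css".
            - mountPath: /shared/app/custom
              name: styles
      volumes:
        - name: styles
          emptyDir: {}
```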
# Cluster Bootstrapping

This guide is for operators who have already installed Argo CD, and have a new cluster and are looking to install many apps in that cluster.

There's no one particular pattern to solve this problem, e.g. you could write a script to create your apps, or you could even manually create them. However, users of Argo CD tend to use the **app of apps pattern**.

!!!warning "App of Apps is an admin-only tool"
    The ability to create Applications in arbitrary [Projects](./declarative-setup.md#projects) is an admin-level capability. Only admins should have push access to the parent Application's source repository. Admins should review pull requests to that repository, paying particular attention to the `project` field in each Application. Projects with access to the namespace in which Argo CD is installed effectively have admin-level privileges.

## App Of Apps Pattern

[Declaratively](declarative-setup.md) specify one Argo CD app that consists only of other apps.

![Application of Applications](../assets/application-of-applications.png)

### Helm Example

This example shows how to use Helm to achieve this. You can, of course, use another tool if you like.

A typical layout of your Git repository for this might be:

```
├── Chart.yaml
├── templates
│   ├── guestbook.yaml
│   ├── helm-dependency.yaml
│   ├── helm-guestbook.yaml
│   └── kustomize-guestbook.yaml
└── values.yaml
```

`Chart.yaml` is boiler-plate. `templates` contains one file for each child app, roughly:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: argocd
    server:
  project: default
  source:
    path: guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
```

The sync policy is set to automated + prune, so that child apps are automatically created, synced, and deleted when the manifest is changed, but you may wish to disable this (a sketch of such a sync policy follows this example). I've also added the finalizer, which will ensure that your apps are deleted correctly.

Fix the revision to a specific Git commit SHA to make sure that, even if the child apps repo changes, the app will only change when the parent app changes that revision. Alternatively, you can set it to HEAD or a branch name.

As you probably want to override the cluster server, this is a templated value.

`values.yaml` contains the default values:

```yaml
spec:
  destination:
    server: https://kubernetes.default.svc
```

Next, you need to create and sync your parent app, e.g. via the CLI:

```bash
argocd app create apps \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/argoproj/argocd-example-apps.git \
    --path apps
argocd app sync apps
```

The parent app will appear as in-sync but the child apps will be out of sync:

![New App Of Apps](../assets/new-app-of-apps.png)

> NOTE: You may want to modify this behavior to bootstrap your cluster in waves; see [v1.8 upgrade notes](upgrading/1.7-1.8.md) for information on changing this.

You can either sync via the UI, first filtering by the correct label:

![Filter Apps](../assets/filter-apps.png)

Then select the "out of sync" apps and sync:

![Sync Apps](../assets/sync-apps.png)

Or, via the CLI:

```bash
argocd app sync -l app.kubernetes.io/instance=apps
```

View [the example on GitHub](https://github.com/argoproj/argocd-example-apps/tree/master/apps).
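The child `Application` template above omits the sync policy for brevity. A minimal sketch of the automated + prune policy described earlier, added under `spec` of each child app, could look like this (`selfHeal` is optional and shown only for completeness):

```yaml
spec:
  syncPolicy:
    automated:
      # Delete child resources that are no longer defined in Git.
      prune: true
      # Optional: revert manual changes made directly in the cluster.
      selfHeal: true
```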
### Cascading deletion

If you want to ensure that child-apps and all of their resources are deleted when the parent-app is deleted, make sure to add the appropriate [finalizer](../user-guide/app_deletion.md#about-the-deletion-finalizer) to your `Application` definition:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  ...
```

### Ignoring differences in child applications

To allow changes in child apps without triggering an out-of-sync status, or to allow modifications for debugging, the app of apps pattern works with [diff customization](../user-guide/diffing/). The example below shows how to ignore changes to `syncPolicy` and other common values.

```yaml
spec:
  ...
  syncPolicy:
    ...
    syncOptions:
    - RespectIgnoreDifferences=true
    ...
  ignoreDifferences:
  - group: "*"
    kind: "Application"
    namespace: "*"
    jsonPointers:
    # Allow manually disabling auto sync for apps, useful for debugging.
    - /spec/syncPolicy/automated
    # These are automatically updated on a regular basis. Not ignoring last applied configuration since it's used for computing diffs after normalization.
    - /metadata/annotations/argocd.argoproj.io~1refresh
    - /operation
  ...
```
# Installation

Argo CD has two types of installations: multi-tenant and core.

## Multi-Tenant

The multi-tenant installation is the most common way to install Argo CD. This type of installation is typically used to service multiple application developer teams in the organization and is maintained by a platform team.

The end-users can access Argo CD via the API server using the Web UI or `argocd` CLI. The `argocd` CLI has to be configured using the `argocd login <server-host>` command (learn more [here](../user-guide/commands/argocd_login.md)).

Two types of installation manifests are provided:

### Non High Availability:

Not recommended for production use. This type of installation is typically used during the evaluation period for demonstrations and testing.

* [install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) - Standard Argo CD installation with cluster-admin access. Use this manifest set if you plan to use Argo CD to deploy applications in the same cluster that Argo CD runs in (i.e. kubernetes.default.svc). It will still be able to deploy to external clusters with inputted credentials.

> Note: The ClusterRoleBinding in the installation manifest is bound to a ServiceAccount in the argocd namespace.
> Be cautious when modifying the namespace, as changing it may cause permission-related errors unless the ClusterRoleBinding is correctly adjusted to reflect the new namespace.

* [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) - Installation of Argo CD which requires only namespace level privileges (does not need cluster roles). Use this manifest set if you do not need Argo CD to deploy applications in the same cluster that Argo CD runs in, and will rely solely on inputted cluster credentials. An example of using this set of manifests is if you run several Argo CD instances for different teams, where each instance will be deploying applications to external clusters. It will still be possible to deploy to the same cluster (kubernetes.default.svc) with inputted credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster --namespace <YOUR NAMESPACE>`).

> Note: Argo CD CRDs are not included in [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) and have to be installed separately. The CRD manifests are located in the [manifests/crds](https://github.com/argoproj/argo-cd/blob/master/manifests/crds) directory.
> Use the following command to install them:
> ```
> kubectl apply -k https://github.com/argoproj/argo-cd/manifests/crds\?ref\=stable
> ```

### High Availability:

The High Availability installation is recommended for production use. This bundle includes the same components but tuned for high availability and resiliency.

* [ha/install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/install.yaml) - the same as install.yaml but with multiple replicas for supported components.
* [ha/namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/namespace-install.yaml) - the same as namespace-install.yaml but with multiple replicas for supported components.

## Core

The Argo CD Core installation is primarily used to deploy Argo CD in headless mode. This type of installation is most suitable for cluster administrators who independently use Argo CD and don't need multi-tenancy features. This installation includes fewer components and is easier to set up.
## Core

The Argo CD Core installation is primarily used to deploy Argo CD in headless mode. This type of installation is most suitable for cluster administrators who independently use Argo CD and don't need multi-tenancy features. This installation includes fewer components and is easier to set up. The bundle does not include the API server or UI, and installs the lightweight (non-HA) version of each component.

Installation manifest is available at [core-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/core-install.yaml).

For more details about Argo CD Core please refer to the [official documentation](./core.md).

## Kustomize

The Argo CD manifests can also be installed using Kustomize. It is recommended to include the manifest as a remote resource and apply additional customizations using Kustomize patches.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
```

For an example of this, see the [kustomization.yaml](https://github.com/argoproj/argoproj-deployments/blob/master/argocd/kustomization.yaml) used to deploy the [Argoproj CI/CD infrastructure](https://github.com/argoproj/argoproj-deployments#argoproj-deployments).

#### Installing Argo CD in a Custom Namespace

If you want to install Argo CD in a namespace other than the default argocd, you can use Kustomize to apply a patch that updates the ClusterRoleBinding to reference the correct namespace for the ServiceAccount. This ensures that the necessary permissions are correctly set in your custom namespace.

Below is an example of how to configure your kustomization.yaml to install Argo CD in a custom namespace:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <your-custom-namespace>
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
patches:
- patch: |-
    - op: replace
      path: /subjects/0/namespace
      value: <your-custom-namespace>
  target:
    kind: ClusterRoleBinding
```

This patch ensures that the ClusterRoleBinding correctly maps to the ServiceAccount in your custom namespace, preventing any permission-related issues during the deployment.

## Helm

Argo CD can be installed using [Helm](https://helm.sh/). The Helm chart is currently community maintained and available at [argo-helm/charts/argo-cd](https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd).

## Supported versions

For detailed information regarding Argo CD's version support policy, please refer to the [Release Process and Cadence documentation](https://argo-cd.readthedocs.io/en/stable/developer-guide/release-process-and-cadence/).

## Tested versions

The following table shows the versions of Kubernetes that are tested with each version of Argo CD.

{!docs/operator-manual/tested-kubernetes-versions.md!}
# Ingress Configuration

Argo CD API server runs both a gRPC server (used by the CLI) and an HTTP/HTTPS server (used by the UI).
Both protocols are exposed by the argocd-server service object on the following ports:

* 443 - gRPC/HTTPS
* 80 - HTTP (redirects to HTTPS)

There are several ways Ingress can be configured.

## [Ambassador](https://www.getambassador.io/)

The Ambassador Edge Stack can be used as a Kubernetes ingress controller with [automatic TLS termination](https://www.getambassador.io/docs/latest/topics/running/tls/#host) and routing capabilities for both the CLI and the UI.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md). Given that the `argocd` CLI includes the port number in the request `host` header, two Mappings are required.

Note: Disabling TLS is not required if you are using grpc-web.

### Option 1: Mapping CRD for Host-based Routing

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-ui
  namespace: argocd
spec:
  host: argocd.example.com
  prefix: /
  service: https://argocd-server:443
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-cli
  namespace: argocd
spec:
  # NOTE: the port must be ignored if you have strip_matching_host_port enabled on envoy
  host: argocd.example.com:443
  prefix: /
  service: argocd-server:80
  regex_headers:
    Content-Type: "^application/grpc.*$"
  grpc: true
```

Login with the `argocd` CLI:

```shell
argocd login <host>
```

### Option 2: Mapping CRD for Path-based Routing

The API server must be configured to be available under a non-root path (e.g. `/argo-cd`). Edit the `argocd-server` deployment to add the `--rootpath=/argo-cd` flag to the argocd-server command.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server
  namespace: argocd
spec:
  prefix: /argo-cd
  rewrite: /argo-cd
  service: https://argocd-server:443
```

Example of `argocd-cmd-params-cm` configmap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  ## Server properties
  # Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from / (default "/")
  server.basehref: "/argo-cd"
  # Used if Argo CD is running behind reverse proxy under subpath different from /
  server.rootpath: "/argo-cd"
```

Login with the `argocd` CLI using the extra `--grpc-web-root-path` flag for non-root paths.

```shell
argocd login <host>:<port> --grpc-web-root-path /argo-cd
```
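If you take the ConfigMap route for disabling TLS, one possible way to apply it (a sketch, not the only option; the declarative `argocd-cmd-params-cm` approach linked above works just as well) is an imperative patch followed by a restart of the API server so the new parameter is picked up:

```shell
# Sketch: set server.insecure via the argocd-cmd-params-cm ConfigMap,
# then restart argocd-server so the change takes effect.
kubectl -n argocd patch configmap argocd-cmd-params-cm \
  --type merge -p '{"data":{"server.insecure":"true"}}'
kubectl -n argocd rollout restart deployment argocd-server
```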
## [Contour](https://projectcontour.io/)

The Contour ingress controller can terminate TLS ingress traffic at the edge.

The Argo CD API server should be run with TLS disabled. Edit the `argocd-server` Deployment to add the `--insecure` flag to the argocd-server container command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

It is also possible to provide an internal-only ingress path and an external-only ingress path by deploying two instances of Contour: one behind a private-subnet LoadBalancer service and one behind a public-subnet LoadBalancer service. The private Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-internal` and the public Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-external`. This provides the opportunity to deploy the Argo CD UI privately, but still allow for SSO callbacks to succeed.

### Private Argo CD UI with Multiple Ingress Objects and BYO Certificate

Since Contour Ingress supports only a single protocol per Ingress object, define three Ingress objects: one for private HTTP/HTTPS, one for private gRPC, and one for public HTTPS SSO callbacks.

Internal HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http
  annotations:
    kubernetes.io/ingress.class: contour-internal
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - internal.path.to.argocd.io
    secretName: your-certificate-name
```

Internal gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc
  annotations:
    kubernetes.io/ingress.class: contour-internal
spec:
  rules:
  - host: grpc-internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - grpc-internal.path.to.argocd.io
    secretName: your-certificate-name
```

External HTTPS SSO Callback Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-external-callback-http
  annotations:
    kubernetes.io/ingress.class: contour-external
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: external.path.to.argocd.io
    http:
      paths:
      - path: /api/dex/callback
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - external.path.to.argocd.io
    secretName: your-certificate-name
```

The argocd-server Service needs to be annotated with `projectcontour.io/upstream-protocol.h2c: "https,443"` to wire up the gRPC protocol proxying.

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).
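One way to add that annotation imperatively (a sketch; managing the Service declaratively is usually preferable so the change survives re-applying the manifests) is:

```shell
# Sketch: annotate the argocd-server Service so Contour proxies gRPC over h2c.
kubectl -n argocd annotate service argocd-server \
  projectcontour.io/upstream-protocol.h2c="https,443" --overwrite
```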
Contour HTTPProxy CRD:

Using a Contour HTTPProxy CRD allows you to use the same hostname for the gRPC and REST API.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: contour
  virtualhost:
    fqdn: path.to.argocd.io
    tls:
      secretName: wildcard-tls
  routes:
    - conditions:
        - prefix: /
        - header:
            name: Content-Type
            contains: application/grpc
      services:
        - name: argocd-server
          port: 80
          protocol: h2c # allows for unencrypted http2 connections
      timeoutPolicy:
        response: 1h
        idle: 600s
        idleConnection: 600s
    - conditions:
        - prefix: /
      services:
        - name: argocd-server
          port: 80
```

## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)

### Option 1: SSL-Passthrough

Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443). This presents a challenge when attempting to define a single nginx ingress object and rule for the argocd-service, since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol) accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).

In order to expose the Argo CD API server with a single ingress rule and hostname, the `nginx.ingress.kubernetes.io/ssl-passthrough` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough) must be used to pass TLS connections through and terminate TLS at the Argo CD API server.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
```

The above rule terminates TLS at the Argo CD API server, which detects the protocol being used and responds appropriately. Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to `nginx-ingress-controller`.

#### SSL-Passthrough with cert-manager and Let's Encrypt

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-server-tls # as expected by argocd-server
```
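Because a missing `--enable-ssl-passthrough` flag is a common reason this setup silently falls back to HTTP routing, it can be worth confirming the flag is present. The sketch below assumes the standard ingress-nginx install (deployment `ingress-nginx-controller` in the `ingress-nginx` namespace); adjust the names for your environment.

```shell
# Sketch: check whether the controller was started with --enable-ssl-passthrough.
kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep ssl-passthrough
```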
### Option 2: SSL Termination at Ingress Controller

An alternative approach is to perform the SSL termination at the Ingress. Since an `ingress-nginx` Ingress supports only a single protocol per Ingress object, two Ingress objects need to be defined using the `nginx.ingress.kubernetes.io/backend-protocol` annotation, one for HTTP/HTTPS and the other for gRPC.

Each ingress will be for a different domain (`argocd.example.com` and `grpc.argocd.example.com`). This requires that the Ingress resources use different TLS `secretName`s to avoid unexpected behavior.

HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
    host: argocd.example.com
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-ingress-http
```

gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
    host: grpc.argocd.example.com
  tls:
  - hosts:
    - grpc.argocd.example.com
    secretName: argocd-ingress-grpc
```

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

The obvious disadvantage to this approach is that it requires two separate hostnames for the API server -- one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to happen at the ingress controller.

## [Traefik (v3.0)](https://docs.traefik.io/)

Traefik can be used as an edge router and provide [TLS](https://docs.traefik.io/user-guides/grpc/) termination within the same deployment.

It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections _on the same port_ meaning you do not require multiple hosts or paths.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command or set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### IngressRoute CRD

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```
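After applying the IngressRoute, a quick sanity check (a sketch, assuming the Traefik CRDs are installed and `argocd.example.com` resolves to your Traefik entry point) is to confirm the route exists and then log in with the CLI over the shared host:

```shell
# Sketch: confirm the IngressRoute was created, then log in through it.
kubectl -n argocd get ingressroutes.traefik.io argocd-server
argocd login argocd.example.com
```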
## AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)

AWS ALBs can be used as an L7 Load Balancer for both UI and gRPC traffic, whereas Classic ELBs and NLBs can be used as L4 Load Balancers for both.

When using an ALB, you'll want to create a second service for argocd-server. This is necessary because we need to tell the ALB to send the gRPC traffic to a different target group than the UI traffic, since the backend protocol is HTTP2 instead of HTTP1.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC # This tells AWS to send traffic from the ALB using GRPC. Plain HTTP2 can be used, but the health checks won't be available because Argo currently downgrades non-gRPC calls to HTTP1.
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```

Once we create this service, we can configure the Ingress to conditionally route all `application/grpc` traffic to the new HTTP2 backend, using the `alb.ingress.kubernetes.io/conditions` annotation, as seen below. Note: The value after the `.` in the condition annotation _must_ be the same name as the service that you want traffic to route to - and will be applied on any path with a matching serviceName.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.argoproj.io
    http:
      paths:
      - path: /
        backend:
          service:
            name: argogrpc # The grpc service must be placed before the argocd-server for the listening rules to be created in the correct order
            port:
              number: 443
        pathType: Prefix
      - path: /
        backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: Prefix
  tls:
  - hosts:
    - argocd.argoproj.io
```

## [Istio](https://www.istio.io)

You can put Argo CD behind Istio using the following configuration. Here we will achieve both serving Argo CD behind Istio and serving it under a subpath on Istio.

First, we need to make sure that we can run Argo CD with a subpath (i.e. /argocd). For this we have used the install.yaml from the argocd project as is:

```bash
curl -kLs -o install.yaml https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Save the following file as kustomization.yml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ./install.yaml

patches:
- path: ./patch.yml
```

And the following lines as patch.yml:

```yaml
# Use --insecure so Ingress can send traffic with HTTP
# --basehref /argocd is the subpath like https://IP/argocd
# env was added because of https://github.com/argoproj/argo-cd/issues/3572 error
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
      - args:
        - /usr/local/bin/argocd-server
        - --staticassets
        - /shared/app
        - --redis
        - argocd-redis:6379
        - --insecure
        - --basehref
        - /argocd
        - --rootpath
        - /argocd
        name: argocd-server
        env:
        - name: ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT
          value: "0"
```

After that, install Argo CD (there should be only the 3 files defined above in the current directory):

```bash
kubectl apply -k ./ -n argocd --wait=true
```

Be sure you create the secret for Istio (in our case, the secret name is argocd-server-tls in the argocd namespace).
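A minimal sketch of creating that secret, assuming you already have a certificate and key on disk (the file names below are placeholders for your own files):

```shell
# Sketch: create the TLS secret referenced by credentialName in the Gateway below.
# cert.crt and cert.key are placeholders for your certificate and key files.
kubectl -n argocd create secret tls argocd-server-tls \
  --cert=cert.crt --key=cert.key
```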
After that, we create the Istio resources:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: argocd-server-tls
      maxProtocolVersion: TLSV1_3
      minProtocolVersion: TLSV1_2
      mode: SIMPLE
      cipherSuites:
        - ECDHE-ECDSA-AES128-GCM-SHA256
        - ECDHE-RSA-AES128-GCM-SHA256
        - ECDHE-ECDSA-AES128-SHA
        - AES128-GCM-SHA256
        - AES128-SHA
        - ECDHE-ECDSA-AES256-GCM-SHA384
        - ECDHE-RSA-AES256-GCM-SHA384
        - ECDHE-ECDSA-AES256-SHA
        - AES256-GCM-SHA384
        - AES256-SHA
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd-virtualservice
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd-gateway
  http:
  - match:
    - uri:
        prefix: /argocd
    route:
    - destination:
        host: argocd-server
        port:
          number: 80
```

And now we can browse `http://<IP>/argocd` (it will be rewritten to `https://<IP>/argocd`).

## Google Cloud load balancers with Kubernetes Ingress

You can make use of the integration of GKE with Google Cloud to deploy Load Balancers using just Kubernetes objects.

For this we will need these five objects:

- A Service
- A BackendConfig
- A FrontendConfig
- A secret with your SSL certificate
- An Ingress for GKE

If you need details on all the options available for these Google integrations, you can check the [Google docs on configuring Ingress features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features).

### Disable internal TLS

First, to avoid internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled. Add the `--insecure` flag to the `argocd-server` command of the argocd-server deployment, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### Creating a service

Now you need an externally accessible service. This is practically the same as the internal service Argo CD has, but with Google Cloud annotations. Note that this service is annotated to use a [Network Endpoint Group](https://cloud.google.com/load-balancing/docs/negs) (NEG) to allow your load balancer to send traffic directly to your pods without using kube-proxy, so remove the `neg` annotation if that's not what you want.

The service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"argocd-backend-config"}}'
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
```

### Creating a BackendConfig

See that previous service referencing a backend config called `argocd-backend-config`? So let's deploy it using this yaml:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: argocd-backend-config
  namespace: argocd
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
    port: 8080
```

It uses the same health check as the pods.

### Creating a FrontendConfig

Now we can deploy a frontend config with an HTTP to HTTPS redirect:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: argocd-frontend-config
  namespace: argocd
spec:
  redirectToHttps:
    enabled: true
```
---
!!! note
    The next two steps (the certificate secret and the Ingress) are described supposing that you manage the certificate yourself, and you have the certificate and key files for it. In the case that your certificate is Google-managed, fix the next two steps using the [guide to use a Google-managed SSL certificate](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate).
---

### Creating a certificate secret

We now need to create a secret with the SSL certificate we want in our load balancer. It's as easy as executing this command from the directory where your certificate and key files are stored:

```
kubectl -n argocd create secret tls secret-yourdomain-com \
  --cert cert-file.crt --key key-file.key
```

### Creating an Ingress

And finally, to top it all, our Ingress. Note the reference to our frontend config, the service, and to the certificate secret.

---
!!! note
    For GKE clusters running versions earlier than `1.21.3-gke.1600`, [the only supported value for the pathType field](https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#creating_an_ingress) is `ImplementationSpecific`. So you must check your GKE cluster's version. You need to use different YAML depending on the version.
---

If you use a version earlier than `1.21.3-gke.1600`, you should use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/*" # "*" is needed. Without this, the UI Javascript and CSS will not load properly
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

If you use the version `1.21.3-gke.1600` or later, you should use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

As you may know already, it can take some minutes to deploy the load balancer and for it to become ready to accept connections. Once it's ready, get the public IP address of your Load Balancer, go to your DNS server (Google or third party) and point your domain or subdomain (i.e. argocd.example.com) to that IP address.

You can get that IP address by describing the Ingress object like this:

```
kubectl -n argocd describe ingresses argocd | grep Address
```

Once the DNS change is propagated, you're ready to use Argo CD with your Google Cloud Load Balancer.

## Authenticating through multiple layers of authenticating reverse proxies

Argo CD endpoints may be protected by one or more layers of authenticating reverse proxies. In that case, you can provide additional headers through the `argocd` CLI `--header` parameter to authenticate through those layers.

```shell
$ argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
$ argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```
## ArgoCD Server and UI Root Path (v1.5.3)

Argo CD server and UI can be configured to be available under a non-root path (e.g. `/argo-cd`).

To do this, add the `--rootpath` flag into the `argocd-server` deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --rootpath
        - /argo-cd
```

NOTE: The flag `--rootpath` changes both the API Server and UI base URL.

Example nginx.conf:

```
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd/ {
            proxy_pass https://localhost:8080/argo-cd/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```

The flag `--grpc-web-root-path` is used to provide a non-root path (e.g. /argo-cd):

```shell
$ argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

## UI Base Path

If the Argo CD UI is available under a non-root path (e.g. `/argo-cd` instead of `/`) then the UI path should be configured in the API server.
To configure the UI path, add the `--basehref` flag into the `argocd-server` deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --basehref
        - /argo-cd
```

NOTE: The flag `--basehref` only changes the UI base URL. The API server will keep using the `/` path, so you need to add a URL rewrite rule to the proxy config.

Example nginx.conf with URL rewrite:

```
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd {
            rewrite /argo-cd/(.*) /$1 break;
            proxy_pass https://localhost:8080;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```
# Resource Health

## Overview

Argo CD provides built-in health assessment for several standard Kubernetes types, which is then surfaced to the overall Application health status as a whole.

The following checks are made for specific types of Kubernetes resources:

### Deployment, ReplicaSet, StatefulSet, DaemonSet

* Observed generation is equal to desired generation.
* Number of **updated** replicas equals the number of desired replicas.

### Service

* If the service is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.

### Ingress

* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.

### Job

* If job `.spec.suspended` is set to 'true', then the job and app health will be marked as suspended.

### PersistentVolumeClaim

* The `status.phase` is `Bound`.

### Argocd App

The health assessment of `argoproj.io/Application` CRD has been removed in argocd 1.8 (see [#3781](https://github.com/argoproj/argo-cd/issues/3781) for more information). You might need to restore it if you are using the app-of-apps pattern and orchestrating synchronization using sync waves. Add the following resource customization in the `argocd-cm` ConfigMap:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations.health.argoproj.io_Application: |
    hs = {}
    hs.status = "Progressing"
    hs.message = ""
    if obj.status ~= nil then
      if obj.status.health ~= nil then
        hs.status = obj.status.health.status
        if obj.status.health.message ~= nil then
          hs.message = obj.status.health.message
        end
      end
    end
    return hs
```

## Custom Health Checks

Argo CD supports custom health checks written in [Lua](https://www.lua.org/). This is useful if you:

* Are affected by known issues where your `Ingress` or `StatefulSet` resources are stuck in `Progressing` state because of a bug in your resource controller.
* Have a custom resource for which Argo CD does not have a built-in health check.

There are two ways to configure a custom health check. The next two sections describe those ways.

### Way 1. Define a Custom Health Check in `argocd-cm` ConfigMap

Custom health checks can be defined in the

```yaml
resource.customizations.health.<group>_<kind>: |
```

field of `argocd-cm`. If you are using argocd-operator, this is overridden by [the argocd-operator resourceCustomizations](https://argocd-operator.readthedocs.io/en/latest/reference/argocd/#resource-customizations).

The following example demonstrates a health check for `cert-manager.io/Certificate`.

```yaml
data:
  resource.customizations.health.cert-manager.io_Certificate: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.conditions ~= nil then
        for i, condition in ipairs(obj.status.conditions) do
          if condition.type == "Ready" and condition.status == "False" then
            hs.status = "Degraded"
            hs.message = condition.message
            return hs
          end
          if condition.type == "Ready" and condition.status == "True" then
            hs.status = "Healthy"
            hs.message = condition.message
            return hs
          end
        end
      end
    end
    hs.status = "Progressing"
    hs.message = "Waiting for certificate"
    return hs
```
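To confirm that a customization like this is actually being applied, one option (a sketch; `guestbook` is just an example application name) is to list the per-resource health that Argo CD reports for an app that contains the custom resource:

```shell
# Sketch: show the health status Argo CD computed for each resource of an app.
argocd app get guestbook -o json | \
  jq '.status.resources[] | {kind, name, health: .health.status}'
```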
```

```yaml
# If a key _begins_ with a wildcard, please ensure that the GVK key is quoted.
resource.customizations: |
  "*.aws.crossplane.io/*":
    health.lua: |
      ...
```

!!!important
    Please note that wildcards are only supported when using the `resource.customizations` key; the `resource.customizations.health.<group>_<kind>` style keys do not work, since wildcards (`*`) are not supported in Kubernetes configmap keys.

The `obj` is a global variable which contains the resource. The script must return an object with a `status` and an optional `message` field.
The custom health check might return one of the following health statuses:

* `Healthy` - the resource is healthy
* `Progressing` - the resource is not healthy yet but still making progress and might be healthy soon
* `Degraded` - the resource is degraded
* `Suspended` - the resource is suspended and waiting for some external event to resume (e.g. suspended CronJob or paused Deployment)

By default, health typically returns a `Progressing` status.

NOTE: As a security measure, access to the standard Lua libraries will be disabled by default. Admins can control access by setting `resource.customizations.useOpenLibs.<group>_<kind>`. In the following example, standard libraries are enabled for the health check of `cert-manager.io/Certificate`.

```yaml
data:
  resource.customizations.useOpenLibs.cert-manager.io_Certificate: true
  resource.customizations.health.cert-manager.io_Certificate: |
    # Lua standard libraries are enabled for this script
```

### Way 2. Contribute a Custom Health Check

A health check can be bundled into Argo CD. Custom health check scripts are located in the `resource_customizations` directory of [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd). This must have the following directory structure:

```
argo-cd
|-- resource_customizations
|    |-- your.crd.group.io               # CRD group
|    |    |-- MyKind                     # Resource kind
|    |    |    |-- health.lua            # Health check
|    |    |    |-- health_test.yaml      # Test inputs and expected results
|    |    |    +-- testdata              # Directory with test resource YAML definitions
```

Each health check must have tests defined in the `health_test.yaml` file. The `health_test.yaml` is a YAML file with the following structure:

```yaml
tests:
- healthStatus:
    status: ExpectedStatus
    message: Expected message
  inputPath: testdata/test-resource-definition.yaml
```

To test the implemented custom health checks, run `go test -v ./util/lua/`.

[PR #1139](https://github.com/argoproj/argo-cd/pull/1139) is an example of a custom health check for the Cert Manager CRDs.

Please note that bundled health checks with wildcards are not supported.

## Overriding Go-Based Health Checks

Health checks for some resources were [hardcoded as Go code](https://github.com/argoproj/gitops-engine/tree/master/pkg/health) because Lua support was introduced later. Also, the logic of the health checks for some resources was too complex, so it was easier to implement in Go.

It is possible to override health checks for built-in resources. Argo CD will prefer the configured health check over the Go-based built-in check (see the sketch after the list below).

The following resources have Go-based health checks:

* PersistentVolumeClaim
* Pod
* Service
* apiregistration.k8s.io/APIService
* apps/DaemonSet
* apps/Deployment
* apps/ReplicaSet
* apps/StatefulSet
* argoproj.io/Workflow
* autoscaling/HorizontalPodAutoscaler
* batch/Job
* extensions/Ingress
* networking.k8s.io/Ingress
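As an illustration, the following `argocd-cm` snippet overrides the Go-based check for `batch/Job` with a simplified Lua check. This is a minimal sketch, not the built-in logic; the status semantics shown here are an assumption and should be adapted to your needs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Replaces the built-in Go health assessment for Jobs (group "batch", kind "Job").
  resource.customizations.health.batch_Job: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for Job to complete"
    if obj.status ~= nil then
      if obj.status.failed ~= nil and obj.status.failed > 0 then
        hs.status = "Degraded"
        hs.message = "Job has failed pods"
      elseif obj.status.succeeded ~= nil and obj.status.succeeded > 0 then
        hs.status = "Healthy"
        hs.message = "Job completed"
      end
    end
    return hs
```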
## Health Checks

An Argo CD App's health is inferred from the health of its immediate child resources (the resources represented in source control).

The App health will be the worst health of its immediate child resources. The priority of most to least healthy statuses is: `Healthy`, `Suspended`, `Progressing`, `Missing`, `Degraded`, `Unknown`. So, for example, if an App has a `Missing` resource and a `Degraded` resource, the App's health will be `Degraded`.

But the health of a resource is not inherited from child resources - it is calculated using only information about the resource itself. A resource's status field may or may not contain information about the health of a child resource, and the resource's health check may or may not take that information into account.

The lack of inheritance is by design. A resource's health can't be inferred from its children, because the health of a child resource may not be relevant to the health of the parent resource. For example, a Deployment's health is not necessarily affected by the health of its Pods.

```
App (healthy)
└── Deployment (healthy)
    └── ReplicaSet (healthy)
        └── Pod (healthy)
    └── ReplicaSet (unhealthy)
        └── Pod (unhealthy)
```

If you want the health of a child resource to affect the health of its parent, you need to configure the parent's health check to take the child's health into account. Since only the parent resource's state is available to the health check, the parent resource's controller needs to make the child resource's health available in the parent resource's status field (a sketch of such a check appears at the end of this page).

```
App (healthy)
└── CustomResource (healthy) <- This resource's health check needs to be fixed to mark the App as unhealthy
    └── CustomChildResource (unhealthy)
```

## Ignoring Child Resource Health Check in Applications

To ignore the health check of an immediate child resource within an Application, set the annotation `argocd.argoproj.io/ignore-healthcheck` to `true`. For example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    argocd.argoproj.io/ignore-healthcheck: "true"
```

By doing this, the health status of the Deployment will not affect the health of its parent Application.
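Building on the `CustomResource` example above: if the parent's controller copies child health into a status field, a health check along these lines can surface it. This is a minimal sketch; the group/kind `example.com/CustomResource` and the `status.childHealth` field are hypothetical names used only for illustration:

```yaml
data:
  resource.customizations.health.example.com_CustomResource: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for child resources"
    if obj.status ~= nil and obj.status.childHealth ~= nil then
      -- status.childHealth is a hypothetical field that the parent's
      -- controller must populate from its child resources
      if obj.status.childHealth == "Healthy" then
        hs.status = "Healthy"
        hs.message = "Child resources are healthy"
      else
        hs.status = "Degraded"
        hs.message = "Child health is " .. obj.status.childHealth
      end
    end
    return hs
```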
# Security Argo CD has undergone rigorous internal security reviews and penetration testing to satisfy [PCI compliance](https://www.pcisecuritystandards.org) requirements. The following are some security topics and implementation details of Argo CD. ## Authentication Authentication to Argo CD API server is performed exclusively using [JSON Web Tokens](https://jwt.io) (JWTs). Username/password bearer tokens are not used for authentication. The JWT is obtained/managed in one of the following ways: 1. For the local `admin` user, a username/password is exchanged for a JWT using the `/api/v1/session` endpoint. This token is signed & issued by the Argo CD API server itself and it expires after 24 hours (this token used not to expire, see [CVE-2021-26921](https://github.com/argoproj/argo-cd/security/advisories/GHSA-9h6w-j7w4-jr52)). When the admin password is updated, all existing admin JWT tokens are immediately revoked. The password is stored as a bcrypt hash in the [`argocd-secret`](https://github.com/argoproj/argo-cd/blob/master/manifests/base/config/argocd-secret.yaml) Secret. 2. For Single Sign-On users, the user completes an OAuth2 login flow to the configured OIDC identity provider (either delegated through the bundled Dex provider, or directly to a self-managed OIDC provider). This JWT is signed & issued by the IDP, and expiration and revocation is handled by the provider. Dex tokens expire after 24 hours. 3. Automation tokens are generated for a project using the `/api/v1/projects/{project}/roles/{role}/token` endpoint, and are signed & issued by Argo CD. These tokens are limited in scope and privilege, and can only be used to manage application resources in the project which it belongs to. Project JWTs have a configurable expiration and can be immediately revoked by deleting the JWT reference ID from the project role. ## Authorization Authorization is performed by iterating the list of group membership in a user's JWT groups claims, and comparing each group against the roles/rules in the [RBAC](./rbac.md) policy. Any matched rule permits access to the API request. ## TLS All network communication is performed over TLS including service-to-service communication between the three components (argocd-server, argocd-repo-server, argocd-application-controller). The Argo CD API server can enforce the use of TLS 1.2 using the flag: `--tlsminversion 1.2`. Communication with Redis is performed over plain HTTP by default. TLS can be setup with command line arguments. ## Git & Helm Repositories Git and helm repositories are managed by a stand-alone service, called the repo-server. The repo-server does not carry any Kubernetes privileges and does not store credentials to any services (including git). The repo-server is responsible for cloning repositories which have been permitted and trusted by Argo CD operators, and generating Kubernetes manifests at a given path in the repository. For performance and bandwidth efficiency, the repo-server maintains local clones of these repositories so that subsequent commits to the repository are efficiently downloaded. There are security considerations when configuring git repositories that Argo CD is permitted to deploy from. In short, gaining unauthorized write access to a git repository trusted by Argo CD will have serious security implications outlined below. 
### Unauthorized Deployments

Since Argo CD deploys the Kubernetes resources defined in git, an attacker with access to a trusted git repo would be able to affect the Kubernetes resources which are deployed. For example, an attacker could update the deployment manifest to deploy malicious container images to the environment, or delete resources in git, causing them to be pruned in the live environment.

### Tool command invocation

In addition to raw YAML, Argo CD natively supports two popular Kubernetes config management tools, helm and kustomize. When rendering manifests, Argo CD executes these config management tools (i.e. `helm template`, `kustomize build`) to generate the manifests. It is possible that an attacker with write access to a trusted git repository may construct malicious helm charts or kustomizations that attempt to read files out-of-tree. This includes adjacent git repos, as well as files on the repo-server itself. Whether or not this is a risk to your organization depends on whether the contents of the git repos are sensitive in nature. By default, the repo-server itself does not contain sensitive information, but might be configured with Config Management Plugins which do (e.g. decryption keys). If such plugins are used, extreme care must be taken to ensure the repository contents can be trusted at all times.

Optionally the built-in config management tools might be individually disabled. If you know that your users will not need a certain config management tool, it's advisable to disable that tool. See [Tool Detection](../user-guide/tool_detection.md) for more information.

### Remote bases and helm chart dependencies

Argo CD's repository allow-list only restricts the initial repository which is cloned. However, both kustomize and helm contain features to reference and follow *additional* repositories (e.g. kustomize remote bases, helm chart dependencies), which might not be in the repository allow-list. Argo CD operators must understand that users with write access to trusted git repositories could reference other remote git repositories containing Kubernetes resources not easily searchable or auditable in the configured git repositories.

## Sensitive Information

### Secrets

Argo CD never returns sensitive data from its API, and redacts all sensitive data in API payloads and logs. This includes:

* cluster credentials
* Git credentials
* OAuth2 client secrets
* Kubernetes Secret values

### External Cluster Credentials

To manage external clusters, Argo CD stores the credentials of the external cluster as a Kubernetes Secret in the argocd namespace. This secret contains the K8s API bearer token associated with the `argocd-manager` ServiceAccount created during `argocd cluster add`, along with connection options to that API server (TLS configuration/certs, AWS role-arn, etc...). The information is used to reconstruct a REST config and kubeconfig to the cluster used by Argo CD services.

To rotate the bearer token used by Argo CD, the token can be deleted (e.g. using kubectl), which causes Kubernetes to generate a new secret with a new bearer token. The new token can be re-inputted to Argo CD by re-running `argocd cluster add`. Run the following commands against the *_managed_* cluster:

```bash
# run using a kubeconfig for the externally managed cluster
kubectl delete secret argocd-manager-token-XXXXXX -n kube-system
argocd cluster add CONTEXTNAME
```

!!!
note Kubernetes 1.24 [stopped automatically creating tokens for Service Accounts](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#no-really-you-must-read-this-before-you-upgrade). [Starting in Argo CD 2.4](https://github.com/argoproj/argo-cd/pull/9546), `argocd cluster add` creates a ServiceAccount _and_ a non-expiring Service Account token Secret when adding 1.24 clusters. In the future, Argo CD will [add support for the Kubernetes TokenRequest API](https://github.com/argoproj/argo-cd/issues/9610) to avoid using long-lived tokens. To revoke Argo CD's access to a managed cluster, delete the RBAC artifacts against the *_managed_* cluster, and remove the cluster entry from Argo CD: ```bash # run using a kubeconfig for the externally managed cluster kubectl delete sa argocd-manager -n kube-system kubectl delete clusterrole argocd-manager-role kubectl delete clusterrolebinding argocd-manager-role-binding argocd cluster rm https://your-kubernetes-cluster-addr ``` <!-- markdownlint-disable MD027 --> > NOTE: for AWS EKS clusters, the [get-token](https://docs.aws.amazon.com/cli/latest/reference/eks/get-token.html) command is used to authenticate to the external cluster, which uses IAM roles in lieu of locally stored tokens, so token rotation is not needed, and revocation is handled through IAM. <!-- markdownlint-enable MD027 --> ## Cluster RBAC By default, Argo CD uses a [clusteradmin level role](https://github.com/argoproj/argo-cd/blob/master/manifests/base/application-controller-roles/argocd-application-controller-role.yaml) in order to: 1. watch & operate on cluster state 2. deploy resources to the cluster Although Argo CD requires cluster-wide **_read_** privileges to resources in the managed cluster to function properly, it does not necessarily need full **_write_** privileges to the cluster. The ClusterRole used by argocd-server and argocd-application-controller can be modified such that write privileges are limited to only the namespaces and resources that you wish Argo CD to manage. To fine-tune privileges of externally managed clusters, edit the ClusterRole of the `argocd-manager-role` ```bash # run using a kubeconfig for the externally managed cluster kubectl edit clusterrole argocd-manager-role ``` To fine-tune privileges which Argo CD has against its own cluster (i.e. `https://kubernetes.default.svc`), edit the following cluster roles where Argo CD is running in: ```bash # run using a kubeconfig to the cluster Argo CD is running in kubectl edit clusterrole argocd-server kubectl edit clusterrole argocd-application-controller ``` !!! tip If you want to deny Argo CD access to a kind of resource then add it as an [excluded resource](declarative-setup.md#resource-exclusion). ## Auditing As a GitOps deployment tool, the Git commit history provides a natural audit log of what changes were made to application configuration, when they were made, and by whom. However, this audit log only applies to what happened in Git and does not necessarily correlate one-to-one with events that happen in a cluster. For example, User A could have made multiple commits to application manifests, but User B could have just only synced those changes to the cluster sometime later. To complement the Git revision history, Argo CD emits Kubernetes Events of application activity, indicating the responsible actor when applicable. 
For example:

```bash
$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT   NAME                         KIND          SUBOBJECT   TYPE     REASON               SOURCE                          MESSAGE
1m          1m           1       guestbook.157f7c5edd33aeac   Application               Normal   ResourceCreated      argocd-server                   admin created application
1m          1m           1       guestbook.157f7c5f0f747acf   Application               Normal   ResourceUpdated      argocd-application-controller   Updated sync status:  -> OutOfSync
1m          1m           1       guestbook.157f7c5f0fbebbff   Application               Normal   ResourceUpdated      argocd-application-controller   Updated health status:  -> Missing
1m          1m           1       guestbook.157f7c6069e14f4d   Application               Normal   OperationStarted     argocd-server                   admin initiated sync to HEAD (8a1cb4a02d3538e54907c827352f66f20c3d7b0d)
1m          1m           1       guestbook.157f7c60a55a81a8   Application               Normal   OperationCompleted   argocd-application-controller   Sync operation to 8a1cb4a02d3538e54907c827352f66f20c3d7b0d succeeded
1m          1m           1       guestbook.157f7c60af1ccae2   Application               Normal   ResourceUpdated      argocd-application-controller   Updated sync status: OutOfSync -> Synced
1m          1m           1       guestbook.157f7c60af5bc4f0   Application               Normal   ResourceUpdated      argocd-application-controller   Updated health status: Missing -> Progressing
1m          1m           1       guestbook.157f7c651990e848   Application               Normal   ResourceUpdated      argocd-application-controller   Updated health status: Progressing -> Healthy
```

These events can then be persisted for longer periods of time using other tools such as [Event Exporter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/event-exporter) or [Event Router](https://github.com/heptiolabs/eventrouter).

## WebHook Payloads

Payloads from webhook events are considered untrusted. Argo CD only examines the payload to infer the involved applications of the webhook event (e.g. which repo was modified), then refreshes the related application for reconciliation. This refresh is the same refresh which occurs regularly at three minute intervals, just fast-tracked by the webhook event.

## Logging

### Security field

Security-related logs are tagged with a `security` field to make them easier to find, analyze, and report on.

| Level | Friendly Level | Description                                                                                         | Example                                      |
|-------|----------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------|
| 1     | Low            | Unexceptional, non-malicious events                                                                   | Successful access                            |
| 2     | Medium         | Could indicate malicious events, but has a high likelihood of being user/system error                 | Access denied                                |
| 3     | High           | Likely malicious events but one that had no side effects or was blocked                               | Out of bounds symlinks in repo               |
| 4     | Critical       | Any malicious or exploitable event that had a side effect                                             | Secrets being left behind on the filesystem  |
| 5     | Emergency      | Unmistakably malicious events that should NEVER occur accidentally and indicates an active attack     | Brute forcing of accounts                    |

Where applicable, a `CWE` field is also added specifying the [Common Weakness Enumeration](https://cwe.mitre.org/index.html) number.

!!! warning
    Please be aware that not all security logs are comprehensively tagged yet and these examples are not necessarily implemented.

### API Logs

Argo CD logs payloads of most API requests except requests that are considered sensitive, such as `/cluster.ClusterService/Create`, `/session.SessionService/Create`, etc. The full list of methods can be found in [server/server.go](https://github.com/argoproj/argo-cd/blob/abba8dddce8cd897ba23320e3715690f465b4a95/server/server.go#L516).
Argo CD does not log IP addresses of clients requesting API endpoints, since the API server is typically behind a proxy. Instead, it is recommended to configure IP addresses logging in the proxy server that sits in front of the API server. ## ApplicationSets Argo CD's ApplicationSets feature has its own [security considerations](./applicationset/Security.md). Be aware of those issues before using ApplicationSets. ## Limiting Directory App Memory Usage > >2.2.10, 2.1.16, >2.3.5 Directory-type Applications (those whose source is raw JSON or YAML files) can consume significant [repo-server](architecture.md#repository-server) memory, depending on the size and structure of the YAML files. To avoid over-using memory in the repo-server (potentially causing a crash and denial of service), set the `reposerver.max.combined.directory.manifests.size` config option in [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml). This option limits the combined size of all JSON or YAML files in an individual app. Note that the in-memory representation of a manifest may be as much as 300x the size of the manifest on disk. Also note that the limit is per Application. If manifests are generated for multiple applications at once, memory usage will be higher. **Example:** Suppose your repo-server has a 10G memory limit, and you have ten Applications which use raw JSON or YAML files. To calculate the max safe combined file size per Application, divide 10G by 300 * 10 Apps (300 being the worst-case memory growth factor for the manifests). ``` 10G / 300 * 10 = 3M ``` So a reasonably safe configuration for this setup would be a 3M limit per app. ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-cmd-params-cm data: reposerver.max.combined.directory.manifests.size: '3M' ``` The 300x ratio assumes a maliciously-crafted manifest file. If you only want to protect against accidental excessive memory use, it is probably safe to use a smaller ratio. Keep in mind that if a malicious user can create additional Applications, they can increase the total memory usage. Grant [App creation privileges](rbac.md) carefully.
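Since the limit is per Application, restricting who can create Applications is part of the mitigation. The following is a hedged example of scoping Application creation to a single project via the `argocd-rbac-cm` policy; the role, group, and project names are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # Allow the placeholder "dev-team" role to create and sync Applications
    # only inside the "team-project" AppProject.
    p, role:dev-team, applications, create, team-project/*, allow
    p, role:dev-team, applications, sync, team-project/*, allow
    # Map an SSO group (placeholder name) to that role.
    g, my-org:dev-team, role:dev-team
```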
# Git Generator The Git generator contains two subtypes: the Git directory generator, and Git file generator. !!! warning Git generators are often used to make it easier for (non-admin) developers to create Applications. If the `project` field in your ApplicationSet is templated, developers may be able to create Applications under Projects with excessive permissions. For ApplicationSets with a templated `project` field, [the source of truth _must_ be controlled by admins](./Security.md#templated-project-field) - in the case of git generators, PRs must require admin approval. - Git generator does not support Signature Verification For ApplicationSets with a templated `project` field. ## Git Generator: Directories The Git directory generator, one of two subtypes of the Git generator, generates parameters using the directory structure of a specified Git repository. Suppose you have a Git repository with the following directory structure: ``` ├── argo-workflows │ ├── kustomization.yaml │ └── namespace-install.yaml └── prometheus-operator ├── Chart.yaml ├── README.md ├── requirements.yaml └── values.yaml ``` This repository contains two directories, one for each of the workloads to deploy: - an Argo Workflow controller kustomization YAML file - a Prometheus Operator Helm chart We can deploy both workloads, using this example: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD directories: - path: applicationset/examples/git-generator-directory/cluster-addons/* template: metadata: name: '' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' syncPolicy: syncOptions: - CreateNamespace=true ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory).*) The generator parameters are: - ``: The directory paths within the Git repository that match the `path` wildcard. - ``: The directory paths within the Git repository that match the `path` wildcard, split into array elements (`n` - array index) - ``: For any directory path within the Git repository that matches the `path` wildcard, the right-most path name is extracted (e.g. `/directory/directory2` would produce `directory2`). - ``: This field is the same as `path.basename` with unsupported characters replaced with `-` (e.g. a `path` of `/directory/directory_2`, and `path.basename` of `directory_2` would produce `directory-2` here). **Note**: The right-most path name always becomes ``. For example, for `- path: /one/two/three/four`, `` is `four`. **Note**: If the `pathParamPrefix` option is specified, all `path`-related parameter names above will be prefixed with the specified value and a dot separator. E.g., if `pathParamPrefix` is `myRepo`, then the generated parameter name would be `.myRepo.path` instead of `.path`. Using this option is necessary in a Matrix generator where both child generators are Git generators (to avoid conflicts when merging the child generators’ items). Whenever a new Helm chart/Kustomize YAML/Application/plain subdirectory is added to the Git repository, the ApplicationSet controller will detect this change and automatically deploy the resulting manifests within new `Application` resources. 
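With `goTemplate: true`, these parameters are consumed in the template as Go template expressions such as `.path.path`, `.path.basename`, and `.path.basenameNormalized`. A minimal sketch of a template section using them follows; treat the exact expressions and values as illustrative:

```yaml
template:
  metadata:
    name: '{{.path.basename}}'          # e.g. argo-workflows, prometheus-operator
  spec:
    project: "my-project"
    source:
      repoURL: https://github.com/argoproj/argo-cd.git
      targetRevision: HEAD
      path: '{{.path.path}}'            # the full matched directory path
    destination:
      server: https://kubernetes.default.svc
      namespace: '{{.path.basename}}'
```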
As with other generators, clusters *must* already be defined within Argo CD, in order to generate Applications for them. ### Exclude directories The Git directory generator will automatically exclude directories that begin with `.` (such as `.git`). The Git directory generator also supports an `exclude` option in order to exclude directories in the repository from being scanned by the ApplicationSet controller: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD directories: - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/* - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/exclude-helm-guestbook exclude: true template: metadata: name: '' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/excludes).*) This example excludes the `exclude-helm-guestbook` directory from the list of directories scanned for this `ApplicationSet` resource. !!! note "Exclude rules have higher priority than include rules" If a directory matches at least one `exclude` pattern, it will be excluded. Or, said another way, *exclude rules take precedence over include rules.* As a corollary, which directories are included/excluded is not affected by the order of `path`s in the `directories` field list (because, as above, exclude rules always take precedence over include rules). For example, with these directories: ``` . └── d ├── e ├── f └── g ``` Say you want to include `/d/e`, but exclude `/d/f` and `/d/g`. This will *not* work: ```yaml - path: /d/e exclude: false - path: /d/* exclude: true ``` Why? Because the exclude `/d/*` exclude rule will take precedence over the `/d/e` include rule. When the `/d/e` path in the Git repository is processed by the ApplicationSet controller, the controller detects that at least one exclude rule is matched, and thus that directory should not be scanned. You would instead need to do: ```yaml - path: /d/* - path: /d/f exclude: true - path: /d/g exclude: true ``` Or, a shorter way (using [path.Match](https://golang.org/pkg/path/#Match) syntax) would be: ```yaml - path: /d/* - path: /d/[fg] exclude: true ``` ### Root Of Git Repo The Git directory generator can be configured to deploy from the root of the git repository by providing `'*'` as the `path`. To exclude directories, you only need to put the name/[path.Match](https://golang.org/pkg/path/#Match) of the directory you do not want to deploy. 
```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/example/example-repo.git revision: HEAD directories: - path: '*' - path: donotdeploy exclude: true template: metadata: name: '' spec: project: "my-project" source: repoURL: https://github.com/example/example-repo.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' ``` ### Pass additional key-value pairs via `values` field You may pass additional, arbitrary string key-value pairs via the `values` field of the git directory generator. Values added via the `values` field are added as `values.(field)`. In this example, a `cluster` parameter value is passed. It is interpolated from the `branch` and `path` variable, to then be used to determine the destination namespace. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/example/example-repo.git revision: HEAD directories: - path: '*' values: cluster: '-' template: metadata: name: '' spec: project: "my-project" source: repoURL: https://github.com/example/example-repo.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' ``` !!! note The `values.` prefix is always prepended to values provided via `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it. In `values` we can also interpolate all fields set by the git directory generator as mentioned above. ## Git Generator: Files The Git file generator is the second subtype of the Git generator. The Git file generator generates parameters using the contents of JSON/YAML files found within a specified repository. Suppose you have a Git repository with the following directory structure: ``` ├── apps │ └── guestbook │ ├── guestbook-ui-deployment.yaml │ ├── guestbook-ui-svc.yaml │ └── kustomization.yaml ├── cluster-config │ └── engineering │ ├── dev │ │ └── config.json │ └── prod │ └── config.json └── git-generator-files.yaml ``` The directories are: - `guestbook` contains the Kubernetes resources for a simple guestbook application - `cluster-config` contains JSON/YAML files describing the individual engineering clusters: one for `dev` and one for `prod`. - `git-generator-files.yaml` is the example `ApplicationSet` resource that deploys `guestbook` to the specified clusters. The `config.json` files contain information describing the cluster (along with extra sample data): ```json { "aws_account": "123456", "asset_id": "11223344", "cluster": { "owner": "[email protected]", "name": "engineering-dev", "address": "https://1.2.3.4" } } ``` Git commits containing changes to the `config.json` files are automatically discovered by the Git generator, and the contents of those files are parsed and converted into template parameters. 
Here are the parameters generated for the above JSON: ```text aws_account: 123456 asset_id: 11223344 cluster.owner: [email protected] cluster.name: engineering-dev cluster.address: https://1.2.3.4 ``` And the generated parameters for all discovered `config.json` files will be substituted into ApplicationSet template: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD files: - path: "applicationset/examples/git-generator-files-discovery/cluster-config/**/config.json" template: metadata: name: '-guestbook' spec: project: default source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: "applicationset/examples/git-generator-files-discovery/apps/guestbook" destination: server: '' namespace: guestbook ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-files-discovery).*) Any `config.json` files found under the `cluster-config` directory will be parameterized based on the `path` wildcard pattern specified. Within each file JSON fields are flattened into key/value pairs, with this ApplicationSet example using the `cluster.address` and `cluster.name` parameters in the template. As with other generators, clusters *must* already be defined within Argo CD, in order to generate Applications for them. In addition to the flattened key/value pairs from the configuration file, the following generator parameters are provided: - ``: The path to the directory containing matching configuration file within the Git repository. Example: `/clusters/clusterA`, if the config file was `/clusters/clusterA/config.json` - ``: The path to the matching configuration file within the Git repository, split into array elements (`n` - array index). Example: `index .path.segments 0: clusters`, `index .path.segments 1: clusterA` - ``: Basename of the path to the directory containing the configuration file (e.g. `clusterA`, with the above example.) - ``: This field is the same as `.path.basename` with unsupported characters replaced with `-` (e.g. a `path` of `/directory/directory_2`, and `.path.basename` of `directory_2` would produce `directory-2` here). - ``: The matched filename. e.g., `config.json` in the above example. - ``: The matched filename with unsupported characters replaced with `-`. **Note**: The right-most *directory* name always becomes ``. For example, from `- path: /one/two/three/four/config.json`, `` will be `four`. The filename can always be accessed using ``. **Note**: If the `pathParamPrefix` option is specified, all `path`-related parameter names above will be prefixed with the specified value and a dot separator. E.g., if `pathParamPrefix` is `myRepo`, then the generated parameter name would be `myRepo.path` instead of `path`. Using this option is necessary in a Matrix generator where both child generators are Git generators (to avoid conflicts when merging the child generators’ items). **Note**: The default behavior of the Git file generator is very greedy. Please see [Git File Generator Globbing](./Generators-Git-File-Globbing.md) for more information. ### Pass additional key-value pairs via `values` field You may pass additional, arbitrary string key-value pairs via the `values` field of the git files generator. Values added via the `values` field are added as `values.(field)`. 
In this example, a `base_dir` parameter value is passed. It is interpolated from `path` segments, to then be used to determine the source path. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD files: - path: "applicationset/examples/git-generator-files-discovery/cluster-config/**/config.json" values: base_dir: "//" template: metadata: name: '-guestbook' spec: project: default source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: "/apps/guestbook" destination: server: '' namespace: guestbook ``` !!! note The `values.` prefix is always prepended to values provided via `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it. In `values` we can also interpolate all fields set by the git files generator as mentioned above. ## Webhook Configuration When using a Git generator, ApplicationSet polls Git repositories every three minutes to detect changes. To eliminate this delay from polling, the ApplicationSet webhook server can be configured to receive webhook events. ApplicationSet supports Git webhook notifications from GitHub and GitLab. The following explains how to configure a Git webhook for GitHub, but the same process should be applicable to other providers. !!! note The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](../webhook.md). ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet specific Ingress resource needs to be created to expose this service to the webhook source. ### 1. Create the webhook in the Git provider In your Git provider, navigate to the settings page where webhooks can be configured. The payload URL configured in the Git provider should use the `/api/webhook` endpoint of your ApplicationSet instance (e.g. `https://applicationset.example.com/api/webhook`). If you wish to use a shared secret, input an arbitrary value in the secret. This value will be used when configuring the webhook in the next step. ![Add Webhook](../../assets/applicationset/webhook-config.png "Add Webhook") !!! note When creating the webhook in GitHub, the "Content type" needs to be set to "application/json". The default value "application/x-www-form-urlencoded" is not supported by the library used to handle the hooks ### 2. Configure ApplicationSet with the webhook secret (Optional) Configuring a webhook shared secret is optional, since ApplicationSet will still refresh applications generated by Git generators, even with unauthenticated webhook events. This is safe to do since the contents of webhook payloads are considered untrusted, and will only result in a refresh of the application (a process which already occurs at three-minute intervals). If ApplicationSet is publicly accessible, then configuring a webhook secret is recommended to prevent a DDoS attack. In the `argocd-secret` Kubernetes secret, include the Git provider's webhook secret configured in step 1. Edit the Argo CD Kubernetes secret: ```bash kubectl edit secret argocd-secret -n argocd ``` TIP: for ease of entering secrets, Kubernetes supports inputting secrets in the `stringData` field, which saves you the trouble of base64 encoding the values and copying it to the `data` field. 
Simply copy the shared webhook secret created in step 1, to the corresponding GitHub/GitLab/BitBucket key under the `stringData` field: ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-secret namespace: argocd type: Opaque data: ... stringData: # github webhook secret webhook.github.secret: shhhh! it's a github secret # gitlab webhook secret webhook.gitlab.secret: shhhh! it's a gitlab secret ``` After saving, please restart the ApplicationSet pod for the changes to take effect. ## Repository credentials for ApplicationSets If your [ApplicationSets](index.md) uses a repository where you need credentials to be able to access it, you need to add the repository as a "non project scoped" repository. - When doing that through the UI, set this to a **blank** value in the dropdown menu. - When doing that through the CLI, make sure you **DO NOT** supply the parameter `--project` ([argocd repo add docs](../../user-guide/commands/argocd_repo_add.md)) - When doing that declaratively, make sure you **DO NOT** have `project:` defined under `stringData:` ([complete yaml example](../argocd-repositories-yaml.md))
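
For the declarative option, a minimal sketch of such a non-project-scoped repository Secret might look like the following (the Secret name and credential values are placeholders; the key point is that no `project` key is set under `stringData`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-repo-creds   # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/example/example-repo.git
  username: example-username   # placeholder
  password: example-password   # placeholder
  # No `project:` key here - adding one would make the repository project-scoped
  # and unusable by ApplicationSets.
```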
# Templates The template fields of the ApplicationSet `spec` are used to generate Argo CD `Application` resources. ApplicationSet is using [fasttemplate](https://github.com/valyala/fasttemplate) but will be soon deprecated in favor of Go Template. ## Template fields An Argo CD Application is created by combining the parameters from the generator with fields of the template (via ``), and from that a concrete `Application` resource is produced and applied to the cluster. Here is the template subfield from a Cluster generator: ```yaml # (...) template: metadata: name: '-guestbook' spec: source: repoURL: https://github.com/infra-team/cluster-deployments.git targetRevision: HEAD path: guestbook/ destination: server: '' namespace: guestbook ``` For details on all available parameters (like `.name`, `.nameNormalized`, etc.) please refer to the [Cluster Generator docs](./Generators-Cluster.md). The template subfields correspond directly to [the spec of an Argo CD `Application` resource](../../declarative-setup/#applications): - `project` refers to the [Argo CD Project](../../user-guide/projects.md) in use (`default` may be used here to utilize the default Argo CD Project) - `source` defines from which Git repository to extract the desired Application manifests - **repoURL**: URL of the repository (eg `https://github.com/argoproj/argocd-example-apps.git`) - **targetRevision**: Revision (tag/branch/commit) of the repository (eg `HEAD`) - **path**: Path within the repository where Kubernetes manifests (and/or Helm, Kustomize, Jsonnet resources) are located - `destination`: Defines which Kubernetes cluster/namespace to deploy to - **name**: Name of the cluster (within Argo CD) to deploy to - **server**: API Server URL for the cluster (Example: `https://kubernetes.default.svc`) - **namespace**: Target namespace in which to deploy the manifests from `source` (Example: `my-app-namespace`) Note: - Referenced clusters must already be defined in Argo CD, for the ApplicationSet controller to use them - Only **one** of `name` or `server` may be specified: if both are specified, an error is returned. - Signature Verification does not work with the templated `project` field when using git generator. The `metadata` field of template may also be used to set an Application `name`, or to add labels or annotations to the Application. While the ApplicationSet spec provides a basic form of templating, it is not intended to replace the full-fledged configuration management capabilities of tools such as Kustomize, Helm, or Jsonnet. ### Deploying ApplicationSet resources as part of a Helm chart ApplicationSet uses the same templating notation as Helm (`{{}}`). If the ApplicationSet templates aren't written as Helm string literals, Helm will throw an error like `function "cluster" not defined`. To avoid that error, write the template as a Helm string literal. For example: ```yaml metadata: name: '`}}-guestbook' ``` This _only_ applies if you use Helm to deploy your ApplicationSet resources. ## Generator templates In addition to specifying a template within the `.spec.template` of the `ApplicationSet` resource, templates may also be specified within generators. This is useful for overriding the values of the `spec`-level template. The generator's `template` field takes precedence over the `spec`'s template fields: - If both templates contain the same field, the generator's field value will be used. - If only one of those templates' fields has a value, that value will be used. 
Generator templates can thus be thought of as patches against the outer `spec`-level template fields. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook spec: generators: - list: elements: - cluster: engineering-dev url: https://kubernetes.default.svc template: metadata: {} spec: project: "default" source: targetRevision: HEAD repoURL: https://github.com/argoproj/argo-cd.git # New path value is generated here: path: 'applicationset/examples/template-override/-override' destination: {} template: metadata: name: '-guestbook' spec: project: "default" source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD # This 'default' value is not used: it is replaced by the generator's template path, above path: applicationset/examples/template-override/default destination: server: '' namespace: guestbook ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/template-override).*) In this example, the ApplicationSet controller will generate an `Application` resource using the `path` generated by the List generator, rather than the `path` value defined in `.spec.template`. ## Template Patch Templating is only available on string type. However, some use cases may require applying templating on other types. Example: - Conditionally set the automated sync policy. - Conditionally switch prune boolean to `true`. - Add multiple helm value files from a list. The `templatePatch` feature enables advanced templating, with support for `json` and `yaml`. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook spec: goTemplate: true generators: - list: elements: - cluster: engineering-dev url: https://kubernetes.default.svc autoSync: true prune: true valueFiles: - values.large.yaml - values.debug.yaml template: metadata: name: '-deployment' spec: project: "default" source: repoURL: https://github.com/infra-team/cluster-deployments.git targetRevision: HEAD path: guestbook/ destination: server: '' namespace: guestbook templatePatch: | spec: source: helm: valueFiles: - syncPolicy: automated: prune: ``` !!! important The `templatePatch` can apply arbitrary changes to the template. If parameters include untrustworthy user input, it may be possible to inject malicious changes into the template. It is recommended to use `templatePatch` only with trusted input or to carefully escape the input before using it in the template. Piping input to `toJson` should help prevent, for example, a user from successfully injecting a string with newlines. The `spec.project` field is not supported in `templatePatch`. If you need to change the project, you can use the `spec.project` field in the `template` field. !!! important When writing a `templatePatch`, you're crafting a patch. So, if the patch includes an empty `spec: # nothing in here`, it will effectively clear out existing fields. See [#17040](https://github.com/argoproj/argo-cd/issues/17040) for an example of this behavior.
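
To make the patch above concrete, here is a minimal sketch of how the Go template expressions for such a `templatePatch` could be written, assuming the `autoSync`, `prune`, and `valueFiles` parameters produced by the List generator in the example; treat it as an illustration rather than the exact upstream example:

```yaml
# Sits under the ApplicationSet `spec`, alongside `template`.
templatePatch: |
  spec:
    source:
      helm:
        valueFiles:
        {{- range $valueFile := .valueFiles }}
          - {{ $valueFile }}
        {{- end }}
  {{- if .autoSync }}
    syncPolicy:
      automated:
        prune: {{ .prune }}
  {{- end }}
```

With the `engineering-dev` element above (where `autoSync` and `prune` are `true`), this renders a `syncPolicy.automated.prune: true` block plus the two Helm value files; elements without `autoSync` simply receive no `syncPolicy` patch.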
# Plugin Generator Plugins allow you to provide your own generator. - You can write in any language - Simple: a plugin just responds to RPC HTTP requests. - You can use it in a sidecar, or standalone deployment. - You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software release. - You can combine it with Matrix or Merge. To start working on your own plugin, you can generate a new repository based on the example [applicationset-hello-plugin](https://github.com/argoproj-labs/applicationset-hello-plugin). ## Simple example Using a generator plugin without combining it with Matrix or Merge. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myplugin spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - plugin: # Specify the configMap where the plugin configuration is located. configMapRef: name: my-plugin # You can pass arbitrary parameters to the plugin. `input.parameters` is a map, but values may be any type. # These parameters will also be available on the generator's output under the `generator.input.parameters` key. input: parameters: key1: "value1" key2: "value2" list: ["list", "of", "values"] boolean: true map: key1: "value1" key2: "value2" key3: "value3" # You can also attach arbitrary values to the generator's output under the `values` key. These values will be # available in templates under the `values` key. values: value1: something # When using a Plugin generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes. requeueAfterSeconds: 30 template: metadata: name: myplugin annotations: example.from.input.parameters: "" example.from.values: "" # The plugin determines what else it produces. example.from.plugin.output: "" ``` - `configMapRef.name`: A `ConfigMap` name containing the plugin configuration to use for RPC call. - `input.parameters`: Input parameters included in the RPC call to the plugin. (Optional) !!! note The concept of the plugin should not undermine the spirit of GitOps by externalizing data outside of Git. The goal is to be complementary in specific contexts. For example, when using one of the PullRequest generators, it's impossible to retrieve parameters related to the CI (only the commit hash is available), which limits the possibilities. By using a plugin, it's possible to retrieve the necessary parameters from a separate data source and use them to extend the functionality of the generator. ### Add a ConfigMap to configure the access of the plugin ```yaml apiVersion: v1 kind: ConfigMap metadata: name: my-plugin namespace: argocd data: token: "$plugin.myplugin.token" # Alternatively $<some_K8S_secret>:plugin.myplugin.token baseUrl: "http://myplugin.plugin-ns.svc.cluster.local." requestTimeout: "60" ``` - `token`: Pre-shared token used to authenticate HTTP request (points to the right key you created in the `argocd-secret` Secret) - `baseUrl`: BaseUrl of the k8s service exposing your plugin in the cluster. - `requestTimeout`: Timeout of the request to the plugin in seconds (default: 30) ### Store credentials ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-secret namespace: argocd labels: app.kubernetes.io/name: argocd-secret app.kubernetes.io/part-of: argocd type: Opaque data: # ... # The secret value must be base64 encoded **once**. # this value corresponds to: `printf "strong-password" | base64`. plugin.myplugin.token: "c3Ryb25nLXBhc3N3b3Jk" # ... 
``` #### Alternative If you want to store sensitive data in **another** Kubernetes `Secret`, instead of `argocd-secret`, ArgoCD knows how to check the keys under `data` in your Kubernetes `Secret` for a corresponding key whenever a value in a configmap starts with `$`, then your Kubernetes `Secret` name and `:` (colon) followed by the key name. Syntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>` > NOTE: Secret must have label `app.kubernetes.io/part-of: argocd` ##### Example `another-secret`: ```yaml apiVersion: v1 kind: Secret metadata: name: another-secret namespace: argocd labels: app.kubernetes.io/part-of: argocd type: Opaque data: # ... # Store client secret like below. # The secret value must be base64 encoded **once**. # This value corresponds to: `printf "strong-password" | base64`. plugin.myplugin.token: "c3Ryb25nLXBhc3N3b3Jk" ``` ### HTTP server #### A Simple Python Plugin You can deploy it either as a sidecar or as a standalone deployment (the latter is recommended). In the example, the token is stored in a file at this location : `/var/run/argo/token` ``` strong-password ``` ```python import json from http.server import BaseHTTPRequestHandler, HTTPServer with open("/var/run/argo/token") as f: plugin_token = f.read().strip() class Plugin(BaseHTTPRequestHandler): def args(self): return json.loads(self.rfile.read(int(self.headers.get('Content-Length')))) def reply(self, reply): self.send_response(200) self.end_headers() self.wfile.write(json.dumps(reply).encode("UTF-8")) def forbidden(self): self.send_response(403) self.end_headers() def unsupported(self): self.send_response(404) self.end_headers() def do_POST(self): if self.headers.get("Authorization") != "Bearer " + plugin_token: self.forbidden() if self.path == '/api/v1/getparams.execute': args = self.args() self.reply({ "output": { "parameters": [ { "key1": "val1", "key2": "val2" }, { "key1": "val2", "key2": "val2" } ] } }) else: self.unsupported() if __name__ == '__main__': httpd = HTTPServer(('', 4355), Plugin) httpd.serve_forever() ``` Execute getparams with curl : ``` curl http://localhost:4355/api/v1/getparams.execute -H "Authorization: Bearer strong-password" -d \ '{ "applicationSetName": "fake-appset", "input": { "parameters": { "param1": "value1" } } }' ``` Some things to note here: - You only need to implement the calls `/api/v1/getparams.execute` - You should check that the `Authorization` header contains the same bearer value as `/var/run/argo/token`. Return 403 if not - The input parameters are included in the request body and can be accessed using the `input.parameters` variable. - The output must always be a list of object maps nested under the `output.parameters` key in a map. - `generator.input.parameters` and `values` are reserved keys. If present in the plugin output, these keys will be overwritten by the contents of the `input.parameters` and `values` keys in the ApplicationSet's plugin generator spec. ## With matrix and pull request example In the following example, the plugin implementation is returning a set of image digests for the given branch. The returned list contains only one item corresponding to the latest built image for the branch. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: fb-matrix spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - matrix: generators: - pullRequest: github: ... 
requeueAfterSeconds: 30 - plugin: configMapRef: name: cm-plugin input: parameters: branch: "" # provided by generator pull request values: branchLink: "https://git.example.com/org/repo/tree/" template: metadata: name: "fb-matrix-" spec: source: repoURL: "https://github.com/myorg/myrepo.git" targetRevision: "HEAD" path: charts/my-chart helm: releaseName: fb-matrix- valueFiles: - values.yaml values: | front: image: myregistry:@ # digestFront is generated by the plugin back: image: myregistry:@ # digestBack is generated by the plugin project: default syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true destination: server: https://kubernetes.default.svc namespace: "" info: - name: Link to the Application's branch value: "" ``` To illustrate : - The generator pullRequest would return, for example, 2 branches: `feature-branch-1` and `feature-branch-2`. - The generator plugin would then perform 2 requests as follows : ```shell curl http://localhost:4355/api/v1/getparams.execute -H "Authorization: Bearer strong-password" -d \ '{ "applicationSetName": "fb-matrix", "input": { "parameters": { "branch": "feature-branch-1" } } }' ``` Then, ```shell curl http://localhost:4355/api/v1/getparams.execute -H "Authorization: Bearer strong-password" -d \ '{ "applicationSetName": "fb-matrix", "input": { "parameters": { "branch": "feature-branch-2" } } }' ``` For each call, it would return a unique result such as : ```json { "output": { "parameters": [ { "digestFront": "sha256:a3f18c17771cc1051b790b453a0217b585723b37f14b413ad7c5b12d4534d411", "digestBack": "sha256:4411417d614d5b1b479933b7420079671facd434fd42db196dc1f4cc55ba13ce" } ] } } ``` Then, ```json { "output": { "parameters": [ { "digestFront": "sha256:7c20b927946805124f67a0cb8848a8fb1344d16b4d0425d63aaa3f2427c20497", "digestBack": "sha256:e55e7e40700bbab9e542aba56c593cb87d680cefdfba3dd2ab9cfcb27ec384c2" } ] } } ``` In this example, by combining the two, you ensure that one or more pull requests are available and that the generated tag has been properly generated. This wouldn't have been possible with just a commit hash because a hash alone does not certify the success of the build.
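
To tie the earlier pieces together (the ConfigMap's `baseUrl` of `http://myplugin.plugin-ns.svc.cluster.local.`, the token file at `/var/run/argo/token`, and the HTTP server listening on port 4355), a standalone deployment of the Python plugin could be wired up roughly as in the sketch below. The image reference and the `myplugin-token` Secret are hypothetical; adjust them to however you build the plugin and store its token:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myplugin
  namespace: plugin-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myplugin
  template:
    metadata:
      labels:
        app: myplugin
    spec:
      containers:
        - name: myplugin
          image: registry.example.com/myplugin:latest   # hypothetical image
          ports:
            - containerPort: 4355
          volumeMounts:
            - name: token
              mountPath: /var/run/argo   # the plugin reads /var/run/argo/token
              readOnly: true
      volumes:
        - name: token
          secret:
            secretName: myplugin-token   # hypothetical Secret with a `token` key
            items:
              - key: token
                path: token
---
apiVersion: v1
kind: Service
metadata:
  name: myplugin        # matches the service name used in the ConfigMap's baseUrl
  namespace: plugin-ns
spec:
  selector:
    app: myplugin
  ports:
    - port: 80          # baseUrl specifies no port, so expose the plugin on port 80
      targetPort: 4355
```

The value stored in the `myplugin-token` Secret must be the same pre-shared token referenced by `plugin.myplugin.token` in `argocd-secret`, since the ApplicationSet controller sends it as the `Authorization` bearer token and the plugin compares it against the file contents.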
# ApplicationSet in any namespace

!!! warning "Beta Feature (Since v2.8.0)"
    This feature is in the [Beta](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#beta) stage. It is generally considered stable, but there may be unhandled edge cases.

!!! warning
    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.

## Introduction

As of version 2.8, Argo CD supports managing `ApplicationSet` resources in namespaces other than the control plane's namespace (which is usually `argocd`), but this feature has to be explicitly enabled and configured appropriately.

Argo CD administrators can define a certain set of namespaces in which `ApplicationSet` resources may be created, updated and reconciled.

As Applications generated by an ApplicationSet are created in the same namespace as the ApplicationSet itself, this works in combination with [App in any namespace](../app-any-namespace.md).

## Prerequisites

### App in any namespace configured

This feature needs the [App in any namespace](../app-any-namespace.md) feature activated. The list of namespaces must be the same.

### Cluster-scoped Argo CD installation

This feature can only be enabled and used when your Argo CD ApplicationSet controller is installed as a cluster-wide instance, so it has permissions to list and manipulate resources on a cluster scope. It will *not* work with an Argo CD installed in namespace-scoped mode.

### SCM Providers secrets consideration

By allowing ApplicationSet in any namespace, you must be aware that any secrets can be exfiltrated using the `scmProvider` or `pullRequest` generators.

This means that if the ApplicationSet controller is configured to allow the namespace `appNs`, and some user is allowed to create an ApplicationSet in the `appNs` namespace, then that user can install a malicious Pod into the `appNs` namespace as described below and read out the content of the secret indirectly, thus exfiltrating the secret value.

Here is an example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
  namespace: appNs
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - scmProvider:
        gitea:
          # The Gitea owner to scan.
          owner: myorg
          # With this malicious setting, the user can send all requests to a Pod that will log incoming requests, including headers with tokens
          api: http://my-service.appNs.svc.cluster.local
          # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.
          allBranches: true
          # By changing this token reference, the user can exfiltrate any secrets
          tokenRef:
            secretName: gitea-token
            key: token
  template:
```

In order to prevent the scenario above, the administrator must restrict the URLs of the allowed SCM providers (example: `https://git.mydomain.com/,https://gitlab.mydomain.com/`) by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ALLOWED_SCM_PROVIDERS` (the `applicationsetcontroller.allowed.scm.providers` key in `argocd-cmd-params-cm`). If another URL is used, it will be rejected by the ApplicationSet controller. For example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  applicationsetcontroller.allowed.scm.providers: https://git.mydomain.com/,https://gitlab.mydomain.com/
```

!!! note
    The URL used in the `api` field of the `ApplicationSet` must match the URL declared by the administrator, including the protocol.

!!! warning
    The allow-list only applies to SCM providers for which the user may configure a custom `api`. Where an SCM or PR generator does not accept a custom API URL, the provider is implicitly allowed.

If you do not intend to allow users to use the SCM or PR generators, you can disable them entirely by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_SCM_PROVIDERS` (the `applicationsetcontroller.enable.scm.providers` key in `argocd-cmd-params-cm`) to `false`.

#### `tokenRef` Restrictions

It is **highly recommended** to enable SCM Providers secrets restrictions to avoid any secrets exfiltration. This recommendation applies even when AppSets-in-any-namespace is disabled, but is especially important when it is enabled, since non-Argo-admins may attempt to reference out-of-bounds secrets in the `argocd` namespace from an AppSet `tokenRef`.

When this mode is enabled, the referenced secret must have the label `argocd.argoproj.io/secret-type` with the value `scm-creds`.

To enable this mode, set the `ARGOCD_APPLICATIONSET_CONTROLLER_TOKENREF_STRICT_MODE` environment variable to `true` on the `argocd-applicationset-controller` deployment. You can do this by adding the following to your `argocd-cmd-params-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  applicationsetcontroller.tokenref.strict.mode: "true"
```

### Overview

In order for an ApplicationSet to be managed and reconciled outside the Argo CD control plane's namespace, two prerequisites must be met:

1. The namespace list from which `argocd-applicationset-controller` can source `ApplicationSets` must be explicitly set, using the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES` or, alternatively, the `--applicationset-namespaces` parameter.
2. The enabled namespaces must be entirely covered by [App in any namespace](../app-any-namespace.md); otherwise, Applications generated outside the allowed Application namespaces won't be reconciled.

The namespace list can also be configured via the `applicationsetcontroller.namespaces` key in the `argocd-cmd-params-cm` ConfigMap.

`ApplicationSets` in different namespaces can then be created and managed just like any other `ApplicationSet` in the `argocd` namespace, either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, the REST API, etc.).

### Reconfigure Argo CD to allow certain namespaces

#### Change workload startup parameters

In order to enable this feature, the Argo CD administrator must reconfigure the `argocd-server` and `argocd-applicationset-controller` workloads to add the `--applicationset-namespaces` parameter to the containers' startup command.

### Safely template project

As [App in any namespace](../app-any-namespace.md) is a prerequisite, it is possible to safely template the project. Let's take an example with two teams and an infra project:

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: infra-project
  namespace: argocd
spec:
  destinations:
    - namespace: '*'
```

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: team-one-project
  namespace: argocd
spec:
  sourceNamespaces:
    - team-one-cd
```

```yaml
kind: AppProject
apiVersion: argoproj.io/v1alpha1
metadata:
  name: team-two-project
  namespace: argocd
spec:
  sourceNamespaces:
    - team-two-cd
```

Creating the following `ApplicationSet` generates two Applications, `infra-escalation` and `team-two-escalation`.
Both will be rejected: because they are generated outside the `argocd` namespace, the `sourceNamespaces` of their projects are checked, and neither `infra-project` nor `team-two-project` lists `team-one-cd` in its `sourceNamespaces`.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-one-product-one
  namespace: team-one-cd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - list:
        elements:
          - name: infra
            project: infra-project
          - name: team-two
            project: team-two-project
  template:
    metadata:
      name: '{{.name}}-escalation'
    spec:
      project: "{{.project}}"
```

### ApplicationSet names

For the CLI, applicationSets are now referred to and displayed in the format `<namespace>/<name>`.

For backwards compatibility, if the namespace of the ApplicationSet is the control plane's namespace (i.e. `argocd`), the `<namespace>` can be omitted from the applicationset name when referring to it. For example, the applicationset names `argocd/someappset` and `someappset` are semantically the same and refer to the same applicationset in the CLI and the UI.

### Applicationsets RBAC

The RBAC syntax for ApplicationSet objects has been changed from `<project>/<applicationset>` to `<project>/<namespace>/<applicationset>` to accommodate the need to restrict access based on the source namespace of the ApplicationSet to be managed.

For backwards compatibility, ApplicationSets in the `argocd` namespace can still be referred to as `<project>/<applicationset>` in the RBAC policy rules.

Wildcards do not make any distinction between project and applicationset namespaces yet. For example, the following RBAC rule would match any applicationset belonging to project `foo`, regardless of the namespace it is created in:

```
p, somerole, applicationsets, get, foo/*, allow
```

If you want to restrict access to be granted only to `ApplicationSets` with project `foo` within namespace `bar`, the rule would need to be adapted as follows:

```
p, somerole, applicationsets, get, foo/bar/*, allow
```

## Managing applicationSets in other namespaces

### Using the CLI

You can use all existing Argo CD CLI commands for managing applicationSets in other namespaces, exactly as you would use the CLI to manage applicationSets in the control plane's namespace.

For example, to retrieve the `ApplicationSet` named `foo` in the namespace `bar`, you can use the following CLI command:

```shell
argocd appset get bar/foo
```

Likewise, to manage this applicationSet, keep referring to it as `bar/foo`:

```bash
# Delete the applicationSet
argocd appset delete bar/foo
```

There is no change to the create command, as it uses a file; you just need to set the namespace in the `metadata.namespace` field. As stated previously, for applicationSets in the Argo CD control plane's namespace, you can omit the namespace from the applicationset name.

### Using the REST API

If you are using the REST API, the namespace of an `ApplicationSet` cannot be specified as part of the applicationset name; instead, it is specified using the optional `appsetNamespace` query parameter.

For example, to work with the `ApplicationSet` resource named `foo` in the namespace `bar`, the request would look as follows:

```bash
GET /api/v1/applicationsets/foo?appsetNamespace=bar
```

For other operations such as `POST` and `PUT`, the `appsetNamespace` parameter must be part of the request's payload.

For `ApplicationSet` resources in the control plane namespace, this parameter can be omitted.

## Clusters secrets consideration

By allowing ApplicationSet in any namespace you must be aware that clusters can be discovered and used.
For example, the following will discover all clusters:

```yaml
spec:
  generators:
  - clusters: {} # Automatically use all clusters defined within Argo CD
```

If you don't want to allow users to discover all clusters with ApplicationSets from other namespaces, you may consider deploying Argo CD in namespace-scoped mode or using OPA rules.
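As a recap of the configuration described in the Overview section, here is a minimal sketch of an `argocd-cmd-params-cm` that enables the two example namespaces used above (`team-one-cd` and `team-two-cd`) for both features. The `application.namespaces` key belongs to the App in any namespace feature; the namespace values are illustrative and must be adapted to your environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  # Namespaces enabled for App in any namespace (must cover the list below).
  application.namespaces: team-one-cd,team-two-cd
  # Namespaces the ApplicationSet controller may source ApplicationSets from.
  applicationsetcontroller.namespaces: team-one-cd,team-two-cd
```

Keeping both lists identical, as required by the prerequisites above, avoids generating Applications that will never be reconciled.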
# List Generator The List generator generates parameters based on an arbitrary list of key/value pairs (as long as the values are string values). In this example, we're targeting a local cluster named `engineering-dev`: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - list: elements: - cluster: engineering-dev url: https://kubernetes.default.svc # - cluster: engineering-prod # url: https://kubernetes.default.svc template: metadata: name: '-guestbook' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: applicationset/examples/list-generator/guestbook/ destination: server: '' namespace: guestbook ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/list-generator).*) In this example, the List generator passes the `url` and `cluster` fields as parameters into the template. If we wanted to add a second environment, we could uncomment the second element and the ApplicationSet controller would automatically target it with the defined application. With the ApplicationSet v0.1.0 release, one could *only* specify `url` and `cluster` element fields (plus arbitrary `values`). As of ApplicationSet v0.2.0, any key/value `element` pair is supported (which is also fully backwards compatible with the v0.1.0 form): ```yaml spec: generators: - list: elements: # v0.1.0 form - requires cluster/url keys: - cluster: engineering-dev url: https://kubernetes.default.svc values: additional: value # v0.2.0+ form - does not require cluster/URL keys # (but they are still supported). - staging: "true" gitRepo: https://kubernetes.default.svc # (...) ``` !!! note "Clusters must be predefined in Argo CD" These clusters *must* already be defined within Argo CD, in order to generate applications for these values. The ApplicationSet controller does not create clusters within Argo CD (for instance, it does not have the credentials to do so). ## Dynamically generated elements The List generator can also dynamically generate its elements based on a yaml/json it gets from a previous generator like git by combining the two with a matrix generator. In this example we are using the matrix generator with a git followed by a list generator and pass the content of a file in git as input to the `elementsYaml` field of the list generator: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: elements-yaml namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - matrix: generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD files: - path: applicationset/examples/list-generator/list-elementsYaml-example.yaml - list: elementsYaml: "" template: metadata: name: '' spec: project: default syncPolicy: automated: selfHeal: true syncOptions: - CreateNamespace=true sources: - chart: '' repoURL: '' targetRevision: '' helm: releaseName: '' destination: server: https://kubernetes.default.svc namespace: '' ``` where `list-elementsYaml-example.yaml` content is: ```yaml key: components: - name: component1 chart: podinfo version: "6.3.2" releaseName: component1 repoUrl: "https://stefanprodan.github.io/podinfo" namespace: component1 - name: component2 chart: podinfo version: "6.3.3" releaseName: component2 repoUrl: "ghcr.io/stefanprodan/charts" namespace: component2 ```
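To make the parameter flow concrete, here is a rough sketch of the Application that the first example on this page would produce for the `engineering-dev` element, assuming the template substitutes the `cluster` parameter into the Application name and the `url` parameter into the destination server:

```yaml
# Hypothetical rendered output for the element {cluster: engineering-dev, url: https://kubernetes.default.svc}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: engineering-dev-guestbook   # assumes the template prefixes the name with the `cluster` parameter
  namespace: argocd
spec:
  project: my-project
  source:
    repoURL: https://github.com/argoproj/argo-cd.git
    targetRevision: HEAD
    path: applicationset/examples/list-generator/guestbook/
  destination:
    server: https://kubernetes.default.svc   # the `url` parameter
    namespace: guestbook
```

Uncommenting the `engineering-prod` element would simply produce a second Application of the same shape for that element.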
# Cluster Generator In Argo CD, managed clusters [are stored within Secrets](../../declarative-setup/#clusters) in the Argo CD namespace. The ApplicationSet controller uses those same Secrets to generate parameters to identify and target available clusters. For each cluster registered with Argo CD, the Cluster generator produces parameters based on the list of items found within the cluster secret. It automatically provides the following parameter values to the Application template for each cluster: - `name` - `nameNormalized` *('name' but normalized to contain only lowercase alphanumeric characters, '-' or '.')* - `server` - `project` *(the Secret's 'project' field, if present; otherwise, it defaults to '')* - `metadata.labels.<key>` *(for each label in the Secret)* - `metadata.annotations.<key>` *(for each annotation in the Secret)* !!! note Use the `nameNormalized` parameter if your cluster name contains characters (such as underscores) that are not valid for Kubernetes resource names. This prevents rendering invalid Kubernetes resources with names like `my_cluster-app1`, and instead would convert them to `my-cluster-app1`. Within [Argo CD cluster Secrets](../../declarative-setup/#clusters) are data fields describing the cluster: ```yaml kind: Secret data: # Within Kubernetes these fields are actually encoded in Base64; they are decoded here for convenience. # (They are likewise decoded when passed as parameters by the Cluster generator) config: "{'tlsClientConfig':{'insecure':false}}" name: "in-cluster2" server: "https://kubernetes.default.svc" metadata: labels: argocd.argoproj.io/secret-type: cluster # (...) ``` The Cluster generator will automatically identify clusters defined with Argo CD, and extract the cluster data as parameters: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: {} # Automatically use all clusters defined within Argo CD template: metadata: name: '-guestbook' # 'name' field of the Secret spec: project: "my-project" source: repoURL: https://github.com/argoproj/argocd-example-apps/ targetRevision: HEAD path: guestbook destination: server: '' # 'server' field of the secret namespace: guestbook ``` (*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/cluster).*) In this example, the cluster secret's `name` and `server` fields are used to populate the `Application` resource `name` and `server` (which are then used to target that same cluster). ### Label selector A label selector may be used to narrow the scope of targeted clusters to only those matching a specific label: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: selector: matchLabels: staging: "true" # The cluster generator also supports matchExpressions. #matchExpressions: # - key: staging # operator: In # values: # - "true" template: # (...) ``` This would match an Argo CD cluster secret containing: ```yaml apiVersion: v1 kind: Secret data: # (... fields as above ...) metadata: labels: argocd.argoproj.io/secret-type: cluster staging: "true" # (...) 
``` The cluster selector also supports set-based requirements, as used by [several core Kubernetes resources](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements). ### Deploying to the local cluster In Argo CD, the 'local cluster' is the cluster upon which Argo CD (and the ApplicationSet controller) is installed. This is to distinguish it from 'remote clusters', which are those that are added to Argo CD [declaratively](../../declarative-setup/#clusters) or via the [Argo CD CLI](../../getting_started.md/#5-register-a-cluster-to-deploy-apps-to-optional). The cluster generator will automatically target both local and non-local clusters, for every cluster that matches the cluster selector. If you wish to target only remote clusters with your Applications (e.g. you want to exclude the local cluster), then use a cluster selector with labels, for example: ```yaml spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster # The cluster generator also supports matchExpressions. #matchExpressions: # - key: staging # operator: In # values: # - "true" ``` This selector will not match the default local cluster, since the default local cluster does not have a Secret (and thus does not have the `argocd.argoproj.io/secret-type` label on that secret). Any cluster selector that selects on that label will automatically exclude the default local cluster. However, if you do wish to target both local and non-local clusters, while also using label matching, you can create a secret for the local cluster within the Argo CD web UI: 1. Within the Argo CD web UI, select *Settings*, then *Clusters*. 2. Select your local cluster, usually named `in-cluster`. 3. Click the *Edit* button, and change the *NAME* of the cluster to another value, for example `in-cluster-local`. Any other value here is fine. 4. Leave all other fields unchanged. 5. Click *Save*. These steps might seem counterintuitive, but the act of changing one of the default values for the local cluster causes the Argo CD Web UI to create a new secret for this cluster. In the Argo CD namespace, you should now see a Secret resource named `cluster-(cluster suffix)` with label `argocd.argoproj.io/secret-type": "cluster"`. You may also create a local [cluster secret declaratively](../../declarative-setup/#clusters), or with the CLI using `argocd cluster add "(context name)" --in-cluster`, rather than through the Web UI. ### Fetch clusters based on their K8s version There is also the possibility to fetch clusters based upon their Kubernetes version. To do this, the label `argocd.argoproj.io/auto-label-cluster-info` needs to be set to `true` on the cluster secret. Once that has been set, the controller will dynamically label the cluster secret with the Kubernetes version it is running on. To retrieve that value, you need to use the `argocd.argoproj.io/kubernetes-version`, as the example below demonstrates: ```yaml spec: goTemplate: true generators: - clusters: selector: matchLabels: argocd.argoproj.io/kubernetes-version: 1.28 # matchExpressions are also supported. #matchExpressions: # - key: argocd.argoproj.io/kubernetes-version # operator: In # values: # - "1.27" # - "1.28" ``` ### Pass additional key-value pairs via `values` field You may pass additional, arbitrary string key-value pairs via the `values` field of the cluster generator. 
Values added via the `values` field are added as `values.(field)` In this example, a `revision` parameter value is passed, based on matching labels on the cluster secret: ```yaml spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: selector: matchLabels: type: 'staging' # A key-value map for arbitrary parameters values: revision: HEAD # staging clusters use HEAD branch - clusters: selector: matchLabels: type: 'production' values: # production uses a different revision value, for 'stable' branch revision: stable template: metadata: name: '-guestbook' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argocd-example-apps/ # The cluster values field for each generator will be substituted here: targetRevision: '' path: guestbook destination: server: '' namespace: guestbook ``` In this example the `revision` value from the `generators.clusters` fields is passed into the template as `values.revision`, containing either `HEAD` or `stable` (based on which generator generated the set of parameters). !!! note The `values.` prefix is always prepended to values provided via `generators.clusters.values` field. Ensure you include this prefix in the parameter name within the `template` when using it. In `values` we can also interpolate the following parameter values (i.e. the same values as presented in the beginning of this page) - `name` - `nameNormalized` *('name' but normalized to contain only lowercase alphanumeric characters, '-' or '.')* - `server` - `metadata.labels.<key>` *(for each label in the Secret)* - `metadata.annotations.<key>` *(for each annotation in the Secret)* Extending the example above, we could do something like this: ```yaml spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: selector: matchLabels: type: 'staging' # A key-value map for arbitrary parameters values: # If `my-custom-annotation` is in your cluster secret, `revision` will be substituted with it. revision: '' clusterName: '' - clusters: selector: matchLabels: type: 'production' values: # production uses a different revision value, for 'stable' branch revision: stable clusterName: '' template: metadata: name: '-guestbook' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argocd-example-apps/ # The cluster values field for each generator will be substituted here: targetRevision: '' path: guestbook destination: # In this case this is equivalent to just using server: '' namespace: guestbook ``` ### Gather cluster information as a flat list You may sometimes need to gather your clusters information, without having to deploy one application per cluster found. For that, you can use the option `flatList` in the cluster generator. 
Here is an example of cluster generator using this option: ```yaml spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - clusters: selector: matchLabels: type: 'staging' flatList: true template: metadata: name: 'flat-list-guestbook' spec: project: "my-project" source: repoURL: https://github.com/argoproj/argocd-example-apps/ # The cluster values field for each generator will be substituted here: targetRevision: 'HEAD' path: helm-guestbook helm: values: | clusters: - name: destination: # In this case this is equivalent to just using server: 'my-cluster' namespace: guestbook ``` Given that you have two cluster secrets matching with names cluster1 and cluster2, this would generate the **single** following Application: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: flat-list-guestbook namespace: guestbook spec: project: "my-project" source: repoURL: https://github.com/argoproj/argocd-example-apps/ targetRevision: 'HEAD' path: helm-guestbook helm: values: | clusters: - name: cluster1 - name: cluster2 ``` In case you are using several cluster generators, each with the flatList option, one Application would be generated by cluster generator, as we can't simply merge values and templates that would potentially differ in each generator
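For reference, a declarative cluster Secret that the `type: 'staging'` selectors shown earlier on this page would match might look like the following sketch. The Secret name, cluster name, and server URL are placeholders, and the `config` is the minimal form from the example at the top of this page:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: staging-cluster            # placeholder Secret name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # required for Argo CD to treat this Secret as a cluster
    type: staging                             # custom label matched by `matchLabels: {type: 'staging'}`
type: Opaque
stringData:
  name: staging-cluster
  server: https://staging.example.com:6443    # placeholder API server URL
  config: |
    {"tlsClientConfig":{"insecure":false}}
```

Once this Secret exists, both the plain Cluster generator and the label-selector variants above will pick it up and expose `name`, `server`, and the Secret's labels and annotations as template parameters.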
# Pull Request Generator The Pull Request generator uses the API of an SCMaaS provider (GitHub, Gitea, or Bitbucket Server) to automatically discover open pull requests within a repository. This fits well with the style of building a test environment when you create a pull request. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: # When using a Pull Request generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes. requeueAfterSeconds: 1800 # See below for provider specific options. github: # ... ``` !!! note Know the security implications of PR generators in ApplicationSets. [Only admins may create ApplicationSets](./Security.md#only-admins-may-createupdatedelete-applicationsets) to avoid leaking Secrets, and [only admins may create PRs](./Security.md#templated-project-field) if the `project` field of an ApplicationSet with a PR generator is templated, to avoid granting management of out-of-bounds resources. ## GitHub Specify the repository from which to fetch the GitHub Pull requests. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: github: # The GitHub organization or user. owner: myorg # The Github repository repo: myrepository # For GitHub Enterprise (optional) api: https://git.example.com/ # Reference to a Secret containing an access token. (optional) tokenRef: secretName: github-token key: token # (optional) use a GitHub App to access the API instead of a PAT. appSecretName: github-app-repo-creds # Labels is used to filter the PRs that you want to target. (optional) labels: - preview requeueAfterSeconds: 1800 template: # ... ``` * `owner`: Required name of the GitHub organization or user. * `repo`: Required name of the GitHub repository. * `api`: If using GitHub Enterprise, the URL to access it. (Optional) * `tokenRef`: A `Secret` name and key containing the GitHub access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories. (Optional) * `labels`: Filter the PRs to those containing **all** of the labels listed. (Optional) * `appSecretName`: A `Secret` name containing a GitHub App secret in [repo-creds format][repo-creds]. [repo-creds]: ../declarative-setup.md#repository-credentials ## GitLab Specify the project from which to fetch the GitLab merge requests. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: gitlab: # The GitLab project ID. project: "12341234" # For self-hosted GitLab (optional) api: https://git.example.com/ # Reference to a Secret containing an access token. (optional) tokenRef: secretName: gitlab-token key: token # Labels is used to filter the MRs that you want to target. (optional) labels: - preview # MR state is used to filter MRs only with a certain state. (optional) pullRequestState: opened # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates. insecure: false # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional) caRef: configMapName: argocd-tls-certs-cm key: gitlab-ca requeueAfterSeconds: 1800 template: # ... 
``` * `project`: Required project ID of the GitLab project. * `api`: If using self-hosted GitLab, the URL to access it. (Optional) * `tokenRef`: A `Secret` name and key containing the GitLab access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories. (Optional) * `labels`: Labels is used to filter the MRs that you want to target. (Optional) * `pullRequestState`: PullRequestState is an additional MRs filter to get only those with a certain state. Default: "" (all states) * `insecure`: By default (false) - Skip checking the validity of the SCM's certificate - useful for self-signed TLS certificates. * `caRef`: Optional `ConfigMap` name and key containing the GitLab certificates to trust - useful for self-signed TLS certificates. Possibly reference the ArgoCD CM holding the trusted certs. As a preferable alternative to setting `insecure` to true, you can configure self-signed TLS certificates for Gitlab by [mounting self-signed certificate to the applicationset controller](./Generators-SCM-Provider.md#self-signed-tls-certificates). ## Gitea Specify the repository from which to fetch the Gitea Pull requests. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: gitea: # The Gitea organization or user. owner: myorg # The Gitea repository repo: myrepository # The Gitea url to use api: https://gitea.mydomain.com/ # Reference to a Secret containing an access token. (optional) tokenRef: secretName: gitea-token key: token # many gitea deployments use TLS, but many are self-hosted and self-signed certificates insecure: true requeueAfterSeconds: 1800 template: # ... ``` * `owner`: Required name of the Gitea organization or user. * `repo`: Required name of the Gitea repository. * `api`: The url of the Gitea instance. * `tokenRef`: A `Secret` name and key containing the Gitea access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories. (Optional) * `insecure`: `Allow for self-signed certificates, primarily for testing.` ## Bitbucket Server Fetch pull requests from a repo hosted on a Bitbucket Server (not the same as Bitbucket Cloud). ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: bitbucketServer: project: myproject repo: myrepository # URL of the Bitbucket Server. Required. api: https://mycompany.bitbucket.org # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken # authentication is required to access private repositories basicAuth: # The username to authenticate with username: myuser # Reference to a Secret containing the password or personal access token. passwordRef: secretName: mypassword key: password # Credentials for Bearer Token (App Token) authentication. Either basicAuth or bearerToken # authentication is required to access private repositories bearerToken: # Reference to a Secret containing the bearer token. tokenRef: secretName: repotoken key: token # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates. insecure: true # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. 
        # (optional)
        caRef:
          configMapName: argocd-tls-certs-cm
          key: bitbucket-ca
      # Labels are not supported by Bitbucket Server, so filtering by label is not possible.
      # Filter PRs using the source branch name. (optional)
      filters:
      - branchMatch: ".*-argocd"
  template:
  # ...
```

* `project`: Required name of the Bitbucket project
* `repo`: Required name of the Bitbucket repository.
* `api`: Required URL to access the Bitbucket REST API. For the example above, an API request would be made to `https://mycompany.bitbucket.org/rest/api/1.0/projects/myproject/repos/myrepository/pull-requests`
* `branchMatch`: Optional regexp filter which should match the source branch name. This is an alternative to labels, which are not supported by Bitbucket Server.

If you want to access a private repository, you must also provide the credentials for Basic auth (this is the only auth supported currently):

* `username`: The username to authenticate with. It only needs read access to the relevant repo.
* `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests.

In case of a Bitbucket App Token, use the `bearerToken` section instead:

* `tokenRef`: A `Secret` name and key containing the app token to use for requests.

In case of self-signed Bitbucket Server certificates, the following options can be useful:

* `insecure`: When true, skips validating the SCM provider's TLS certificate (defaults to false) - useful for self-signed TLS certificates.
* `caRef`: Optional `ConfigMap` name and key containing the Bitbucket Server certificates to trust - useful for self-signed TLS certificates. Possibly reference the Argo CD ConfigMap holding the trusted certs.

## Bitbucket Cloud

Fetch pull requests from a repo hosted on Bitbucket Cloud.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - pullRequest:
      bitbucket:
        # Workspace name that the repository is stored under. Required.
        owner: myproject
        # Repository slug. Required.
        repo: myrepository
        # URL of the Bitbucket API. (optional) Will default to 'https://api.bitbucket.org/2.0'.
        api: https://api.bitbucket.org/2.0
        # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken
        # authentication is required to access private repositories
        basicAuth:
          # The username to authenticate with
          username: myuser
          # Reference to a Secret containing the password or personal access token.
          passwordRef:
            secretName: mypassword
            key: password
        # Credentials for Bearer Token (App Token) authentication. Either basicAuth or bearerToken
        # authentication is required to access private repositories
        bearerToken:
          # Reference to a Secret containing the bearer token.
          tokenRef:
            secretName: repotoken
            key: token
      # Labels are not supported by Bitbucket Cloud, so filtering by label is not possible.
      # Filter PRs using the source branch name. (optional)
      filters:
      - branchMatch: ".*-argocd"
  template:
  # ...
```

- `owner`: Required name of the Bitbucket workspace
- `repo`: Required name of the Bitbucket repository.
- `api`: Optional URL to access the Bitbucket REST API. For the example above, an API request would be made to `https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/pullrequests`. If not set, defaults to `https://api.bitbucket.org/2.0`
- `branchMatch`: Optional regexp filter which should match the source branch name. This is an alternative to labels, which are not supported by Bitbucket Cloud.
If you want to access a private repository, Argo CD will need credentials to access repository in Bitbucket Cloud. You can use Bitbucket App Password (generated per user, with access to whole workspace), or Bitbucket App Token (generated per repository, with access limited to repository scope only). If both App Password and App Token are defined, App Token will be used. To use Bitbucket App Password, use `basicAuth` section. - `username`: The username to authenticate with. It only needs read access to the relevant repo. - `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests. In case of Bitbucket App Token, go with `bearerToken` section. - `tokenRef`: A `Secret` name and key containing the app token to use for requests. ## Azure DevOps Specify the organization, project and repository from which you want to fetch pull requests. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: azuredevops: # Azure DevOps org to scan. Required. organization: myorg # Azure DevOps project name to scan. Required. project: myproject # Azure DevOps repo name to scan. Required. repo: myrepository # The Azure DevOps API URL to talk to. If blank, use https://dev.azure.com/. api: https://dev.azure.com/ # Reference to a Secret containing an access token. (optional) tokenRef: secretName: azure-devops-token key: token # Labels is used to filter the PRs that you want to target. (optional) labels: - preview requeueAfterSeconds: 1800 template: # ... ``` * `organization`: Required name of the Azure DevOps organization. * `project`: Required name of the Azure DevOps project. * `repo`: Required name of the Azure DevOps repository. * `api`: If using self-hosted Azure DevOps Repos, the URL to access it. (Optional) * `tokenRef`: A `Secret` name and key containing the Azure DevOps access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories. (Optional) * `labels`: Filter the PRs to those containing **all** of the labels listed. (Optional) ## Filters Filters allow selecting which pull requests to generate for. Each filter can declare one or more conditions, all of which must pass. If multiple filters are present, any can match for a repository to be included. If no filters are specified, all pull requests will be processed. Currently, only a subset of filters is available when comparing with [SCM provider](Generators-SCM-Provider.md) filters. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: # ... # Include any pull request ending with "argocd". (optional) filters: - branchMatch: ".*-argocd" template: # ... ``` * `branchMatch`: A regexp matched against source branch names. * `targetBranchMatch`: A regexp matched against target branch names. [GitHub](#github) and [GitLab](#gitlab) also support a `labels` filter. ## Template As with all generators, several keys are available for replacement in the generated application. The following is a comprehensive Helm Application example; ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: # ... 
template: metadata: name: 'myapp--' spec: source: repoURL: 'https://github.com/myorg/myrepo.git' targetRevision: '' path: kubernetes/ helm: parameters: - name: "image.tag" value: "pull--" project: "my-project" destination: server: https://kubernetes.default.svc namespace: default ``` And, here is a robust Kustomize example; ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: myapps spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - pullRequest: # ... template: metadata: name: 'myapp--' spec: source: repoURL: 'https://github.com/myorg/myrepo.git' targetRevision: '' path: kubernetes/ kustomize: nameSuffix: '' commonLabels: app.kubernetes.io/instance: '-' images: - 'ghcr.io/myorg/myrepo:-' project: "my-project" destination: server: https://kubernetes.default.svc namespace: default ``` * `number`: The ID number of the pull request. * `title`: The title of the pull request. * `branch`: The name of the branch of the pull request head. * `branch_slug`: The branch name will be cleaned to be conform to the DNS label standard as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names), and truncated to 50 characters to give room to append/suffix-ing it with 13 more characters. * `target_branch`: The name of the target branch of the pull request. * `target_branch_slug`: The target branch name will be cleaned to be conform to the DNS label standard as defined in [RFC 1123](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names), and truncated to 50 characters to give room to append/suffix-ing it with 13 more characters. * `head_sha`: This is the SHA of the head of the pull request. * `head_short_sha`: This is the short SHA of the head of the pull request (8 characters long or the length of the head SHA if it's shorter). * `head_short_sha_7`: This is the short SHA of the head of the pull request (7 characters long or the length of the head SHA if it's shorter). * `labels`: The array of pull request labels. (Supported only for Go Template ApplicationSet manifests.) * `author`: The author/creator of the pull request. ## Webhook Configuration When using a Pull Request generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes. To eliminate this delay from polling, the ApplicationSet webhook server can be configured to receive webhook events, which will trigger Application generation by the Pull Request generator. The configuration is almost the same as the one described [in the Git generator](Generators-Git.md), but there is one difference: if you want to use the Pull Request Generator as well, additionally configure the following settings. !!! note The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](../webhook.md). ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet specific Ingress resource needs to be created to expose this service to the webhook source. ### Github webhook configuration In section 1, _"Create the webhook in the Git provider"_, add an event so that a webhook request will be sent when a pull request is created, closed, or label changed. 

Add the webhook URL with the URI `/api/webhook` and select `application/json` as the content type.

![Add Webhook URL](../../assets/applicationset/webhook-config-pullrequest-generator.png "Add Webhook URL")

Select `Let me select individual events` and enable the checkbox for `Pull requests`.

![Add Webhook](../../assets/applicationset/webhook-config-pull-request.png "Add Webhook Pull Request")

The Pull Request Generator will requeue when any of the following actions occurs:

- `opened`
- `closed`
- `reopened`
- `labeled`
- `unlabeled`
- `synchronized`

For more information about each event, please refer to the [official documentation](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads).

### Gitlab webhook configuration

Enable the checkbox for "Merge request events" in the Triggers list.

![Add Gitlab Webhook](../../assets/applicationset/webhook-config-merge-request-gitlab.png "Add Gitlab Merge request Webhook")

The Pull Request Generator will requeue when any of the following actions occurs:

- `open`
- `close`
- `reopen`
- `update`
- `merge`

For more information about each event, please refer to the [official documentation](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html#merge-request-events).

## Lifecycle

An Application will be generated when a Pull Request is discovered and the configured criteria are met, that is, for GitHub, when a Pull Request matches the specified `labels` and/or `pullRequestState`. An Application will be removed when a Pull Request no longer meets the specified criteria.
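
For reference, the webhook setup above requires exposing the ApplicationSet webhook service to the SCM provider. The following Ingress is a minimal illustrative sketch only: it assumes an NGINX ingress controller, the default `argocd-applicationset-controller` service name, webhook port `7000`, and a placeholder hostname, all of which should be adjusted to your installation.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-applicationset-webhook
  namespace: argocd
spec:
  ingressClassName: nginx            # assumption: an NGINX ingress controller is installed
  rules:
  - host: applicationset.example.com # placeholder hostname
    http:
      paths:
      - path: /api/webhook           # the path the SCM provider's webhook will call
        pathType: Prefix
        backend:
          service:
            name: argocd-applicationset-controller  # assumption: default service name
            port:
              number: 7000                          # assumption: default webhook port
```

With such an Ingress in place, the webhook URL configured in the Git provider would be `https://applicationset.example.com/api/webhook`.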

# Controlling if/when the ApplicationSet controller modifies `Application` resources

The ApplicationSet controller supports a number of settings that limit the ability of the controller to make changes to generated Applications, for example, preventing the controller from deleting child Applications.

These settings allow you to exert control over when, and how, changes are made to your Applications, and to their corresponding cluster resources (`Deployments`, `Services`, etc).

Here are some of the controller settings that may be modified to alter the ApplicationSet controller's resource-handling behaviour.

## Dry run: prevent ApplicationSet from creating, modifying, or deleting all Applications

To prevent the ApplicationSet controller from creating, modifying, or deleting any `Application` resources, you may enable `dry-run` mode. This essentially switches the controller into a "read only" mode, where the controller Reconcile loop will run, but no resources will be modified.

To enable dry-run, add `--dryrun true` to the ApplicationSet Deployment's container launch parameters.

See 'How to modify ApplicationSet container parameters' below for detailed steps on how to add this parameter to the controller.

## Managed Applications modification Policies

The ApplicationSet controller supports a parameter `--policy`, which is specified on launch (within the controller Deployment container), and which restricts what types of modifications will be made to managed Argo CD `Application` resources.

The `--policy` parameter takes four values: `sync`, `create-only`, `create-delete`, and `create-update`. (`sync` is the default, which is used if the `--policy` parameter is not specified; the other policies are described below.)

It is also possible to set this policy per ApplicationSet.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  syncPolicy:
    applicationsSync: create-only # create-update, create-delete, sync
```

- Policy `create-only`: Prevents the ApplicationSet controller from modifying or deleting Applications. **WARNING**: It doesn't prevent the Application controller from deleting Applications according to [ownerReferences](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) when deleting the ApplicationSet.
- Policy `create-update`: Prevents the ApplicationSet controller from deleting Applications. Update is allowed. **WARNING**: It doesn't prevent the Application controller from deleting Applications according to [ownerReferences](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) when deleting the ApplicationSet.
- Policy `create-delete`: Prevents the ApplicationSet controller from modifying Applications. Delete is allowed.
- Policy `sync`: Update and Delete are allowed.

If the controller parameter `--policy` is set, it takes precedence over the field `applicationsSync`. Per-ApplicationSet sync policies can be permitted by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_POLICY_OVERRIDE`, the `argocd-cmd-params-cm` key `applicationsetcontroller.enable.policy.override`, or the controller parameter `--enable-policy-override` directly (defaults to `false`).
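
For illustration, here is a minimal sketch of enabling that override through the `argocd-cmd-params-cm` ConfigMap. The key name comes from the paragraph above; the surrounding metadata assumes a standard Argo CD installation in the `argocd` namespace.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Allow individual ApplicationSets to set spec.syncPolicy.applicationsSync,
  # overriding the controller-wide --policy parameter.
  applicationsetcontroller.enable.policy.override: "true"
```

As with other `argocd-cmd-params-cm` settings, the ApplicationSet controller typically needs to be restarted to pick up the change.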

### Policy - `create-only`: Prevent ApplicationSet controller from modifying and deleting Applications

To allow the ApplicationSet controller to *create* `Application` resources, but prevent any further modification, such as *deletion*, or modification of Application fields, add this parameter in the ApplicationSet controller:

**WARNING**: Here, "*deletion*" refers to the case where, after comparing the generated Applications before and after reconciliation, some Applications no longer exist. It does not refer to Applications being deleted via their ownerReferences to the ApplicationSet. See [How to prevent Application controller from deleting Applications when deleting ApplicationSet](#how-to-prevent-application-controller-from-deleting-applications-when-deleting-applicationset).

```
--policy create-only
```

At the ApplicationSet level:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  syncPolicy:
    applicationsSync: create-only
```

### Policy - `create-update`: Prevent ApplicationSet controller from deleting Applications

To allow the ApplicationSet controller to create or modify `Application` resources, but prevent Applications from being deleted, add the following parameter to the ApplicationSet controller `Deployment`:

**WARNING**: Here, "*deletion*" refers to the case where, after comparing the generated Applications before and after reconciliation, some Applications no longer exist. It does not refer to Applications being deleted via their ownerReferences to the ApplicationSet. See [How to prevent Application controller from deleting Applications when deleting ApplicationSet](#how-to-prevent-application-controller-from-deleting-applications-when-deleting-applicationset).

```
--policy create-update
```

This may be useful to users looking for additional protection against deletion of the Applications generated by the controller.

At the ApplicationSet level:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  syncPolicy:
    applicationsSync: create-update
```

### How to prevent Application controller from deleting Applications when deleting ApplicationSet

By default, the `create-only` and `create-update` policies do not prevent deletion of Applications when the ApplicationSet itself is deleted. To prevent deletion in that case, you must add the finalizer to the ApplicationSet and use background cascading deletion. If you use foreground cascading deletion, there is no guarantee that the Applications will be preserved.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  # (...)
```

## Ignore certain changes to Applications

The ApplicationSet spec includes an `ignoreApplicationDifferences` field, which allows you to specify which fields of the ApplicationSet should be ignored when comparing Applications.

The field supports multiple ignore rules. Each ignore rule may specify a list of either `jsonPointers` or `jqPathExpressions` to ignore.

You may optionally also specify a `name` to apply the ignore rule to a specific Application, or omit the `name` to apply the ignore rule to all Applications.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  ignoreApplicationDifferences:
    - jsonPointers:
        - /spec/source/targetRevision
    - name: some-app
      jqPathExpressions:
        - .spec.source.helm.values
```

### Allow temporarily toggling auto-sync

One of the most common use cases for ignoring differences is to allow temporarily toggling auto-sync for an Application.
For example, if you have an ApplicationSet that is configured to automatically sync Applications, you may want to temporarily disable auto-sync for a specific Application. You can do this by adding an ignore rule for the `spec.syncPolicy.automated` field.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  ignoreApplicationDifferences:
    - jsonPointers:
        - /spec/syncPolicy
```

### Limitations of `ignoreApplicationDifferences`

When an ApplicationSet is reconciled, the controller will compare the ApplicationSet spec with the spec of each Application that it manages. If there are any differences, the controller will generate a patch to update the Application to match the ApplicationSet spec.

The generated patch is a MergePatch. According to the MergePatch documentation, "existing lists will be completely replaced by new lists" when there is a change to the list.

This limits the effectiveness of `ignoreApplicationDifferences` when the ignored field is in a list. For example, if you have an application with multiple sources, and you want to ignore changes to the `targetRevision` of one of the sources, changes in other fields or in other sources will cause the entire `sources` list to be replaced, and the `targetRevision` field will be reset to the value defined in the ApplicationSet.

For example, consider this ApplicationSet:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  ignoreApplicationDifferences:
    - jqPathExpressions:
        - .spec.sources[] | select(.repoURL == "https://git.example.com/org/repo1").targetRevision
  template:
    spec:
      sources:
      - repoURL: https://git.example.com/org/repo1
        targetRevision: main
      - repoURL: https://git.example.com/org/repo2
        targetRevision: main
```

You can freely change the `targetRevision` of the `repo1` source, and the ApplicationSet controller will not overwrite your change.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  sources:
  - repoURL: https://git.example.com/org/repo1
    targetRevision: fix/bug-123
  - repoURL: https://git.example.com/org/repo2
    targetRevision: main
```

However, if you change the `targetRevision` of the `repo2` source, the ApplicationSet controller will overwrite the entire `sources` field.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  sources:
  - repoURL: https://git.example.com/org/repo1
    targetRevision: main
  - repoURL: https://git.example.com/org/repo2
    targetRevision: main
```

!!! note
    [Future improvements](https://github.com/argoproj/argo-cd/issues/15975) to the ApplicationSet controller may eliminate this problem. For example, the `ref` field might be made a merge key, allowing the ApplicationSet controller to generate and use a StrategicMergePatch instead of a MergePatch. You could then target a specific source by `ref`, ignore changes to a field in that source, and changes to other sources would not cause the ignored field to be overwritten.

## Prevent an `Application`'s child resources from being deleted, when the parent Application is deleted

By default, when an `Application` resource is deleted by the ApplicationSet controller, all of the child resources of the Application will be deleted as well (such as, all of the Application's `Deployments`, `Services`, etc).

To prevent an Application's child resources from being deleted when the parent Application is deleted, add the `preserveResourcesOnDeletion: true` field to the `syncPolicy` of the ApplicationSet:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  syncPolicy:
    preserveResourcesOnDeletion: true
```

More information on the specific behaviour of `preserveResourcesOnDeletion`, and on deletion in the ApplicationSet controller and Argo CD in general, can be found on the [Application Deletion](Application-Deletion.md) page.

## Prevent an Application's child resources from being modified

Changes made to the ApplicationSet will propagate to the Applications managed by the ApplicationSet, and then Argo CD will propagate the Application changes to the underlying cluster resources (as per [Argo CD Integration](Argo-CD-Integration.md)).

The propagation of Application changes to the cluster is managed by the [automated sync settings](../../user-guide/auto_sync.md), which are referenced in the ApplicationSet `template` field:

- `spec.template.syncPolicy.automated`: If enabled, changes to Applications will automatically propagate to the target cluster's resources.
    - Unset this within the ApplicationSet template to 'pause' updates to cluster resources managed by the `Application` resource.
- `spec.template.syncPolicy.automated.prune`: By default, automated sync will not delete resources when Argo CD detects the resource is no longer defined in Git.
    - For extra safety, set this to false to prevent unexpected changes to the backing Git repository from affecting cluster resources.

## How to modify ApplicationSet container launch parameters

There are a couple of ways to modify the ApplicationSet container parameters in order to enable the above settings.

### A) Use `kubectl edit` to modify the deployment on the cluster

Edit the applicationset-controller `Deployment` resource on the cluster:

```
kubectl edit deployment/argocd-applicationset-controller -n argocd
```

Locate the `.spec.template.spec.containers[0].command` field, and add the required parameter(s):

```yaml
spec:
  # (...)
  template:
    # (...)
    spec:
      containers:
      - command:
        - entrypoint.sh
        - argocd-applicationset-controller
        # Insert new parameters here, for example:
        # --policy create-only
    # (...)
```

Save and exit the editor. Wait for a new `Pod` to start containing the updated parameters.

### Or, B) Edit the `install.yaml` manifest for the ApplicationSet installation

Rather than directly editing the cluster resource, you may instead choose to modify the installation YAML that is used to install the ApplicationSet controller. This is applicable to ApplicationSet versions earlier than 0.4.0.

```bash
# Clone the repository
git clone https://github.com/argoproj/applicationset

# Checkout the version that corresponds to the one you have installed.
git checkout "(version of applicationset)"
# example: git checkout "0.1.0"

cd applicationset/manifests

# open 'install.yaml' in a text editor, make the same modifications to Deployment
# as described in the previous section.

# Apply the change to the cluster
kubectl apply -n argocd -f install.yaml
```

## Preserving changes made to an Application's annotations and labels

!!! note
    The same behavior can be achieved on a per-app basis using the [`ignoreApplicationDifferences`](#ignore-certain-changes-to-applications) feature described above. However, preserved fields may be configured globally, a feature that is not yet available for `ignoreApplicationDifferences`.

It is common practice in Kubernetes to store state in annotations; operators will often make use of this. To allow for this, it is possible to configure a list of annotations that the ApplicationSet should preserve when reconciling.

For example, imagine that we have an Application created from an ApplicationSet, but a custom annotation and label have since been added (to the Application) that do not exist in the `ApplicationSet` resource:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # This annotation and label exists only on this Application, and not in
  # the parent ApplicationSet template:
  annotations:
    my-custom-annotation: some-value
  labels:
    my-custom-label: some-value
spec:
  # (...)
```

To preserve this annotation and label we can use the `preservedFields` property of the `ApplicationSet` like so:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  preservedFields:
    annotations: ["my-custom-annotation"]
    labels: ["my-custom-label"]
```

The ApplicationSet controller will leave this annotation and label as-is when reconciling, even though they are not defined in the metadata of the ApplicationSet itself.

By default, the Argo CD notifications and the Argo CD refresh type annotations are also preserved.

!!! note
    One can also set global preserved fields for the controller by passing a comma-separated list of annotations and labels to `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS` and `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS` respectively.

## Debugging unexpected changes to Applications

When the ApplicationSet controller makes a change to an application, it logs the patch at the debug level. To see these logs, set the log level to debug in the `argocd-cmd-params-cm` ConfigMap in the `argocd` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  applicationsetcontroller.log.level: debug
```

## Previewing changes

To preview changes that the ApplicationSet controller would make to Applications, you can create the AppSet in dry-run mode. This works whether the AppSet already exists or not.

```shell
argocd appset create --dry-run ./appset.yaml -o json | jq -r '.status.resources[].name'
```

The dry-run will populate the returned ApplicationSet's status with the Applications which would be managed with the given config. You can compare to the existing Applications to see what would change.
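
As a rough sketch of such a comparison (assuming the hypothetical `./appset.yaml` manifest above and a logged-in `argocd` CLI; exact output formats can vary between Argo CD versions, so treat this as illustrative):

```shell
# Names the ApplicationSet would manage, according to the dry-run
argocd appset create --dry-run ./appset.yaml -o json \
  | jq -r '.status.resources[].name' | sort > /tmp/desired-apps.txt

# Names of the Applications that currently exist
argocd app list -o name | sort > /tmp/current-apps.txt

# Lines only in the first file would be newly created; lines only in the
# second file exist today but are not part of the dry-run result.
diff /tmp/desired-apps.txt /tmp/current-apps.txt
```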

# Getting Started

This guide assumes you are familiar with Argo CD and its basic concepts. See the [Argo CD documentation](../../core_concepts.md) for more information.

## Requirements

* Installed [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool
* Have a [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`).

## Installation

There are a few options for installing the ApplicationSet controller.

### A) Install ApplicationSet as part of Argo CD

Starting with Argo CD v2.3, the ApplicationSet controller is bundled with Argo CD. It is no longer necessary to install the ApplicationSet controller separately from Argo CD.

Follow the [Argo CD Getting Started](../../getting_started.md) instructions for more information.

### B) Install ApplicationSet into an existing Argo CD install (pre-Argo CD v2.3)

**Note**: These instructions only apply to versions of Argo CD before v2.3.0.

The ApplicationSet controller *must* be installed into the same namespace as the Argo CD it is targeting.

Presuming that Argo CD is installed into the `argocd` namespace, run the following command:

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/applicationset/v0.4.0/manifests/install.yaml
```

Once installed, the ApplicationSet controller requires no additional setup.

The `manifests/install.yaml` file contains the Kubernetes manifests required to install the ApplicationSet controller:

- CustomResourceDefinition for `ApplicationSet` resource
- Deployment for `argocd-applicationset-controller`
- ServiceAccount for use by ApplicationSet controller, to access Argo CD resources
- Role granting RBAC access to needed resources, for ServiceAccount
- RoleBinding to bind the ServiceAccount and Role

<!--
### C) Install development builds of ApplicationSet controller for access to the latest features

Development builds of the ApplicationSet controller can be installed by running the following command:

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/applicationset/master/manifests/install.yaml
```

With this option you will need to ensure that Argo CD is already installed into the `argocd` namespace.

How it works:

- After each successful commit to *argoproj/applicationset* `master` branch, a GitHub action will run that performs a container build/push to [`argoproj/argocd-applicationset:latest`](https://quay.io/repository/argoproj/argocd-applicationset?tab=tags)
- [Documentation for the `master`-branch-based developer builds](https://argocd-applicationset.readthedocs.io/en/master/) is available from Read the Docs.

!!! warning
    Development builds contain newer features and bug fixes, but are more likely to be unstable, as compared to release builds.

See the `master` branch [Read the Docs](https://argocd-applicationset.readthedocs.io/en/master/) page for documentation on post-release features.
-->

<!--
## Upgrading to a Newer Release

To upgrade from an older release (eg 0.1.0, 0.2.0) to a newer release (eg 0.3.0), you only need to `kubectl apply` the `install.yaml` for the new release, as described under *Installation* above.

There are no manual upgrade steps required between any release of ApplicationSet controller (including 0.1.0, 0.2.0, and 0.3.0) as of this writing; however, see the behaviour changes in ApplicationSet controller v0.3.0, below.

### Behaviour changes in ApplicationSet controller v0.3.0

There are no breaking changes, however, a couple of behaviours have changed from v0.2.0 to v0.3.0. See the [v0.3.0 upgrade page](upgrading/v0.2.0-to-v0.3.0.md) for details.
-->

## Enabling high availability mode

To enable high availability, add the following flag to the argocd-applicationset-controller container and increase the number of replicas:

```
--enable-leader-election=true
```

Make the following change in `manifests/install.yaml`:

```yaml
spec:
  containers:
  - command:
    - entrypoint.sh
    - argocd-applicationset-controller
    - --enable-leader-election=true
```

### Optional: Additional Post-Upgrade Safeguards

See the [Controlling Resource Modification](Controlling-Resource-Modification.md) page for information on additional parameters you may wish to add to the ApplicationSet Resource in `install.yaml`, to provide extra security against any initial, unexpected post-upgrade behaviour.

For instance, to temporarily prevent the upgraded ApplicationSet controller from making any changes, you could:

- Enable dry-run
- Use a create-only policy
- Enable `preserveResourcesOnDeletion` on your ApplicationSets
- Temporarily disable automated sync in your ApplicationSets' template

These parameters would allow you to observe/control the behaviour of the new version of the ApplicationSet controller in your environment, to ensure you are happy with the result (see the ApplicationSet log file for details). Just don't forget to remove any temporary changes when you are done testing!

However, as mentioned above, these steps are not strictly necessary: upgrading the ApplicationSet controller should be a minimally invasive process, and these are only suggested as an optional precaution for extra safety.

## Next Steps

Once your ApplicationSet controller is up and running, proceed to [Use Cases](Use-Cases.md) to learn more about the supported scenarios, or proceed directly to [Generators](Generators.md) to see example `ApplicationSet` resources.
argocd
# Go Template ## Introduction ApplicationSet is able to use [Go Text Template](https://pkg.go.dev/text/template). To activate this feature, add `goTemplate: true` to your ApplicationSet manifest. The [Sprig function library](https://masterminds.github.io/sprig/) (except for `env`, `expandenv` and `getHostByName`) is available in addition to the default Go Text Template functions. An additional `normalize` function makes any string parameter usable as a valid DNS name by replacing invalid characters with hyphens and truncating at 253 characters. This is useful when making parameters safe for things like Application names. Another `slugify` function has been added which, by default, sanitizes and smart truncates (it doesn't cut a word into 2). This function accepts a couple of arguments: - The first argument (if provided) is an integer specifying the maximum length of the slug. - The second argument (if provided) is a boolean indicating whether smart truncation is enabled. - The last argument (if provided) is the input name that needs to be slugified. #### Usage example ``` apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: test-appset spec: ... template: metadata: name: 'hellos3--' annotations: label-1: '' label-2: '' label-3: '' ``` If you want to customize [options defined by text/template](https://pkg.go.dev/text/template#Template.Option), you can add the `goTemplateOptions: ["opt1", "opt2", ...]` key to your ApplicationSet next to `goTemplate: true`. Note that at the time of writing, there is only one useful option defined, which is `missingkey=error`. The recommended setting of `goTemplateOptions` is `["missingkey=error"]`, which ensures that if undefined values are looked up by your template then an error is reported instead of being ignored silently. This is not currently the default behavior, for backwards compatibility. ## Motivation Go Template is the Go Standard for string templating. It is also more powerful than fasttemplate (the default templating engine) as it allows doing complex templating logic. ## Limitations Go templates are applied on a per-field basis, and only on string fields. Here are some examples of what is **not** possible with Go text templates: - Templating a boolean field. ::yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet spec: goTemplate: true goTemplateOptions: ["missingkey=error"] template: spec: source: helm: useCredentials: "" # This field may NOT be templated, because it is a boolean field. - Templating an object field: ::yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet spec: goTemplate: true goTemplateOptions: ["missingkey=error"] template: spec: syncPolicy: "" # This field may NOT be templated, because it is an object field. - Using control keywords across fields: ::yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet spec: goTemplate: true goTemplateOptions: ["missingkey=error"] template: spec: source: helm: parameters: # Each of these fields is evaluated as an independent template, so the first one will fail with an error. - name: "" - name: "" value: "" - name: throw-away value: "" - Signature verification is not supported for the templated `project` field when using the Git generator. ::yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet spec: goTemplate: true template: spec: project: ## Migration guide ### Globals All your templates must replace parameters with GoTemplate Syntax: Example: `` becomes `` ### Cluster Generators By activating Go Templating, `` becomes an object. 
- `` becomes `` - `` becomes `` ### Git Generators By activating Go Templating, `` becomes an object. Therefore, some changes must be made to the Git generators' templating: - `` becomes `` - `` becomes `` - `` becomes `` - `` becomes `` - `` becomes `` - `` becomes `` - `` if being used in the file generator becomes `` Here is an example: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons spec: generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD directories: - path: applicationset/examples/git-generator-directory/cluster-addons/* template: metadata: name: '' spec: project: default source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' ``` becomes ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD directories: - path: applicationset/examples/git-generator-directory/cluster-addons/* template: metadata: name: '' spec: project: default source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: '' destination: server: https://kubernetes.default.svc namespace: '' ``` It is also possible to use Sprig functions to construct the path variables manually: | with `goTemplate: false` | with `goTemplate: true` | with `goTemplate: true` + Sprig | | ------------ | ----------- | --------------------- | | `` | `` | `` | | `` | `` | `` | | `` | `` | `` | | `` | `` | `` | | `` | `` | `` | | `` | `-` | `` | ## Available template functions ApplicationSet controller provides: - all [sprig](http://masterminds.github.io/sprig/) Go templates function except `env`, `expandenv` and `getHostByName` - `normalize`: sanitizes the input so that it complies with the following rules: 1. contains no more than 253 characters 2. contains only lowercase alphanumeric characters, '-' or '.' 3. starts and ends with an alphanumeric character - `slugify`: sanitizes like `normalize` and smart truncates (it doesn't cut a word into 2) like described in the [introduction](#introduction) section. - `toYaml` / `fromYaml` / `fromYamlArray` helm like functions ## Examples ### Basic Go template usage This example shows basic string parameter substitution. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - list: elements: - cluster: engineering-dev url: https://1.2.3.4 - cluster: engineering-prod url: https://2.4.6.8 - cluster: finance-preprod url: https://9.8.7.6 template: metadata: name: '-guestbook' spec: project: my-project source: repoURL: https://github.com/infra-team/cluster-deployments.git targetRevision: HEAD path: guestbook/ destination: server: '' namespace: guestbook ``` ### Fallbacks for unset parameters For some generators, a parameter of a certain name might not always be populated (for example, with the values generator or the git files generator). In these cases, you can use a Go template to provide a fallback value. 
```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - list: elements: - cluster: engineering-dev url: https://kubernetes.default.svc - cluster: engineering-prod url: https://kubernetes.default.svc nameSuffix: -my-name-suffix template: metadata: name: '' spec: project: default source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: applicationset/examples/list-generator/guestbook/ destination: server: '' namespace: guestbook ``` This ApplicationSet will produce an Application called `engineering-dev` and another called `engineering-prod-my-name-suffix`. Note that unset parameters are an error, so you need to avoid looking up a property that doesn't exist. Instead, use template functions like `dig` to do the lookup with a default. If you prefer to have unset parameters default to zero, you can remove `goTemplateOptions: ["missingkey=error"]` or set it to `goTemplateOptions: ["missingkey=invalid"]`
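As a concrete illustration of the `dig` fallback described above, here is a minimal sketch of a template `name` field that appends an optional suffix. The expression is an illustrative assumption built from the `cluster` and `nameSuffix` parameters used in the example, not an excerpt from the generator output:

```yaml
  template:
    metadata:
      # `dig "nameSuffix" "" .` looks up the key "nameSuffix" in the parameter
      # map and falls back to an empty string when it is unset, so the lookup
      # does not trip missingkey=error.
      name: '{{ .cluster }}{{ dig "nameSuffix" "" . }}'
```

Because `dig` supplies an empty-string default when `nameSuffix` is absent, this pattern works even with `missingkey=error` enabled.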
# Merge Generator The Merge generator combines parameters produced by the base (first) generator with matching parameter sets produced by subsequent generators. A _matching_ parameter set has the same values for the configured _merge keys_. _Non-matching_ parameter sets are discarded. Override precedence is bottom-to-top: the values from a matching parameter set produced by generator 3 will take precedence over the values from the corresponding parameter set produced by generator 2. Using a Merge generator is appropriate when a subset of parameter sets require overriding. ## Example: Base Cluster generator + override Cluster generator + List generator As an example, imagine that we have two clusters: - A `staging` cluster (at `https://1.2.3.4`) - A `production` cluster (at `https://2.4.6.8`) ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-git spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: # merge 'parent' generator - merge: mergeKeys: - server generators: - clusters: values: kafka: 'true' redis: 'false' # For clusters with a specific label, enable Kafka. - clusters: selector: matchLabels: use-kafka: 'false' values: kafka: 'false' # For a specific cluster, enable Redis. - list: elements: - server: https://2.4.6.8 values.redis: 'true' template: metadata: name: '' spec: project: '' source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: app helm: parameters: - name: kafka value: '' - name: redis value: '' destination: server: '' namespace: default ``` The base Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging and production cluster secrets, and produces two corresponding sets of parameters: ```yaml - name: staging server: https://1.2.3.4 values.kafka: 'true' values.redis: 'false' - name: production server: https://2.4.6.8 values.kafka: 'true' values.redis: 'false' ``` The override Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging cluster secret (which has the required label), and produces the following parameters: ```yaml - name: staging server: https://1.2.3.4 values.kafka: 'false' ``` When merged with the base generator's parameters, the `values.kafka` value for the staging cluster is set to `'false'`. ```yaml - name: staging server: https://1.2.3.4 values.kafka: 'false' values.redis: 'false' - name: production server: https://2.4.6.8 values.kafka: 'true' values.redis: 'false' ``` Finally, the List cluster generates a single set of parameters: ```yaml - server: https://2.4.6.8 values.redis: 'true' ``` When merged with the updated base parameters, the `values.redis` value for the production cluster is set to `'true'`. This is the merge generator's final output: ```yaml - name: staging server: https://1.2.3.4 values.kafka: 'false' values.redis: 'false' - name: production server: https://2.4.6.8 values.kafka: 'true' values.redis: 'true' ``` ## Example: Use value interpolation in merge Some generators support additional values and interpolating from generated variables to selected values. This can be used to teach the merge generator which generated variables to use to combine different generators. 
The following example combines discovered clusters and a git repository by cluster labels and the branch name: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-git spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: # merge 'parent' generator: # Use the selector set by both child generators to combine them. - merge: mergeKeys: # Note that this would not work with goTemplate enabled, # nested merge keys are not supported there. - values.selector generators: # Assuming, all configured clusters have a label for their location: # Set the selector to this location. - clusters: values: selector: '' # The git repo may have different directories which correspond to the # cluster locations, using these as a selector. - git: repoURL: https://github.com/argoproj/argocd-example-apps/ revision: HEAD directories: - path: '*' values: selector: '' template: metadata: name: '' spec: project: '' source: repoURL: https://github.com/argoproj/argocd-example-apps/ # The cluster values field for each generator will be substituted here: targetRevision: HEAD path: '' destination: server: '' namespace: default ``` Assuming a cluster named `germany01` with the label `metadata.labels.location=Germany` and a git repository containing a directory called `Germany`, this could combine to values as follows: ```yaml # From the cluster generator - name: germany01 server: https://1.2.3.4 # From the git generator path: Germany # Combining selector with the merge generator values.selector: 'Germany' # More values from cluster & git generator # […] ``` ## Restrictions 1. You should specify only a single generator per array entry. This is not valid: - merge: generators: - list: # (...) git: # (...) - While this *will* be accepted by Kubernetes API validation, the controller will report an error on generation. Each generator should be specified in a separate array element, as in the examples above. 1. The Merge generator does not support [`template` overrides](Template.md#generator-templates) specified on child generators. This `template` will not be processed: - merge: generators: - list: elements: - # (...) template: { } # Not processed 1. Combination-type generators (Matrix or Merge) can only be nested once. For example, this will not work: - merge: generators: - merge: generators: - merge: # This third level is invalid. generators: - list: elements: - # (...) 1. Merging on nested values while using `goTemplate: true` is currently not supported, this will not work spec: goTemplate: true generators: - merge: mergeKeys: - values.merge 1. When using a Merge generator nested inside another Matrix or Merge generator, [Post Selectors](Generators-Post-Selector.md) for this nested generator's generators will only be applied when enabled via `spec.applyNestedSelectors`. - merge: generators: - merge: generators: - list elements: - # (...) selector: { } # Only applied when applyNestedSelectors is true
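For contrast with the first restriction above, a valid layout keeps each child generator in its own array element and ensures every child can produce the merge key. The sketch below reuses the `server` merge key and the List element from the earlier example; the bare `clusters: {}` entry (matching every registered cluster) is an assumption chosen for brevity:

```yaml
  generators:
    - merge:
        mergeKeys:
          - server
        generators:
          # one generator per array element
          - clusters: {}                 # base generator: all registered clusters
          - list:
              elements:
                - server: https://2.4.6.8
                  values.redis: 'true'   # overrides/extends the matching base parameter set
```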
# Matrix Generator The Matrix generator combines the parameters generated by two child generators, iterating through every combination of each generator's generated parameters. By combining both generators parameters, to produce every possible combination, this allows you to gain the intrinsic properties of both generators. For example, a small subset of the many possible use cases include: - *SCM Provider Generator + Cluster Generator*: Scanning the repositories of a GitHub organization for application resources, and targeting those resources to all available clusters. - *Git File Generator + List Generator*: Providing a list of applications to deploy via configuration files, with optional configuration options, and deploying them to a fixed list of clusters. - *Git Directory Generator + Cluster Decision Resource Generator*: Locate application resources contained within folders of a Git repository, and deploy them to a list of clusters provided via an external custom resource. - And so on... Any set of generators may be used, with the combined values of those generators inserted into the `template` parameters, as usual. **Note**: If both child generators are Git generators, one or both of them must use the `pathParamPrefix` option to avoid conflicts when merging the child generators’ items. ## Example: Git Directory generator + Cluster generator As an example, imagine that we have two clusters: - A `staging` cluster (at `https://1.2.3.4`) - A `production` cluster (at `https://2.4.6.8`) And our application YAMLs are defined in a Git repository: - [Argo Workflows controller](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/cluster-addons/argo-workflows) - [Prometheus operator](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/cluster-addons/prometheus-operator) Our goal is to deploy both applications onto both clusters, and, more generally, in the future to automatically deploy new applications in the Git repository, and to new clusters defined within Argo CD, as well. For this we will use the Matrix generator, with the Git and the Cluster as child generators: ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-git spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: # matrix 'parent' generator - matrix: generators: # git generator, 'child' #1 - git: repoURL: https://github.com/argoproj/argo-cd.git revision: HEAD directories: - path: applicationset/examples/matrix/cluster-addons/* # cluster generator, 'child' #2 - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster template: metadata: name: '-' spec: project: '' source: repoURL: https://github.com/argoproj/argo-cd.git targetRevision: HEAD path: '' destination: server: '' namespace: '' ``` First, the Git directory generator will scan the Git repository, discovering directories under the specified path. 
It discovers the argo-workflows and prometheus-operator applications, and produces two corresponding sets of parameters:

```yaml
- path: /examples/git-generator-directory/cluster-addons/argo-workflows
  path.basename: argo-workflows

- path: /examples/git-generator-directory/cluster-addons/prometheus-operator
  path.basename: prometheus-operator
```

Next, the Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging and production cluster secrets, and produces two corresponding sets of parameters:

```yaml
- name: staging
  server: https://1.2.3.4

- name: production
  server: https://2.4.6.8
```

Finally, the Matrix generator will combine both sets of outputs and produce:

```yaml
- name: staging
  server: https://1.2.3.4
  path: /examples/git-generator-directory/cluster-addons/argo-workflows
  path.basename: argo-workflows

- name: staging
  server: https://1.2.3.4
  path: /examples/git-generator-directory/cluster-addons/prometheus-operator
  path.basename: prometheus-operator

- name: production
  server: https://2.4.6.8
  path: /examples/git-generator-directory/cluster-addons/argo-workflows
  path.basename: argo-workflows

- name: production
  server: https://2.4.6.8
  path: /examples/git-generator-directory/cluster-addons/prometheus-operator
  path.basename: prometheus-operator
```

(*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/matrix).*)

## Using Parameters from one child generator in another child generator

The Matrix generator allows using the parameters generated by one child generator inside another child generator. Below is an example that uses a git-files generator in conjunction with a cluster generator.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-git
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    # matrix 'parent' generator
    - matrix:
        generators:
          # git generator, 'child' #1
          - git:
              repoURL: https://github.com/argoproj/applicationset.git
              revision: HEAD
              files:
                - path: "examples/git-generator-files-discovery/cluster-config/**/config.json"
          # cluster generator, 'child' #2
          - clusters:
              selector:
                matchLabels:
                  argocd.argoproj.io/secret-type: cluster
                  kubernetes.io/environment: ''
  template:
    metadata:
      name: '-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/applicationset.git
        targetRevision: HEAD
        path: "examples/git-generator-files-discovery/apps/guestbook"
      destination:
        server: ''
        namespace: guestbook
```

Here is the corresponding folder structure for the git repository used by the git-files generator:

```
├── apps
│   └── guestbook
│       ├── guestbook-ui-deployment.yaml
│       ├── guestbook-ui-svc.yaml
│       └── kustomization.yaml
├── cluster-config
│   └── engineering
│       ├── dev
│       │   └── config.json
│       └── prod
│           └── config.json
└── git-generator-files.yaml
```

In the above example, the `` parameters produced by the git-files generator will resolve to `dev` and `prod`. In the second child generator, the label selector with label `kubernetes.io/environment: ` will resolve with the values produced by the first child generator's parameters (`kubernetes.io/environment: prod` and `kubernetes.io/environment: dev`). So in the above example, clusters with the label `kubernetes.io/environment: prod` will have only prod-specific configuration (i.e. `prod/config.json`) applied to them, whereas clusters with the label `kubernetes.io/environment: dev` will have only dev-specific configuration (i.e.
`dev/config.json`) ## Overriding parameters from one child generator in another child generator The Matrix Generator allows parameters with the same name to be defined in multiple child generators. This is useful, for example, to define default values for all stages in one generator and override them with stage-specific values in another generator. The example below generates a Helm-based application using a matrix generator with two git generators: the first provides stage-specific values (one directory per stage) and the second provides global values for all stages. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: parameter-override-example spec: generators: - matrix: generators: - git: repoURL: https://github.com/example/values.git revision: HEAD files: - path: "**/stage.values.yaml" - git: repoURL: https://github.com/example/values.git revision: HEAD files: - path: "global.values.yaml" goTemplate: true template: metadata: name: example spec: project: default source: repoURL: https://github.com/example/example-app.git targetRevision: HEAD path: . helm: values: | ` }} destination: server: in-cluster namespace: default ``` Given the following structure/content of the example/values repository: ``` ├── test │ └── stage.values.yaml │ stageName: test │ cpuRequest: 100m │ debugEnabled: true ├── staging │ └── stage.values.yaml │ stageName: staging ├── production │ └── stage.values.yaml │ stageName: production │ memoryLimit: 512Mi │ debugEnabled: false └── global.values.yaml cpuRequest: 200m memoryLimit: 256Mi debugEnabled: true ``` The matrix generator above would yield the following results: ```yaml - stageName: test cpuRequest: 100m memoryLimit: 256Mi debugEnabled: true - stageName: staging cpuRequest: 200m memoryLimit: 256Mi debugEnabled: true - stageName: production cpuRequest: 200m memoryLimit: 512Mi debugEnabled: false ``` ## Example: Two Git Generators Using `pathParamPrefix` The matrix generator will fail if its children produce results containing identical keys with differing values. This poses a problem for matrix generators where both children are Git generators since they auto-populate `path`-related parameters in their outputs. To avoid this problem, specify a `pathParamPrefix` on one or both of the child generators to avoid conflicting parameter keys in the output. ```yaml apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: two-gits-with-path-param-prefix spec: goTemplate: true goTemplateOptions: ["missingkey=error"] generators: - matrix: generators: # git file generator referencing files containing details about each # app to be deployed (e.g., `appName`). - git: repoURL: https://github.com/some-org/some-repo.git revision: HEAD files: - path: "apps/*.json" pathParamPrefix: app # git file generator referencing files containing details about # locations to which each app should deploy (e.g., `region` and # `clusterName`). - git: repoURL: https://github.com/some-org/some-repo.git revision: HEAD files: - path: "targets//*.json" pathParamPrefix: target template: {} # ... 
``` Then, given the following file structure/content: ``` ├── apps │ ├── app-one.json │ │ { "appName": "app-one" } │ └── app-two.json │ { "appName": "app-two" } └── targets ├── app-one │ ├── east-cluster-one.json │ │ { "region": "east", "clusterName": "cluster-one" } │ └── east-cluster-two.json │ { "region": "east", "clusterName": "cluster-two" } └── app-two ├── east-cluster-one.json │ { "region": "east", "clusterName": "cluster-one" } └── west-cluster-three.json { "region": "west", "clusterName": "cluster-three" } ``` …the matrix generator above would yield the following results: ```yaml - appName: app-one app.path: /apps app.path.filename: app-one.json # plus additional path-related parameters from the first child generator, all # prefixed with "app". region: east clusterName: cluster-one target.path: /targets/app-one target.path.filename: east-cluster-one.json # plus additional path-related parameters from the second child generator, all # prefixed with "target". - appName: app-one app.path: /apps app.path.filename: app-one.json region: east clusterName: cluster-two target.path: /targets/app-one target.path.filename: east-cluster-two.json - appName: app-two app.path: /apps app.path.filename: app-two.json region: east clusterName: cluster-one target.path: /targets/app-two target.path.filename: east-cluster-one.json - appName: app-two app.path: /apps app.path.filename: app-two.json region: west clusterName: cluster-three target.path: /targets/app-two target.path.filename: west-cluster-three.json ``` ## Restrictions 1. The Matrix generator currently only supports combining the outputs of only two child generators (eg does not support generating combinations for 3 or more). 1. You should specify only a single generator per array entry, eg this is not valid: - matrix: generators: - list: # (...) git: # (...) - While this *will* be accepted by Kubernetes API validation, the controller will report an error on generation. Each generator should be specified in a separate array element, as in the examples above. 1. The Matrix generator does not currently support [`template` overrides](Template.md#generator-templates) specified on child generators, eg this `template` will not be processed: - matrix: generators: - list: elements: - # (...) template: { } # Not processed 1. Combination-type generators (matrix or merge) can only be nested once. For example, this will not work: - matrix: generators: - matrix: generators: - matrix: # This third level is invalid. generators: - list: elements: - # (...) 1. When using parameters from one child generator inside another child generator, the child generator that *consumes* the parameters **must come after** the child generator that *produces* the parameters. For example, the below example would be invalid (cluster-generator must come after the git-files generator): - matrix: generators: # cluster generator, 'child' #1 - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster kubernetes.io/environment: '' # is produced by git-files generator # git generator, 'child' #2 - git: repoURL: https://github.com/argoproj/applicationset.git revision: HEAD files: - path: "examples/git-generator-files-discovery/cluster-config/**/config.json" 1. You cannot have both child generators consuming parameters from each another. In the example below, the cluster generator is consuming the `` parameter produced by the git-files generator, whereas the git-files generator is consuming the `` parameter produced by the cluster generator. 
This will result in a circular dependency, which is invalid. - matrix: generators: # cluster generator, 'child' #1 - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster kubernetes.io/environment: '' # is produced by git-files generator # git generator, 'child' #2 - git: repoURL: https://github.com/argoproj/applicationset.git revision: HEAD files: - path: "examples/git-generator-files-discovery/cluster-config/engineering/**/config.json" # is produced by cluster generator 1. When using a Matrix generator nested inside another Matrix or Merge generator, [Post Selectors](Generators-Post-Selector.md) for this nested generator's generators will only be applied when enabled via `spec.applyNestedSelectors`. You may also need to enable this even if your Post Selectors are not within the nested matrix or Merge generator, but are instead a sibling of a nested Matrix or Merge generator. - matrix: generators: - matrix: generators: - list elements: - # (...) selector: { } # Only applied when applyNestedSelectors is true
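To show where `applyNestedSelectors` fits, here is a hedged sketch of a Matrix generator nested inside another Matrix generator, with a Post Selector on one of the nested children. The resource name, the `addon` parameter, and the empty `template` are placeholders; treat the field placement as an illustration of the restriction above rather than a verified manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nested-selector-example
spec:
  applyNestedSelectors: true          # required for the nested selector below to take effect
  generators:
    - matrix:
        generators:
          # nested Matrix generator, 'child' #1
          - matrix:
              generators:
                - clusters: {}
                - list:
                    elements:
                      - addon: argo-workflows
                  selector:            # Post Selector on a nested child generator
                    matchLabels:
                      addon: argo-workflows
          # git generator, 'child' #2
          - git:
              repoURL: https://github.com/argoproj/argo-cd.git
              revision: HEAD
              directories:
                - path: applicationset/examples/matrix/cluster-addons/*
  template: {} # application template omitted for brevity
```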
# Introduction to ApplicationSet controller

## Introduction

The ApplicationSet controller is a [Kubernetes controller](https://kubernetes.io/docs/concepts/architecture/controller/) that adds support for an `ApplicationSet` [CustomResourceDefinition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD). This controller/CRD enables both automation and greater flexibility managing [Argo CD](../../index.md) Applications across a large number of clusters and within monorepos, plus it makes self-service usage possible on multitenant Kubernetes clusters.

The ApplicationSet controller works alongside an existing [Argo CD installation](../../index.md). Argo CD is a declarative, GitOps continuous delivery tool, which allows developers to define and control deployment of Kubernetes application resources from within their existing Git workflow.

Starting with Argo CD v2.3, the ApplicationSet controller is bundled with Argo CD.

The ApplicationSet controller supplements Argo CD by adding additional features in support of cluster-administrator-focused scenarios. The `ApplicationSet` controller provides:

- The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters with Argo CD
- The ability to use a single Kubernetes manifest to deploy multiple applications from one or multiple Git repositories with Argo CD
- Improved support for monorepos: in the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository
- Within multitenant clusters, improves the ability of individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces)

!!! note
    Be aware of the [security implications](./Security.md) of ApplicationSets before using them.

## The ApplicationSet resource

This example defines a new `guestbook` resource of kind `ApplicationSet`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
      - cluster: engineering-prod
        url: https://2.4.6.8
      - cluster: finance-preprod
        url: https://9.8.7.6
  template:
    metadata:
      name: '{{.cluster}}-guestbook'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/{{.cluster}}
      destination:
        server: '{{.url}}'
        namespace: guestbook
```

In this example, we want to deploy our `guestbook` application (with the Kubernetes resources for this application coming from Git, since this is GitOps) to a list of Kubernetes clusters (with the list of target clusters defined in the List items element of the `ApplicationSet` resource).

While there are multiple types of *generators* that are available to use with the `ApplicationSet` resource, this example uses the List generator, which simply contains a fixed, literal list of clusters to target. This list of clusters will be the clusters upon which Argo CD deploys the `guestbook` application resources, once the ApplicationSet controller has processed the `ApplicationSet` resource.

Generators, such as the List generator, are responsible for generating *parameters*. Parameters are key-value pairs that are substituted into the `template:` section of the ApplicationSet resource during template rendering.
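For illustration (this is not part of any CRD), the List generator above produces one parameter set per list element; rendered as plain YAML, the generated parameters would look roughly like this:

```yaml
# Illustrative only: the parameter sets produced by the List generator above.
# Each set is applied to the template once.
- cluster: engineering-dev
  url: https://1.2.3.4
- cluster: engineering-prod
  url: https://2.4.6.8
- cluster: finance-preprod
  url: https://9.8.7.6
```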
There are multiple generators currently supported by the ApplicationSet controller:

- **List generator**: Generates parameters based on a fixed list of cluster name/URL values, as seen in the example above.
- **Cluster generator**: Rather than a literal list of clusters (as with the list generator), the cluster generator automatically generates cluster parameters based on the clusters that are defined within Argo CD.
- **Git generator**: The Git generator generates parameters based on files or folders that are contained within the Git repository defined within the generator resource.
    - Files containing JSON values will be parsed and converted into template parameters.
    - Individual directory paths within the Git repository may be used as parameter values, as well.
- **Matrix generator**: The Matrix generator combines the generated parameters of two other generators.

See the [generator section](Generators.md) for more information about individual generators, and the other generators not listed above.

## Parameter substitution into templates

Independent of which generator is used, parameters generated by a generator are substituted into `{{.parameter}}` placeholders within the `template:` section of the `ApplicationSet` resource. In this example, the List generator defines `cluster` and `url` parameters, which are then substituted into the template's `{{.cluster}}` and `{{.url}}` placeholders, respectively.

After substitution, this `guestbook` `ApplicationSet` resource is applied to the Kubernetes cluster:

1. The ApplicationSet controller processes the generator entries, producing a set of template parameters.
2. These parameters are substituted into the template, once for each set of parameters.
3. Each rendered template is converted into an Argo CD `Application` resource, which is then created (or updated) within the Argo CD namespace.
4. Finally, the Argo CD controller is notified of these `Application` resources and is responsible for handling them.

With the three different clusters defined in our example -- `engineering-dev`, `engineering-prod`, and `finance-preprod` -- this will produce three new Argo CD `Application` resources: one for each cluster.

Here is an example of one of the `Application` resources that would be created, for the `engineering-dev` cluster at `1.2.3.4`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: engineering-dev-guestbook
spec:
  source:
    repoURL: https://github.com/infra-team/cluster-deployments.git
    targetRevision: HEAD
    path: guestbook/engineering-dev
  destination:
    server: https://1.2.3.4
    namespace: guestbook
```

We can see that the generated values have been substituted into the `server` and `path` fields of the template, and the template has been rendered into a fully-fleshed out Argo CD Application.

The Applications are now also visible from within the Argo CD UI:

![List generator example in Argo CD Web UI](../../assets/applicationset/Introduction/List-Example-In-Argo-CD-Web-UI.png)

The ApplicationSet controller will ensure that any changes, updates, or deletions made to `ApplicationSet` resources are automatically applied to the corresponding `Application`(s).

For instance, if a new cluster/URL list entry was added to the List generator (as sketched below), a new Argo CD `Application` resource would be accordingly created for this new cluster. Any edits made to the `guestbook` `ApplicationSet` resource will affect all the Argo CD Applications that were instantiated by that resource, including the new Application.
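A minimal sketch of such a change (only the `generators:` block is shown; the rest of the `guestbook` ApplicationSet above is unchanged, and the `engineering-staging` cluster name and URL are hypothetical):

```yaml
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
      - cluster: engineering-prod
        url: https://2.4.6.8
      - cluster: finance-preprod
        url: https://9.8.7.6
      # Hypothetical new entry: adding this element alone causes the controller to
      # render the template once more and create a fourth Application,
      # engineering-staging-guestbook.
      - cluster: engineering-staging
        url: https://5.6.7.8
```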
While the List generator's literal list of clusters is fairly simplistic, much more sophisticated scenarios are supported by the other available generators in the ApplicationSet controller.
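As a small taste of that, here is a minimal sketch (not a complete reference; see the generator documentation for the authoritative syntax and full parameter list) that drives the same template from the Cluster generator, targeting every cluster registered in Argo CD instead of a fixed list:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  # The Cluster generator emits one parameter set per cluster registered in Argo CD,
  # exposing (among others) the cluster's name and API server URL.
  - clusters: {}
  template:
    metadata:
      name: '{{.name}}-guestbook'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/
      destination:
        server: '{{.server}}'
        namespace: guestbook
```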
# SCM Provider Generator

The SCM Provider generator uses the API of an SCMaaS provider (e.g. GitHub) to automatically discover repositories within an organization. This fits well with GitOps layout patterns that split microservices across many repositories.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      # Which protocol to clone using.
      cloneProtocol: ssh
      # See below for provider specific options.
      github:
        # ...
```

* `cloneProtocol`: Which protocol to use for the SCM URL. Default is provider-specific but ssh if possible. Not all providers necessarily support all protocols, see provider documentation below for available options.

!!! note
    Know the security implications of using SCM generators. [Only admins may create ApplicationSets](./Security.md#only-admins-may-createupdatedelete-applicationsets) to avoid leaking Secrets, and [only admins may create repos/branches](./Security.md#templated-project-field) if the `project` field of an ApplicationSet with an SCM generator is templated, to avoid granting management of out-of-bounds resources.

## GitHub

The GitHub mode uses the GitHub API to scan an organization in either github.com or GitHub Enterprise.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      github:
        # The GitHub organization to scan.
        organization: myorg
        # For GitHub Enterprise:
        api: https://git.example.com/
        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.
        allBranches: true
        # Reference to a Secret containing an access token. (optional)
        tokenRef:
          secretName: github-token
          key: token
        # (optional) use a GitHub App to access the API instead of a PAT.
        appSecretName: gh-app-repo-creds
  template:
  # ...
```

* `organization`: Required name of the GitHub organization to scan. If you have multiple organizations, use multiple generators.
* `api`: If using GitHub Enterprise, the URL to access it.
* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.
* `tokenRef`: A `Secret` name and key containing the GitHub access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.
* `appSecretName`: A `Secret` name containing a GitHub App secret in [repo-creds format][repo-creds].

[repo-creds]: ../declarative-setup.md#repository-credentials

For label filtering, the repository topics are used.

Available clone protocols are `ssh` and `https`.

## Gitlab

The GitLab mode uses the GitLab API to scan an organization in either gitlab.com or self-hosted GitLab.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      gitlab:
        # The base GitLab group to scan. You can either use the group id or the full namespaced path.
        group: "8675309"
        # For self-hosted GitLab:
        api: https://gitlab.example.com/
        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.
        allBranches: true
        # If true, recurses through subgroups. If false, it searches only in the base group. Defaults to false.
        includeSubgroups: true
        # If true and includeSubgroups is also true, include Shared Projects, which is gitlab API default.
        # If false only search Projects under the same path. Defaults to true.
        includeSharedProjects: false
        # filter projects by topic. A single topic is supported by Gitlab API. Defaults to "" (all topics).
        topic: "my-topic"
        # Reference to a Secret containing an access token. (optional)
        tokenRef:
          secretName: gitlab-token
          key: token
        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.
        insecure: false
        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)
        caRef:
          configMapName: argocd-tls-certs-cm
          key: gitlab-ca
  template:
  # ...
```

* `group`: Required name of the base GitLab group to scan. If you have multiple base groups, use multiple generators.
* `api`: If using self-hosted GitLab, the URL to access it.
* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.
* `includeSubgroups`: By default (false) the controller will only search for repos directly in the base group. If this is true, it will recurse through all the subgroups searching for repos to scan.
* `includeSharedProjects`: If true and includeSubgroups is also true, include Shared Projects, which is the gitlab API default. If false, only search Projects under the same path. In general, most users will want the behavior obtained by setting this to false. Defaults to true.
* `topic`: filter projects by topic. A single topic is supported by the Gitlab API. Defaults to "" (all topics).
* `tokenRef`: A `Secret` name and key containing the GitLab access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.
* `insecure`: If true, skip checking the validity of the SCM's TLS certificate - useful for self-signed certificates. Defaults to false.
* `caRef`: Optional `ConfigMap` name and key containing the GitLab certificates to trust - useful for self-signed TLS certificates. Possibly reference the ArgoCD CM holding the trusted certs.

For label filtering, the repository topics are used.

Available clone protocols are `ssh` and `https`.

### Self-signed TLS Certificates

As a preferable alternative to setting `insecure` to true, you can configure self-signed TLS certificates for Gitlab.

In order for a self-signed TLS certificate to be used by an ApplicationSet's SCM / PR Gitlab Generator, the certificate needs to be mounted on the applicationset-controller. The path of the mounted certificate must be explicitly set using the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH` or alternatively using the parameter `--scm-root-ca-path`. The applicationset controller will read the mounted certificate to create the Gitlab client for SCM/PR providers.

This can be achieved conveniently by setting `applicationsetcontroller.scm.root.ca.path` in the argocd-cmd-params-cm ConfigMap. Be sure to restart the ApplicationSet controller after setting this value.

## Gitea

The Gitea mode uses the Gitea API to scan organizations in your instance.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      gitea:
        # The Gitea owner to scan.
        owner: myorg
        # The Gitea instance url
        api: https://gitea.mydomain.com/
        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.
        allBranches: true
        # Reference to a Secret containing an access token. (optional)
        tokenRef:
          secretName: gitea-token
          key: token
  template:
  # ...
```

* `owner`: Required name of the Gitea organization to scan. If you have multiple organizations, use multiple generators.
* `api`: The URL of the Gitea instance you are using.
* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.
* `tokenRef`: A `Secret` name and key containing the Gitea access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.
* `insecure`: Allow for self-signed TLS certificates.

This SCM provider does not yet support label filtering.

Available clone protocols are `ssh` and `https`.

## Bitbucket Server

Use the Bitbucket Server API (1.0) to scan repos in a project. Note that Bitbucket Server is not the same as Bitbucket Cloud (API 2.0).

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      bitbucketServer:
        project: myproject
        # URL of the Bitbucket Server. Required.
        api: https://mycompany.bitbucket.org
        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.
        allBranches: true
        # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken
        # authentication is required to access private repositories
        basicAuth:
          # The username to authenticate with
          username: myuser
          # Reference to a Secret containing the password or personal access token.
          passwordRef:
            secretName: mypassword
            key: password
        # Credentials for Bearer Token (App Token) authentication. Either basicAuth or bearerToken
        # authentication is required to access private repositories
        bearerToken:
          # Reference to a Secret containing the bearer token.
          tokenRef:
            secretName: repotoken
            key: token
        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.
        insecure: true
        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)
        caRef:
          configMapName: argocd-tls-certs-cm
          key: bitbucket-ca
        # Support for filtering by labels is TODO. Bitbucket server labels are not supported for PRs, but they are for repos
  template:
  # ...
```

* `project`: Required name of the Bitbucket project.
* `api`: Required URL to access the Bitbucket REST API.
* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.

If you want to access a private repository, you must also provide credentials using either Basic auth or a Bearer token:

* `username`: The username to authenticate with. It only needs read access to the relevant repo.
* `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests.

In case of a Bitbucket App Token, go with the `bearerToken` section.

* `tokenRef`: A `Secret` name and key containing the app token to use for requests.

In case of self-signed Bitbucket Server certificates, the following options can be useful:

* `insecure`: If true, skip checking the validity of the SCM's TLS certificate - useful for self-signed certificates. Defaults to false.
* `caRef`: Optional `ConfigMap` name and key containing the BitBucket server certificates to trust - useful for self-signed TLS certificates. Possibly reference the ArgoCD CM holding the trusted certs.

Available clone protocols are `ssh` and `https`.

## Azure DevOps

Uses the Azure DevOps API to look up eligible repositories based on a team project within an Azure DevOps organization. The default Azure DevOps URL is `https://dev.azure.com`, but this can be overridden with the field `azureDevOps.api`.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      azureDevOps:
        # The Azure DevOps organization.
        organization: myorg
        # URL to Azure DevOps. Optional. Defaults to https://dev.azure.com.
        api: https://dev.azure.com
        # If true, scan every branch of eligible repositories. If false, check only the default branch of the eligible repositories. Defaults to false.
        allBranches: true
        # The team project within the specified Azure DevOps organization.
        teamProject: myProject
        # Reference to a Secret containing the Azure DevOps Personal Access Token (PAT) used for accessing Azure DevOps.
        accessTokenRef:
          secretName: azure-devops-scm
          key: accesstoken
  template:
  # ...
```

* `organization`: Required. Name of the Azure DevOps organization.
* `teamProject`: Required. The name of the team project within the specified `organization`.
* `accessTokenRef`: Required. A `Secret` name and key containing the Azure DevOps Personal Access Token (PAT) to use for requests.
* `api`: Optional. URL to Azure DevOps. If not set, `https://dev.azure.com` is used.
* `allBranches`: Optional, default `false`. If `true`, scans every branch of eligible repositories. If `false`, checks only the default branch of the eligible repositories.

## Bitbucket Cloud

The Bitbucket mode uses the Bitbucket API V2 to scan a workspace in bitbucket.org.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      bitbucket:
        # The workspace id (slug).
        owner: "example-owner"
        # The user to use for basic authentication with an app password.
        user: "example-user"
        # If true, scan every branch of every repository. If false, scan only the main branch. Defaults to false.
        allBranches: true
        # Reference to a Secret containing an app password.
        appPasswordRef:
          secretName: appPassword
          key: password
  template:
  # ...
```

* `owner`: The workspace ID (slug) to use when looking up repositories.
* `user`: The user to use for authentication to the Bitbucket API V2 at bitbucket.org.
* `allBranches`: By default (false) the template will only be evaluated for the main branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.
* `appPasswordRef`: A `Secret` name and key containing the bitbucket app password to use for requests.

This SCM provider does not yet support label filtering.

Available clone protocols are `ssh` and `https`.

## AWS CodeCommit (Alpha)

Uses AWS ResourceGroupsTagging and AWS CodeCommit APIs to scan repos across AWS accounts and regions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      awsCodeCommit:
        # AWS region to scan repos.
        # default to the environmental region from ApplicationSet controller.
        region: us-east-1
        # AWS role to assume to scan repos.
        # default to the environmental role from ApplicationSet controller.
        role: arn:aws:iam::111111111111:role/argocd-application-set-discovery
        # If true, scan every branch of every repository. If false, scan only the main branch. Defaults to false.
        allBranches: true
        # AWS resource tags to filter repos with.
        # see https://docs.aws.amazon.com/resourcegroupstagging/latest/APIReference/API_GetResources.html#resourcegrouptagging-GetResources-request-TagFilters for details
        # default to no tagFilters, to include all repos in the region.
        tagFilters:
          - key: organization
            value: platform-engineering
          - key: argo-ready
  template:
  # ...
```

* `region`: (Optional) AWS region to scan repos. By default, use ApplicationSet controller's current region.
* `role`: (Optional) AWS role to assume to scan repos. By default, use ApplicationSet controller's current role.
* `allBranches`: (Optional) If `true`, scans every branch of eligible repositories. If `false`, checks only the default branch of the eligible repositories. Default `false`.
* `tagFilters`: (Optional) A list of tagFilters to filter AWS CodeCommit repos with. See [AWS ResourceGroupsTagging API](https://docs.aws.amazon.com/resourcegroupstagging/latest/APIReference/API_GetResources.html#resourcegrouptagging-GetResources-request-TagFilters) for details. By default, no filter is included.

This SCM provider does not support the following features:

* label filtering
* `sha`, `short_sha` and `short_sha_7` template parameters

Available clone protocols are `ssh`, `https` and `https-fips`.

### AWS IAM Permission Considerations

In order to call AWS APIs to discover AWS CodeCommit repos, the ApplicationSet controller must be configured with valid environmental AWS config, like current AWS region and AWS credentials. AWS config can be provided via all standard options, like Instance Metadata Service (IMDS), config file, environment variables, or IAM roles for service accounts (IRSA).

Depending on whether `role` is provided in the `awsCodeCommit` property, the AWS IAM permission requirements differ.

#### Discover AWS CodeCommit Repositories in the same AWS Account as ApplicationSet Controller

Without specifying `role`, the ApplicationSet controller will use its own AWS identity to scan AWS CodeCommit repos. This is suitable when you have a simple setup in which all AWS CodeCommit repos reside in the same AWS account as your Argo CD.

As the ApplicationSet controller AWS identity is used directly for repo discovery, it must be granted the AWS permissions below.

* `tag:GetResources`
* `codecommit:ListRepositories`
* `codecommit:GetRepository`
* `codecommit:GetFolder`
* `codecommit:ListBranches`

#### Discover AWS CodeCommit Repositories across AWS Accounts and Regions

By specifying `role`, the ApplicationSet controller will first assume the `role`, and use it for repo discovery. This enables more complicated use cases, such as discovering repos from different AWS accounts and regions.

The ApplicationSet controller AWS identity should be granted permission to assume the target AWS roles.

* `sts:AssumeRole`

All AWS roles must have the repo-discovery-related permissions.

* `tag:GetResources`
* `codecommit:ListRepositories`
* `codecommit:GetRepository`
* `codecommit:GetFolder`
* `codecommit:ListBranches`

## Filters

Filters allow selecting which repositories to generate for. Each filter can declare one or more conditions, all of which must pass. If multiple filters are present, any can match for a repository to be included. If no filters are specified, all repositories will be processed.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  generators:
  - scmProvider:
      filters:
      # Include any repository starting with "myapp" AND including a Kustomize config AND labeled with "deploy-ok" ...
      - repositoryMatch: ^myapp
        pathsExist: [kubernetes/kustomization.yaml]
        labelMatch: deploy-ok
      # ... OR include any repository starting with "otherapp" AND a Helm folder and doesn't have file disabledrepo.txt.
      - repositoryMatch: ^otherapp
        pathsExist: [helm]
        pathsDoNotExist: [disabledrepo.txt]
  template:
  # ...
```

* `repositoryMatch`: A regexp matched against the repository name.
* `pathsExist`: An array of paths within the repository that must exist. Can be a file or directory.
* `pathsDoNotExist`: An array of paths within the repository that must not exist. Can be a file or directory.
* `labelMatch`: A regexp matched against repository labels. If any label matches, the repository is included.
* `branchMatch`: A regexp matched against branch names.

## Template

As with all generators, several parameters are generated for use within the `ApplicationSet` resource template.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - scmProvider:
    # ...
  template:
    metadata:
      name: '{{ .repository }}'
    spec:
      source:
        repoURL: '{{ .url }}'
        targetRevision: '{{ .branch }}'
        path: kubernetes/
      project: default
      destination:
        server: https://kubernetes.default.svc
        namespace: default
```

* `organization`: The name of the organization the repository is in.
* `repository`: The name of the repository.
* `url`: The clone URL for the repository.
* `branch`: The default branch of the repository.
* `sha`: The Git commit SHA for the branch.
* `short_sha`: The abbreviated Git commit SHA for the branch (8 chars or the length of the `sha` if it's shorter).
* `short_sha_7`: The abbreviated Git commit SHA for the branch (7 chars or the length of the `sha` if it's shorter).
* `labels`: A comma-separated list of repository labels in case of Gitea, repository topics in case of Gitlab and Github. Not supported by Bitbucket Cloud, Bitbucket Server, or Azure DevOps.
* `branchNormalized`: The value of `branch` normalized to contain only lowercase alphanumeric characters, '-' or '.'.

## Pass additional key-value pairs via `values` field

You may pass additional, arbitrary string key-value pairs via the `values` field of any SCM generator. Values added via the `values` field are added as `values.(field)`.

In this example, a `name` parameter value is passed. It is interpolated from `organization` and `repository` to generate a different template name.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - scmProvider:
      bitbucketServer:
        project: myproject
        api: https://mycompany.bitbucket.org
        allBranches: true
        basicAuth:
          username: myuser
          passwordRef:
            secretName: mypassword
            key: password
      values:
        name: "{{ .organization }}-{{ .repository }}"
  template:
    metadata:
      name: '{{ .values.name }}'
    spec:
      source:
        repoURL: '{{ .url }}'
        targetRevision: '{{ .branch }}'
        path: kubernetes/
      project: default
      destination:
        server: https://kubernetes.default.svc
        namespace: default
```

!!! note
    The `values.` prefix is always prepended to values provided via the `generators.scmProvider.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.

In `values` we can also interpolate all fields set by the SCM generator as mentioned above.
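To illustrate that last point, here is a hedged sketch (reusing the GitHub provider from earlier; the `revision` value key is made up for this example) in which `values` entries interpolate the generator-provided `repository`, `organization`, and `branch` fields:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - scmProvider:
      github:
        organization: myorg
      values:
        # Both values interpolate fields set by the SCM generator itself.
        name: '{{ .organization }}-{{ .repository }}'
        revision: '{{ .branch }}'
  template:
    metadata:
      name: '{{ .values.name }}'
    spec:
      project: default
      source:
        repoURL: '{{ .url }}'
        targetRevision: '{{ .values.revision }}'
        path: kubernetes/
      destination:
        server: https://kubernetes.default.svc
        namespace: default
```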
# Progressive Syncs !!! warning "Alpha Feature (Since v2.6.0)" This is an experimental, [alpha-quality](https://github.com/argoproj/argoproj/blob/main/community/feature-status.md#alpha) feature that allows you to control the order in which the ApplicationSet controller will create or update the Applications owned by an ApplicationSet resource. It may be removed in future releases or modified in backwards-incompatible ways. ## Use Cases The Progressive Syncs feature set is intended to be light and flexible. The feature only interacts with the health of managed Applications. It is not intended to support direct integrations with other Rollout controllers (such as the native ReplicaSet controller or Argo Rollouts). * Progressive Syncs watch for the managed Application resources to become "Healthy" before proceeding to the next stage. * Deployments, DaemonSets, StatefulSets, and [Argo Rollouts](https://argoproj.github.io/argo-rollouts/) are all supported, because the Application enters a "Progressing" state while pods are being rolled out. In fact, any resource with a health check that can report a "Progressing" status is supported. * [Argo CD Resource Hooks](../../user-guide/resource_hooks.md) are supported. We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change. ## Enabling Progressive Syncs As an experimental feature, progressive syncs must be explicitly enabled, in one of these ways. 1. Pass `--enable-progressive-syncs` to the ApplicationSet controller args. 1. Set `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=true` in the ApplicationSet controller environment variables. 1. Set `applicationsetcontroller.enable.progressive.syncs: true` in the Argo CD `argocd-cmd-params-cm` ConfigMap. ## Strategies * AllAtOnce (default) * RollingSync ### AllAtOnce This default Application update behavior is unchanged from the original ApplicationSet implementation. All Applications managed by the ApplicationSet resource are updated simultaneously when the ApplicationSet is updated. ### RollingSync This update strategy allows you to group Applications by labels present on the generated Application resources. When the ApplicationSet changes, the changes will be applied to each group of Application resources sequentially. * Application groups are selected using their labels and `matchExpressions`. * All `matchExpressions` must be true for an Application to be selected (multiple expressions match with AND behavior). * The `In` and `NotIn` operators must match at least one value to be considered true (OR behavior). * The `NotIn` operator has priority in the event that both a `NotIn` and `In` operator produce a match. * All Applications in each group must become Healthy before the ApplicationSet controller will proceed to update the next group of Applications. * The number of simultaneous Application updates in a group will not exceed its `maxUpdate` parameter (default is 100%, unbounded). * RollingSync will capture external changes outside the ApplicationSet resource, since it relies on watching the OutOfSync status of the managed Applications. * RollingSync will force all generated Applications to have autosync disabled. Warnings are printed in the applicationset-controller logs for any Application specs with an automated syncPolicy enabled. 
* Sync operations are triggered the same way as if they were triggered by the UI or CLI (by directly setting the `operation` status field on the Application resource). This means that a RollingSync will respect sync windows just as if a user had clicked the "Sync" button in the Argo UI.
* When a sync is triggered, the sync is performed with the same syncPolicy configured for the Application. For example, this preserves the Application's retry settings.
* If an Application is considered "Pending" for `applicationsetcontroller.default.application.progressing.timeout` seconds (default 300), the Application is automatically moved to Healthy status.
* If an Application is not selected in any step, it will be excluded from the rolling sync and needs to be manually synced through the CLI or UI.

#### Example

The following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.

Once a change is pushed, the following will happen in order.

* All `env-dev` Applications will be updated simultaneously.
* The rollout will wait for all `env-qa` Applications to be manually synced via the `argocd` CLI or by clicking the Sync button in the UI.
* 10% of all `env-prod` Applications will be updated at a time until all `env-prod` Applications have been updated.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
        env: env-dev
      - cluster: engineering-qa
        url: https://2.4.6.8
        env: env-qa
      - cluster: engineering-prod
        url: https://9.8.7.6/
        env: env-prod
  strategy:
    type: RollingSync
    rollingSync:
      steps:
        - matchExpressions:
            - key: envLabel
              operator: In
              values:
                - env-dev
          #maxUpdate: 100%  # if undefined, all applications matched are updated together (default is 100%)
        - matchExpressions:
            - key: envLabel
              operator: In
              values:
                - env-qa
          maxUpdate: 0      # if 0, no matched applications will be updated
        - matchExpressions:
            - key: envLabel
              operator: In
              values:
                - env-prod
          maxUpdate: 10%    # maxUpdate supports both integer and percentage string values (rounds down, but floored at 1 Application for >0%)
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  template:
    metadata:
      name: '{{.cluster}}-guestbook'
      labels:
        envLabel: '{{.env}}'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/{{.cluster}}
      destination:
        server: '{{.url}}'
        namespace: guestbook
```
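For reference, the ConfigMap option from the "Enabling Progressive Syncs" section above can be expressed declaratively. The following is a minimal sketch, assuming the default `argocd` installation namespace; only the key shown is taken from this documentation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd   # assumes the default installation namespace
data:
  # Enables the alpha Progressive Syncs feature on the ApplicationSet controller
  applicationsetcontroller.enable.progressive.syncs: "true"
```

After updating the ConfigMap, the ApplicationSet controller typically needs to be restarted to pick up the new setting.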
# Use cases supported by the ApplicationSet controller

With the concept of generators, the ApplicationSet controller provides a powerful set of tools to automate the templating and modification of Argo CD Applications. Generators produce template parameter data from a variety of sources, including Argo CD clusters and Git repositories, supporting and enabling new use cases.

While these tools may be utilized for whichever purpose is desired, here are some of the specific use cases that the ApplicationSet controller was designed to support.

## Use case: cluster add-ons

An initial design focus of the ApplicationSet controller was to allow an infrastructure team's Kubernetes cluster administrators the ability to automatically create a large, diverse set of Argo CD Applications, across a significant number of clusters, and manage those Applications as a single unit. One example of why this is needed is the *cluster add-on use case*.

In the *cluster add-on use case*, an administrator is responsible for provisioning cluster add-ons to one or more Kubernetes clusters: cluster add-ons are operators such as the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator), or controllers such as the [argo-workflows controller](https://argoproj.github.io/argo-workflows/) (part of the [Argo ecosystem](https://argoproj.github.io/)).

Typically these add-ons are required by the applications of development teams (as tenants of a multi-tenant cluster, for instance, they may wish to provide metrics data to Prometheus or orchestrate workflows via Argo Workflows).

Since installing these add-ons requires cluster-level permissions not held by individual development teams, installation is the responsibility of the infrastructure/ops team of an organization, and within a large organization this team might be responsible for tens, hundreds, or thousands of Kubernetes clusters (with new clusters being added/modified/removed on a regular basis).

The need to scale across a large number of clusters, and automatically respond to the lifecycle of new clusters, necessarily mandates some form of automation. A further requirement would be allowing the targeting of add-ons to a subset of clusters using specific criteria (e.g. staging vs production).

![Cluster add-on diagram](../../assets/applicationset/Use-Cases/Cluster-Add-Ons.png)

In this example, the infrastructure team maintains a Git repository containing application manifests for the Argo Workflows controller and the Prometheus operator. The infrastructure team would like to deploy both of these add-ons to a large number of clusters using Argo CD, and likewise wishes to easily manage the creation/deletion of new clusters.

In this use case, we may use either the List, Cluster, or Git generators of the ApplicationSet controller to provide the required behaviour:

- *List generator*: Administrators maintain two `ApplicationSet` resources, one for each application (Workflows and Prometheus), and include the list of clusters they wish to target within the List generator elements of each.
    - With this generator, adding/removing clusters requires manually updating the `ApplicationSet` resource's list elements.
- *Cluster generator*: Administrators maintain two `ApplicationSet` resources, one for each application (Workflows and Prometheus), and ensure that all new clusters are defined within Argo CD.
    - Since the Cluster generator automatically detects and targets the clusters defined within Argo CD, [adding/removing a cluster from Argo CD](../../declarative-setup/#clusters) will automatically cause Argo CD Application resources (for each application) to be created by the ApplicationSet controller.
- *Git generator*: The Git generator is the most flexible/powerful of the generators, and thus there are a number of different ways to tackle this use case. Here are a couple:
    - Using the Git generator `files` field: A list of clusters is kept as a JSON file within a Git repository. Updates to the JSON file, through Git commits, cause new clusters to be added/removed.
    - Using the Git generator `directories` field: For each target cluster, a corresponding directory of that name exists in a Git repository. Adding/modifying a directory, through Git commits, would trigger an update for the cluster that shares the directory name.

See the [generators section](Generators.md) for details on each of the generators.

## Use case: monorepos

In the *monorepo use case*, Kubernetes cluster administrators manage the entire state of a single Kubernetes cluster from a single Git repository. Manifest changes merged into the Git repository should automatically deploy to the cluster.

![Monorepo diagram](../../assets/applicationset/Use-Cases/Monorepos.png)

In this example, the infrastructure team maintains a Git repository containing application manifests for an Argo Workflows controller and a Prometheus operator. Independent development teams have also added additional services they wish to deploy to the cluster.

Changes made to the Git repository -- for example, updating the version of a deployed artifact -- should automatically cause that update to be applied to the corresponding Kubernetes cluster by Argo CD.

The Git generator may be used to support this use case:

- The Git generator `directories` field may be used to specify particular subdirectories (using wildcards) containing the individual applications to deploy.
- The Git generator `files` field may reference Git repository files containing JSON metadata, with that metadata describing the individual applications to deploy.
- See the Git generator documentation for more details.

## Use case: self-service of Argo CD Applications on multitenant clusters

The *self-service use case* seeks to allow developers (as the end users of a multitenant Kubernetes cluster) greater flexibility to:

- Deploy multiple applications to a single cluster, in an automated fashion, using Argo CD
- Deploy to multiple clusters, in an automated fashion, using Argo CD
- But, in both cases, to empower those developers to be able to do so without needing to involve a cluster administrator (to create the necessary Argo CD Applications/AppProject resources on their behalf)

One potential solution to this use case is for development teams to define Argo CD `Application` resources within a Git repository (containing the manifests they wish to deploy), in an [app-of-apps pattern](../../cluster-bootstrapping/#app-of-apps-pattern), and for cluster administrators to then review/accept changes to this repository via merge requests.

While this might sound like an effective solution, a major disadvantage is that a high degree of trust/scrutiny is needed to accept commits containing Argo CD `Application` spec changes. This is because there are many sensitive fields contained within the `Application` spec, including `project`, `cluster`, and `namespace`.
An inadvertent merge might allow applications to access namespaces/clusters where they did not belong.

Thus in the self-service use case, administrators desire to only allow some fields of the `Application` spec to be controlled by developers (e.g. the Git source repository), while other fields (e.g. the target namespace or target cluster) should remain restricted.

Fortunately, the ApplicationSet controller presents an alternative solution to this use case: cluster administrators may safely create an `ApplicationSet` resource containing a Git generator that restricts deployment of application resources to fixed values with the `template` field, while allowing customization of 'safe' fields by developers, at will.

The `config.json` files contain information describing the app.

```json
{
  (...)
  "app": {
    "source": "https://github.com/argoproj/argo-cd",
    "revision": "HEAD",
    "path": "applicationset/examples/git-generator-files-discovery/apps/guestbook"
  }
  (...)
}
```

```yaml
kind: ApplicationSet
# (...)
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/argoproj/argo-cd.git
      files:
      - path: "apps/**/config.json"
  template:
    spec:
      project: dev-team-one # project is restricted
      source:
        # developers may customize app details using JSON files from above repo URL
        repoURL: '{{.app.source}}'
        targetRevision: '{{.app.revision}}'
        path: '{{.app.path}}'
      destination:
        name: production-cluster # cluster is restricted
        namespace: dev-team-one # namespace is restricted
```

See the [Git generator](Generators-Git.md) for more details.
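To make the *cluster add-on use case* described earlier on this page more concrete, here is a minimal sketch of one of the two `ApplicationSet` resources (the Prometheus operator one) using the Cluster generator. The ApplicationSet name, repository URL, path, project, and destination namespace below are hypothetical placeholders, not values taken from this documentation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: prometheus-operator-addon        # hypothetical name
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - clusters: {}                          # targets every cluster registered in Argo CD
  template:
    metadata:
      name: '{{.name}}-prometheus-operator'
    spec:
      project: default                                                # hypothetical project
      source:
        repoURL: https://github.com/example-org/cluster-addons.git   # hypothetical repository
        targetRevision: HEAD
        path: addons/prometheus-operator                             # hypothetical path
      destination:
        server: '{{.server}}'
        namespace: monitoring                                        # hypothetical namespace
```

An analogous `ApplicationSet` would be created for the Argo Workflows controller. Adding or removing a cluster in Argo CD would then automatically create or remove the corresponding Applications.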
# `argocd-server` Command Reference ## argocd-server Run the ArgoCD API server ### Synopsis The API server is a gRPC/REST server which exposes the API consumed by the Web UI, CLI, and CI/CD systems. This command runs API server in the foreground. It can be configured by following options. ``` argocd-server [flags] ``` ### Examples ``` # Start the Argo CD API server with default settings $ argocd-server # Start the Argo CD API server on a custom port and enable tracing $ argocd-server --port 8888 --otlp-address localhost:4317 ``` ### Options ``` --address string Listen on given address (default "0.0.0.0") --api-content-types string Semicolon separated list of allowed content types for non GET api requests. Any content type is allowed if empty. (default "application/json") --app-state-cache-expiration duration Cache expiration for app state (default 1h0m0s) --application-namespaces strings List of additional namespaces where application resources can be managed in --appset-allowed-scm-providers strings The list of allowed custom SCM provider API URLs. This restriction does not apply to SCM or PR generators which do not accept a custom API URL. (Default: Empty = all) --appset-enable-new-git-file-globbing Enable new globbing in Git files generator. --appset-enable-scm-providers Enable retrieving information from SCM providers, used by the SCM and PR generators (Default: true) (default true) --appset-scm-root-ca-path string Provide Root CA Path for self-signed TLS Certificates --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --basehref string Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from / (default "/") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --connection-status-cache-expiration duration Cache expiration for cluster/repo connection status (default 1h0m0s) --content-security-policy value Set Content-Security-Policy header in HTTP responses to value. To disable, set to "". (default "frame-ancestors 'self';") --context string The name of the kubeconfig context to use --default-cache-expiration duration Cache expiration default (default 24h0m0s) --dex-server string Dex server address (default "argocd-dex-server:5556") --dex-server-plaintext Use a plaintext client (non-TLS) to connect to dex server --dex-server-strict-tls Perform strict validation of TLS certificates when connecting to dex server --disable-auth Disable client authentication --disable-compression If true, opt-out of response compression for all requests to the server --enable-gzip Enable GZIP compression (default true) --enable-k8s-event none Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all]) --enable-proxy-extension Enable Proxy Extension feature --gloglevel int Set the glog logging level -h, --help help for argocd-server --insecure Run server without TLS --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure --kubeconfig string Path to a kube config. Only required if out-of-cluster --logformat string Set the logging format. One of: text|json (default "text") --login-attempts-expiration duration Cache expiration for failed login attempts (default 24h0m0s) --loglevel string Set the logging level. One of: debug|info|warn|error (default "info") --metrics-address string Listen for metrics on given address (default "0.0.0.0") --metrics-port int Start metrics on given port (default 8083) -n, --namespace string If present, the namespace scope for this CLI request --oidc-cache-expiration duration Cache expiration for OIDC state (default 3m0s) --otlp-address string OpenTelemetry collector address to send traces to --otlp-attrs strings List of OpenTelemetry collector extra attrs when send traces, each attribute is separated by a colon(e.g. key:value) --otlp-headers stringToString List of OpenTelemetry collector extra headers sent with traces, headers are comma-separated key-value pairs(e.g. key1=value1,key2=value2) (default []) --otlp-insecure OpenTelemetry collector insecure mode (default true) --password string Password for basic authentication to the API server --port int Listen on given port (default 8080) --proxy-url string If provided, this URL will be used to connect via proxy --redis string Redis server hostname and port (e.g. argocd-redis:6379). --redis-ca-certificate string Path to Redis server CA certificate (e.g. /etc/certs/redis/ca.crt). If not specified, system trusted CAs will be used for server certificate validation. --redis-client-certificate string Path to Redis client certificate (e.g. /etc/certs/redis/client.crt). --redis-client-key string Path to Redis client key (e.g. /etc/certs/redis/client.crt). --redis-compress string Enable compression for data sent to Redis with the required compression algorithm. (possible values: gzip, none) (default "gzip") --redis-insecure-skip-tls-verify Skip Redis server certificate validation. --redis-use-tls Use TLS when connecting to Redis. --redisdb int Redis database. --repo-cache-expiration duration Cache expiration for repo state, incl. app lists, app details, manifest generation, revision meta-data (default 24h0m0s) --repo-server string Repo server address (default "argocd-repo-server:8081") --repo-server-default-cache-expiration duration Cache expiration default (default 24h0m0s) --repo-server-plaintext Use a plaintext client (non-TLS) to connect to repository server --repo-server-redis string Redis server hostname and port (e.g. argocd-redis:6379). --repo-server-redis-ca-certificate string Path to Redis server CA certificate (e.g. /etc/certs/redis/ca.crt). If not specified, system trusted CAs will be used for server certificate validation. --repo-server-redis-client-certificate string Path to Redis client certificate (e.g. /etc/certs/redis/client.crt). --repo-server-redis-client-key string Path to Redis client key (e.g. /etc/certs/redis/client.crt). --repo-server-redis-compress string Enable compression for data sent to Redis with the required compression algorithm. (possible values: gzip, none) (default "gzip") --repo-server-redis-insecure-skip-tls-verify Skip Redis server certificate validation. --repo-server-redis-use-tls Use TLS when connecting to Redis. --repo-server-redisdb int Redis database. --repo-server-sentinel stringArray Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379). --repo-server-sentinelmaster string Redis sentinel master group name. 
(default "master") --repo-server-strict-tls Perform strict validation of TLS certificates when connecting to repo server --repo-server-timeout-seconds int Repo server RPC call timeout seconds. (default 60) --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0") --revision-cache-expiration duration Cache expiration for cached revision (default 3m0s) --revision-cache-lock-timeout duration Cache TTL for locks to prevent duplicate requests on revisions, set to 0 to disable (default 10s) --rootpath string Used if Argo CD is running behind reverse proxy under subpath different from / --sentinel stringArray Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379). --sentinelmaster string Redis sentinel master group name. (default "master") --server string The address and port of the Kubernetes API server --staticassets string Directory path that contains additional static assets (default "/shared/app") --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --tlsciphers string The list of acceptable ciphers to be used when establishing TLS connections. Use 'list' to list available ciphers. (default "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384") --tlsmaxversion string The maximum SSL/TLS version that is acceptable (one of: 1.0|1.1|1.2|1.3) (default "1.3") --tlsminversion string The minimum SSL/TLS version that is acceptable (one of: 1.0|1.1|1.2|1.3) (default "1.2") --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server --webhook-parallelism-limit int Number of webhook requests processed concurrently (default 50) --x-frame-options value Set X-Frame-Options header in HTTP responses to value. To disable, set to "". (default "sameorigin") ``` ### SEE ALSO * [argocd-server version](argocd-server_version.md) - Print version information
# v2.4 to 2.5

## Known Issues

### Broken `project` filter before 2.5.15

Argo CD 2.4.0 introduced a breaking API change, renaming the `project` filter to `projects`.

#### Impact to API clients

This issue applies to API clients which communicate with the Argo CD API server via its REST API. If the client uses the `project` field to filter projects, the filter will not be applied. **The failing project filter could have detrimental consequences if, for example, you rely on it to list Applications to be deleted.**

#### Impact to CLI clients

CLI clients older than v2.4.0 rely on client-side filtering and are not impacted by this bug.

#### How to fix the problem

Upgrade to Argo CD >=2.4.27, >=2.5.15, or >=2.6.6. These versions of Argo CD accept both `project` and `projects` as valid filters.

### Broken matrix-nested git files generator in 2.5.14

Argo CD 2.5.14 introduced a bug in the matrix-nested git files generator. The bug only applies when the git files generator is the second generator nested under a matrix. For example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
    - matrix:
        generators:
          - clusters: {}
          - git:
              repoURL: https://git.example.com/org/repo.git
              revision: HEAD
              files:
                - path: "defaults/*.yaml"
  template:
    # ...
```

The nested git files generator will produce no parameters, causing the matrix generator to also produce no parameters. This will cause the ApplicationSet to produce no Applications. If the ApplicationSet controller is [configured with the ability to delete applications](https://argo-cd.readthedocs.io/en/latest/operator-manual/applicationset/Controlling-Resource-Modification/), it will delete all Applications which were previously created by the ApplicationSet.

To avoid this issue, upgrade directly to >=2.5.15 or >=2.6.6.

## Configure RBAC to account for new `applicationsets` resource

2.5 introduces a new `applicationsets` [RBAC resource](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#rbac-resources-and-actions).

When you upgrade to 2.5, RBAC policies with `*` in the resource field and `create`, `update`, `delete`, `get`, or `*` in the action field will automatically grant the `applicationsets` privilege.

To avoid granting the new privilege, replace the existing policy with a list of new policies explicitly listing the old resources.

### Example

Old:

```csv
p, role:org-admin, *, create, *, allow
```

New:

```csv
p, role:org-admin, clusters, create, *, allow
p, role:org-admin, projects, create, *, allow
p, role:org-admin, applications, create, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, certificates, create, *, allow
p, role:org-admin, accounts, create, *, allow
p, role:org-admin, gpgkeys, create, *, allow
p, role:org-admin, exec, create, *, allow
```

(Note that `applicationsets` is missing from the list, to preserve pre-2.5 permissions.)

## argocd-cm plugins (CMPs) are deprecated

Starting with Argo CD v2.5, installing config management plugins (CMPs) via the `argocd-cm` ConfigMap is deprecated. Support will be removed in v2.7.

You can continue to use the plugins by [installing them as sidecars](https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/) on the repo-server Deployment.

Sidecar plugins are significantly more secure. Plugin code runs in its own container with an almost completely-isolated filesystem. If an attacker compromises a plugin, the attacker's ability to cause harm is significantly mitigated.
To determine whether argocd-cm plugins are still in use, scan your argocd-repo-server and argocd-server logs for the following message: > argocd-cm plugins are deprecated, and support will be removed in v2.6. Upgrade your plugin to be installed via sidecar. https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/ **NOTE:** removal of argocd-cm plugin support was delayed to v2.7. Update your logs scan to use `v2.7` instead of `v2.6`. If you run `argocd app list` as admin, the list of Applications using deprecated plugins will be logged as a warning. ## Dex server TLS configuration In order to secure the communications between the dex server and the Argo CD API server, TLS is now enabled by default on the dex server. By default, without configuration, the dex server will generate a self-signed certificate upon startup. However, we recommend that users configure their own TLS certificate using the `argocd-dex-server-tls` secret. Please refer to the [TLS configuration guide](../tls.md#configuring-tls-to-argocd-dex-server) for more information. ## Invalid users.session.duration values now fall back to 24h Before v2.5, an invalid `users.session.duration` value in argocd-cm would 1) log a warning and 2) result in user sessions having no duration limit. Starting with v2.5, invalid duration values will fall back to the default value of 24 hours with a warning. ## Out-of-bounds symlinks now blocked at fetch There have been several path traversal and identification vulnerabilities disclosed in the past related to symlinks. To help prevent any further vulnerabilities, we now scan all repositories and Helm charts for **out of bounds symlinks** at the time they are fetched and block further processing if they are found. An out-of-bounds symlink is defined as any symlink that leaves the root of the Git repository or Helm chart, even if the final target is within the root. If an out of bounds symlink is found, a warning will be printed to the repo server console and an error will be shown in the UI or CLI. Below is an example directory structure showing valid symlinks and invalid symlinks. ``` chart ├── Chart.yaml ├── values │ └── values.yaml ├── bad-link.yaml -> ../out-of-bounds.yaml # Blocked ├── bad-link-2.yaml -> ../chart/values/values.yaml # Blocked because it leaves the root ├── bad-link-3.yaml -> /absolute/link.yaml # Blocked └── good-link.yaml -> values/values.yaml # OK ``` If you rely on out of bounds symlinks, this check can be disabled one of three ways: 1. The `--allow-oob-symlinks` argument on the repo server. 2. The `reposerver.allow.oob.symlinks` key if you are using `argocd-cmd-params-cm` 3. Directly setting `ARGOCD_REPO_SERVER_ALLOW_OOB_SYMLINKS` environment variable on the repo server. It is **strongly recommended** to leave this check enabled. Disabling the check will not allow _all_ out-of-bounds symlinks. Those will still be blocked for things like values files in Helm charts, but symlinks which are not explicitly blocked by other checks will be allowed. ## Deprecated client-side manifest diffs When using `argocd app diff --local`, code from the repo server is run on the user's machine in order to locally generate manifests for comparing against the live manifests of an app. However, this requires that the necessary tools (Helm, Kustomize, etc) are installed with the correct versions. Even worse, it does not support Config Management Plugins (CMPs) whatsoever. 
In order to support CMPs and reduce local requirements, we have implemented *server-side generation* of local manifests via the `--server-side-generate` argument. For example, `argocd app diff --local repoDir --server-side-generate` will upload the contents of `repoDir` to the repo server and run your manifest generation pipeline against it, the same as it would for a Git repo.

In v2.7, the `--server-side-generate` argument will become the default, and client-side generation will be supported as an alternative.

!!! warning
    The semantics of *where* Argo will start generating manifests within a repo have changed between client-side and server-side generation. With client-side generation, the application's path (`spec.source.path`) was ignored, and the value of `--local-repo-root` was effectively used (by default `/` relative to `--local`). For example, given an application that has an application path of `/manifests`, you would have had to run `argocd app diff --local yourRepo/manifests`. This behavior did not match the repo server's process of downloading the full repo/chart and then beginning generation in the path specified in the application manifest.

    When switching to server-side generation, `--local` should point to the root of your repo *without* including your `spec.source.path`. This is especially important to keep in mind when `--server-side-generate` becomes the default in v2.7. Existing scripts utilizing `diff --local` may break in v2.7 if `spec.source.path` was not `/`.

## Upgraded Kustomize Version

The bundled Kustomize version has been upgraded from 4.4.1 to 4.5.7.

## Upgraded Helm Version

Note that the bundled Helm version has been upgraded from 3.9.0 to 3.10.1.

## Upgraded HAProxy version

The HAProxy version in the HA manifests has been upgraded from 2.0.25 to 2.6.2. To read about the changes/improvements, see the HAProxy major release announcements: [2.1.0](https://www.mail-archive.com/[email protected]/msg35491.html), [2.2.0](https://www.mail-archive.com/[email protected]/msg37852.html), [2.3.0](https://www.mail-archive.com/[email protected]/msg38812.html), [2.4.0](https://www.mail-archive.com/[email protected]/msg40499.html), [2.5.0](https://www.mail-archive.com/[email protected]/msg41508.html), and [2.6.0](https://www.mail-archive.com/[email protected]/msg42371.html).

## Logs RBAC enforcement will remain opt-in

This note is just for clarity. No action is required.

We [expected](../upgrading/2.3-2.4.md#enable-logs-rbac-enforcement) to enable logs RBAC enforcement by default in 2.5. We have decided not to do that in the 2.x series due to disruption for users of [Project Roles](../../user-guide/projects.md#project-roles).

## `argocd app create` for old CLI versions fails with API version >=2.5.16

Starting with Argo CD 2.5.16, the API returns `PermissionDenied` instead of `NotFound` for Application `GET` requests if the Application does not exist.

The Argo CD CLI, starting with version 2.5.0-rc1 and before versions 2.5.16 and 2.6.7, does a `GET` request before the `POST` request in `argocd app create`. The command does not gracefully handle the `PermissionDenied` response and will therefore fail to create/update the Application.

To solve the issue, upgrade the CLI to at least 2.5.16 or 2.6.7. CLIs older than 2.5.0-rc1 are unaffected.

## Golang upgrade in 2.5.20

In 2.5.20, we upgraded the Golang version used to build Argo CD from 1.18 to 1.19. If you use Argo CD as a library, you may need to upgrade your Go version.
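If, after weighing the recommendation in the "Out-of-bounds symlinks now blocked at fetch" section above, you still need to disable the check, the second of the three listed options (the `reposerver.allow.oob.symlinks` key in `argocd-cmd-params-cm`) can be expressed as the following minimal sketch, assuming the default `argocd` installation namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd   # assumes the default installation namespace
data:
  # Disables the out-of-bounds symlink check on the repo server.
  # Strongly discouraged; prefer fixing the offending symlinks instead.
  reposerver.allow.oob.symlinks: "true"
```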
argocd
v2 4 to 2 5 Known Issues Broken project filter before 2 5 15 Argo CD 2 4 0 introduced a breaking API change renaming the project filter to projects Impact to API clients A similar issue applies to other API clients which communicate with the Argo CD API server via its REST API If the client uses the project field to filter projects the filter will not be applied The failing project filter could have detrimental consequences if for example you rely on it to list Applications to be deleted Impact to CLI clients CLI clients older that v2 4 0 rely on client side filtering and are not impacted by this bug How to fix the problem Upgrade to Argo CD 2 4 27 2 5 15 or 2 6 6 This version of Argo CD will accept both project and projects as valid filters Broken matrix nested git files generator in 2 5 14 Argo CD 2 5 14 introduced a bug in the matrix nested git files generator The bug only applies when the git files generator is the second generator nested under a matrix For example yaml apiVersion argoproj io v1alpha1 kind ApplicationSet metadata name guestbook spec generators matrix generators clusters git repoURL https git example com org repo git revision HEAD files path defaults yaml template The nested git files generator will produce no parameters causing the matrix generator to also produce no parameters This will cause the ApplicationSet to produce no Applications If the ApplicationSet controller is configured with the ability to delete applications https argo cd readthedocs io en latest operator manual applicationset Controlling Resource Modification it will delete all Applications which were previously created by the ApplicationSet To avoid this issue upgrade directly to 2 5 15 or 2 6 6 Configure RBAC to account for new applicationsets resource 2 5 introduces a new applicationsets RBAC resource https argo cd readthedocs io en stable operator manual rbac rbac resources and actions When you upgrade to 2 5 RBAC policies with in the resource field and create update delete get or in the action field will automatically grant the applicationsets privilege To avoid granting the new privilege replace the existing policy with a list of new policies explicitly listing the old resources Example Old csv p role org admin create allow New csv p role org admin clusters create allow p role org admin projects create allow p role org admin applications create allow p role org admin repositories create allow p role org admin certificates create allow p role org admin accounts create allow p role org admin gpgkeys create allow p role org admin exec create allow Note that applicationsets is missing from the list to preserve pre 2 5 permissions argocd cm plugins CMPs are deprecated Starting with Argo CD v2 5 installing config management plugins CMPs via the argocd cm ConfigMap is deprecated Support will be removed in v2 7 You can continue to use the plugins by installing them as sidecars https argo cd readthedocs io en stable user guide config management plugins on the repo server Deployment Sidecar plugins are significantly more secure Plugin code runs in its own container with an almost completely isolated filesystem If an attacker compromises a plugin the attacker s ability to cause harm is significantly mitigated To determine whether argocd cm plugins are still in use scan your argocd repo server and argocd server logs for the following message argocd cm plugins are deprecated and support will be removed in v2 6 Upgrade your plugin to be installed via sidecar https argo cd readthedocs io en stable user guide 
## Dex server TLS configuration

In order to secure the communications between the dex server and the Argo CD API server, TLS is now enabled by default on the dex server.

By default, without configuration, the dex server will generate a self-signed certificate upon startup. However, we recommend that users configure their own TLS certificate using the `argocd-dex-server-tls` secret. Please refer to the [TLS configuration guide](../tls.md#configuring-tls-to-argocd-dex-server) for more information.

## Invalid `users.session.duration` values now fall back to 24h

Before v2.5, an invalid `users.session.duration` value in argocd-cm would 1) log a warning and 2) result in user sessions having no duration limit. Starting with v2.5, invalid duration values will fall back to the default value of 24 hours, with a warning.

## Out-of-bounds symlinks now blocked at fetch

There have been several path traversal and identification vulnerabilities disclosed in the past related to symlinks. To help prevent any further vulnerabilities, we now scan all repositories and Helm charts for out-of-bounds symlinks at the time they are fetched and block further processing if they are found.

An out-of-bounds symlink is defined as any symlink that leaves the root of the Git repository or Helm chart, even if the final target is within the root.

If an out-of-bounds symlink is found, a warning will be printed to the repo-server console, and an error will be shown in the UI or CLI.

Below is an example directory structure showing valid symlinks and invalid symlinks:

```
chart
├── Chart.yaml
├── values
│   └── values.yaml
├── bad-link.yaml -> ../out-of-bounds.yaml           # Blocked
├── bad-link-2.yaml -> ../chart/values/values.yaml   # Blocked, because it leaves the root
├── bad-link-3.yaml -> /absolute/link.yaml           # Blocked
└── good-link.yaml -> values/values.yaml             # OK
```

If you rely on out-of-bounds symlinks, this check can be disabled one of three ways:

1. The `--allow-oob-symlinks` argument on the repo-server.
2. The `reposerver.allow.oob.symlinks` key, if you are using `argocd-cmd-params-cm` (a sketch follows below).
3. Directly setting the `ARGOCD_REPO_SERVER_ALLOW_OOB_SYMLINKS` environment variable on the repo-server.

It is **strongly recommended** to leave this check enabled. Disabling the check will not allow _all_ out-of-bounds symlinks. Those will still be blocked for things like values files in Helm charts, but symlinks which are not explicitly blocked by other checks will be allowed.
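If you do decide to disable the check, option 2 above can be expressed in the `argocd-cmd-params-cm` ConfigMap. A minimal sketch, assuming the default `argocd` namespace (leaving the check enabled is still strongly recommended):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  # Allow out-of-bounds symlinks in repositories and Helm charts (not recommended).
  reposerver.allow.oob.symlinks: "true"
```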
## Deprecated client-side manifest diffs

When using `argocd app diff --local`, code from the repo-server is run on the user's machine in order to locally generate manifests for comparing against the live manifests of an app. However, this requires that the necessary tools (Helm, Kustomize, etc.) are installed with the correct versions. Even worse, it does not support Config Management Plugins (CMPs) whatsoever.

In order to support CMPs and reduce local requirements, we have implemented server-side generation of local manifests via the `--server-side-generate` argument. For example, `argocd app diff --local repoDir --server-side-generate` will upload the contents of `repoDir` to the repo-server and run your manifest generation pipeline against it, the same as it would for a Git repo.

In v2.7, the `--server-side-generate` argument will become the default, and client-side generation will be supported as an alternative.

!!! warning
    The semantics of where Argo will start generating manifests within a repo has changed between client-side and server-side generation. With client-side generation, the application's path (`spec.source.path`) was ignored, and the value of `--local` was effectively used as the manifest directory ("repo root + path", relative to `--local`). For example, given an application with an application path of `manifests`, you would have had to run `argocd app diff --local ./yourRepo/manifests`. This behavior did not match the repo-server's process of downloading the full repo/chart and then beginning generation in the path specified in the application manifest.

    When switching to server-side generation, `--local` should point to the root of your repo, without including your `spec.source.path`. This is especially important to keep in mind when `--server-side-generate` becomes the default in v2.7. Existing scripts utilizing `diff --local` may break in v2.7 if `spec.source.path` was not `.`.

## Upgraded Kustomize Version

The bundled Kustomize version has been upgraded from 4.4.1 to 4.5.7.

## Upgraded Helm Version

Note that bundled Helm version has been upgraded from 3.9.0 to 3.10.1.

## Upgraded HAProxy version

The HAProxy version in the HA manifests has been upgraded from 2.0.25 to 2.6.2. To read about the changes/improvements, see the HAProxy major release announcements: [2.1.0](https://www.mail-archive.com/haproxy@formilux.org/msg35491.html), [2.2.0](https://www.mail-archive.com/haproxy@formilux.org/msg37852.html), [2.3.0](https://www.mail-archive.com/haproxy@formilux.org/msg38812.html), [2.4.0](https://www.mail-archive.com/haproxy@formilux.org/msg40499.html), [2.5.0](https://www.mail-archive.com/haproxy@formilux.org/msg41508.html), and [2.6.0](https://www.mail-archive.com/haproxy@formilux.org/msg42371.html).

## Logs RBAC enforcement will remain opt-in

This note is just for clarity. No action is required.

We [expected](../upgrading/2.3-2.4.md#enable-logs-rbac-enforcement) to enable logs RBAC enforcement by default in 2.5. We have decided not to do that in the 2.x series due to disruption for users of [Project Roles](../../user-guide/projects.md#project-roles).

## `argocd app create` for old CLI versions fails with API version >=2.5.16

Starting with Argo CD 2.5.16, the API returns `PermissionDenied` instead of `NotFound` for Application `GET` requests if the Application does not exist.

The Argo CD CLI, in versions starting with 2.5.0-rc1 and before versions 2.5.16 and 2.6.7, does a `GET` request before the `POST` request in `argocd app create`. The command does not gracefully handle the `PermissionDenied` response and will therefore fail to create/update the Application.

To solve the issue, upgrade the CLI to at least 2.5.16 or 2.6.7. CLIs older than 2.5.0-rc1 are unaffected.

## Golang upgrade in 2.5.20

In 2.5.20, we upgrade the Golang version used to build Argo CD from 1.18 to 1.19. If you use Argo CD as a library, you may need to upgrade your Go version.
# v2.2 to 2.3

## Argo CD Notifications and ApplicationSet Are Bundled into Argo CD

The Argo CD Notifications and ApplicationSet are part of Argo CD now. You no longer need to install them separately.
The Notifications and ApplicationSet components are bundled into default Argo CD installation manifests.
The bundled manifests are drop-in replacements for the previous versions.
If you are using Kustomize to bundle the manifests together then just remove references to https://github.com/argoproj-labs/argocd-notifications and https://github.com/argoproj-labs/applicationset.

If you are using [the argocd-notifications helm chart](https://github.com/argoproj/argo-helm/tree/argocd-notifications-1.8.1/charts/argocd-notifications), you can move the chart [values](https://github.com/argoproj/argo-helm/blob/argocd-notifications-1.8.1/charts/argocd-notifications/values.yaml) to the `notifications` section of the argo-cd chart [values](https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L2152). Although most values remain as-is, please review the values that are relevant to you for details.

No action is required if you are using `kubectl apply`.

## Configure Additional Argo CD Binaries

We have removed non-Linux Argo CD binaries (Darwin amd64 and Windows amd64) from the image ([#7668](https://github.com/argoproj/argo-cd/pull/7668)) and the associated download buttons in the help page in the UI.

Those removed binaries will still be included in the release assets and we made those configurable in [#7755](https://github.com/argoproj/argo-cd/pull/7755).
You can add download buttons for other OS architectures by adding the following to your `argocd-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  help.download.linux-arm64: "path-or-url-to-download"
  help.download.darwin-amd64: "path-or-url-to-download"
  help.download.darwin-arm64: "path-or-url-to-download"
  help.download.windows-amd64: "path-or-url-to-download"
```

## Removed Python from the base image

If you are using a [Config Management Plugin](../config-management-plugins.md) that relies on Python, you will need to build a custom image on the Argo CD base to install Python.

## Upgraded Kustomize Version

Note that bundled Kustomize version has been upgraded from 4.2.0 to 4.4.1.

## Upgraded Helm Version

Note that bundled Helm version has been upgraded from 3.7.1 to 3.8.0.

## Support for private repo SSH keys using the SHA-1 signature hash algorithm is removed in 2.3.7

Argo CD 2.3.7 upgraded its base image from Ubuntu 21.04 to Ubuntu 22.04, which upgraded OpenSSH to 8.9. OpenSSH starting with 8.8 [dropped support for the `ssh-rsa` SHA-1 key signature algorithm](https://www.openssh.com/txt/release-8.8).

The signature algorithm is _not_ the same as the algorithm used when generating the key. There is no need to update keys.

The signature algorithm is negotiated with the SSH server when the connection is being set up. The client offers its list of accepted signature algorithms, and if the server has a match, the connection proceeds. For most SSH servers on up-to-date git providers, acceptable algorithms other than `ssh-rsa` should be available.

Before upgrading to Argo CD 2.3.7, check whether your git provider(s) using SSH authentication support algorithms newer than `ssh-rsa`.

1. Make sure your version of SSH is >= 8.9 (the version used by Argo CD). If not, upgrade it before proceeding.
```shell ssh -V ``` Example output: `OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022` 2. Once you have a recent version of OpenSSH, follow the directions from the [OpenSSH 8.8 release notes](https://www.openssh.com/txt/release-8.7): > To check whether a server is using the weak ssh-rsa public key > algorithm, for host authentication, try to connect to it after > removing the ssh-rsa algorithm from ssh(1)'s allowed list: > > ```shell > ssh -oHostKeyAlgorithms=-ssh-rsa user@host > ``` > > If the host key verification fails and no other supported host key > types are available, the server software on that host should be > upgraded. If the server does not support an acceptable version, you will get an error similar to this; ``` $ ssh -oHostKeyAlgorithms=-ssh-rsa vs-ssh.visualstudio.com Unable to negotiate with 20.42.134.1 port 22: no matching host key type found. Their offer: ssh-rsa ``` This indicates that the server needs to update its supported key signature algorithms, and Argo CD will not connect to it. ### Workaround The [OpenSSH 8.8 release notes](https://www.openssh.com/txt/release-8.8) describe a workaround if you cannot change the server's key signature algorithms configuration. > Incompatibility is more likely when connecting to older SSH > implementations that have not been upgraded or have not closely tracked > improvements in the SSH protocol. For these cases, it may be necessary > to selectively re-enable RSA/SHA1 to allow connection and/or user > authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms > options. For example, the following stanza in ~/.ssh/config will enable > RSA/SHA1 for host and user authentication for a single destination host: > > ``` > Host old-host > HostkeyAlgorithms +ssh-rsa > PubkeyAcceptedAlgorithms +ssh-rsa > ``` > > We recommend enabling RSA/SHA1 only as a stopgap measure until legacy > implementations can be upgraded or reconfigured with another key type > (such as ECDSA or Ed25519). To apply this to Argo CD, you could create a ConfigMap with the desired ssh config file and then mount it at `/home/argocd/.ssh/config`.
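One way to wire up that workaround, as a rough sketch (the ConfigMap name is arbitrary, and the patch assumes the `argocd-repo-server` Deployment, which is the component that performs Git operations over SSH):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-ssh-config
  namespace: argocd
data:
  config: |
    Host old-host
      HostkeyAlgorithms +ssh-rsa
      PubkeyAcceptedAlgorithms +ssh-rsa
---
# Partial patch of the argocd-repo-server Deployment: mount the file at /home/argocd/.ssh/config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          volumeMounts:
            - name: ssh-config
              mountPath: /home/argocd/.ssh/config
              subPath: config
      volumes:
        - name: ssh-config
          configMap:
            name: argocd-ssh-config
```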
# v1.8 to 2.0

## Redis Upgraded to v6.2.1

The bundled Redis version has been upgraded to v6.2.1. Redis itself should be able to upgrade with no downtime, as Argo CD does not use it as a persistent store. However, if you are running Argo CD in production with multiple users, it is recommended to upgrade during off-peak hours to avoid user-visible failures.

## Environment variables expansion

Argo CD supports using [environment variables](../../../user-guide/build-environment/) in config management tool parameters. The expansion logic has been improved and now expands missing environment variables into an empty string.

## Docker image migrated to use Ubuntu as base

The official Docker image has been migrated to use `ubuntu:20.10` instead of `debian:10-slim` as base image. While this should not affect user experience, you might be affected if you use custom-built images and/or include third party tools in custom-built images. Please make sure that your custom tools are still working with the update to v2.0 before deploying it onto production.

## Container registry switched to quay.io and sundown of Docker Hub repository

Due to Docker Hub's new rate-limiting and retention policies, the Argo project has decided to switch to the [quay.io](https://quay.io) registry as a new home for all images published by its sub-projects.

As of Argo CD version 2.0, the installation manifests are configured to pull the container images from `quay.io` and we announce the **sundown** of the existing Docker Hub repositories. For the 2.0 release this means we will still push to both registries, but we will stop pushing images to Docker Hub once Argo CD 2.1 has been released.

Please make sure that your clusters can pull from the `quay.io` registry. If you aren't able to do so in a timely manner, you can manually change the container image slugs in the installation to Docker Hub as a workaround to install Argo CD 2.0. This workaround will not be possible anymore with 2.1, however.

## Dex tool migrated from argocd-util to argocd-dex

The dex commands `rundex` and `gendexcfg` have been migrated from `argocd-util` to `argocd-dex`. This means that you need to update the `argocd-dex-server` deployment's commands to install the `argocd-dex` binary instead of `argocd-util` in the init container, and to run the dex command from `argocd-dex` instead of `argocd-util`:

```yaml
initContainers:
- command:
  - cp
  - -n
  - /usr/local/bin/argocd
  - /shared/argocd-dex
```

```yaml
containers:
- command:
  - /shared/argocd-dex
  - rundex
```

Note that starting from v2.0, the argocd binary behaviour has changed. It has all argocd binaries, such as `argocd-dex`, `argocd-server`, `argocd-repo-server`, `argocd-application-controller`, `argocd-util`, and `argocd`, baked inside. The binary changes behaviour based on its name.

## Updated retry params type from String to Duration for app sync

The App Sync command exposes certain retry options, which allow users to parameterize sync retries. Two of those params, `retry-backoff-duration` and `retry-backoff-max-duration`, were declared as type `string` rather than `duration`. This allowed users to provide values for these flags without a time unit (seconds, minutes, hours, ...) or even an arbitrary string. Since we have migrated from `string` to `duration`, it is now mandatory for users to provide a unit (a valid duration).
```bash
EXAMPLE:

argocd app sync <app-name> --retry-backoff-duration=10   -> invalid
argocd app sync <app-name> --retry-backoff-duration=10s  -> valid
```

## Switch to Golang 1.16

The official Argo CD binaries are now being built using Go 1.16, making a jump from the previous 1.14.x. Users should note that Go 1.15 introduced deprecation of validating server names against the `CommonName` property of a certificate when performing TLS connections. If you have repository servers with an incompatible certificate, connections to those servers might break. You will have to issue correct certificates to unbreak such a situation.

## Migration of CRDs from apiextensions/v1beta1 to apiextensions/v1

Our CRDs (`Application` and `AppProject`) have been moved from the deprecated `apiextensions/v1beta1` to the `apiextensions/v1` API group. This does **not** affect the version of the CRDs themselves. We do not expect that changes to existing CRs for `Application` and `AppProject` are required from users, or that this change otherwise requires any action; this note is just included for completeness.

## Helm v3 is now the default when rendering Charts

With this release, we made Helm v3 the default version for rendering any Helm charts through Argo CD. We also disabled the Helm version auto-detection depending on the `apiVersion` field of the `Chart.yaml`, so the charts will be rendered using Helm v3 regardless of what's in the Chart's `apiVersion` field.

This can result in minor out-of-sync conditions on your Applications that were previously rendered using Helm v2 (e.g. a change in one of the annotations that Helm adds). You can fix this by syncing the Application. If you have existing Charts that need to be rendered using Helm v2, you will need to explicitly configure your Application to use Helm v2 for rendering the chart, as described [here](../../user-guide/helm.md#helm-version).

Please also note that Helm v2 is now considered deprecated in Argo CD, as it will not receive any updates from the upstream Helm project anymore. We will still ship the Helm v2 binary for the next two releases, but it will be subject to removal after that grace period. Users are encouraged to upgrade any Charts that still require Helm v2 to be compatible with Helm v3.

## Kustomize version updated to v3.9.4

Argo CD now ships with Kustomize v3.9.4 by default. Please make sure that your manifests will render correctly with this Kustomize version. If you need backwards compatibility with a previous version of Kustomize, please consider setting up a custom Kustomize version and configuring your Applications to be rendered using that specific version.
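If you do need to pin an older Kustomize, here is a rough sketch of the custom-version approach; the version label and binary path below are illustrative, and the binary must actually exist in the repo-server container (for example via a volume or a custom image):

```yaml
# argocd-cm: register an extra Kustomize binary under a version label.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  kustomize.path.v3.5.4: /custom-tools/kustomize_3_5_4
```

```yaml
# Application (fragment): request that version explicitly.
spec:
  source:
    kustomize:
      version: v3.5.4
```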
# v2.3 to 2.4

## Known Issues

### Broken `project` filter before 2.4.27

Argo CD 2.4.0 introduced a breaking API change, renaming the `project` filter to `projects`.

#### Impact to API clients

A similar issue applies to other API clients which communicate with the Argo CD API server via its REST API. If the client uses the `project` field to filter projects, the filter will not be applied. **The failing project filter could have detrimental consequences if, for example, you rely on it to list Applications to be deleted.**

#### Impact to CLI clients

CLI clients older than v2.4.0 rely on client-side filtering and are not impacted by this bug.

#### How to fix the problem

Upgrade to Argo CD >=2.4.27, >=2.5.15, or >=2.6.6. This version of Argo CD will accept both `project` and `projects` as valid filters.

## KSonnet support is removed

Ksonnet was deprecated in [2019](https://github.com/ksonnet/ksonnet/pull/914/files) and is no longer maintained. The time has come to remove it from Argo CD.

## Helm 2 support is removed

Helm 2 has not been officially supported since [Nov 2020](https://helm.sh/blog/helm-2-becomes-unsupported/). In order to ensure a smooth transition, Helm 2 support was preserved in Argo CD. We feel that Helm 3 is stable, and it is time to drop Helm 2 support.

## Support for private repo SSH keys using the SHA-1 signature hash algorithm is removed

Note: this change was back-ported to 2.3.7 and 2.2.12.

Argo CD 2.4 upgraded its base image from Ubuntu 20.04 to Ubuntu 22.04, which upgraded OpenSSH to 8.9. OpenSSH starting with 8.8 [dropped support for the `ssh-rsa` SHA-1 key signature algorithm](https://www.openssh.com/txt/release-8.8).

The signature algorithm is _not_ the same as the algorithm used when generating the key. There is no need to update keys.

The signature algorithm is negotiated with the SSH server when the connection is being set up. The client offers its list of accepted signature algorithms, and if the server has a match, the connection proceeds. For most SSH servers on up-to-date git providers, acceptable algorithms other than `ssh-rsa` should be available.

Before upgrading to Argo CD 2.4, check whether your git provider(s) using SSH authentication support algorithms newer than `ssh-rsa`.

1. Make sure your version of SSH is >= 8.9 (the version used by Argo CD). If not, upgrade it before proceeding.

```shell
ssh -V
```

Example output: `OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022`

2. Once you have a recent version of OpenSSH, follow the directions from the [OpenSSH 8.8 release notes](https://www.openssh.com/txt/release-8.7):

> To check whether a server is using the weak ssh-rsa public key
> algorithm, for host authentication, try to connect to it after
> removing the ssh-rsa algorithm from ssh(1)'s allowed list:
>
> ```shell
> ssh -oHostKeyAlgorithms=-ssh-rsa user@host
> ```
>
> If the host key verification fails and no other supported host key
> types are available, the server software on that host should be
> upgraded.

If the server does not support an acceptable version, you will get an error similar to this:

```
$ ssh -oHostKeyAlgorithms=-ssh-rsa vs-ssh.visualstudio.com
Unable to negotiate with 20.42.134.1 port 22: no matching host key type found. Their offer: ssh-rsa
```

This indicates that the server needs to update its supported key signature algorithms, and Argo CD will not connect to it.
### Workaround

The [OpenSSH 8.8 release notes](https://www.openssh.com/txt/release-8.8) describe a workaround if you cannot change the server's key signature algorithms configuration.

> Incompatibility is more likely when connecting to older SSH
> implementations that have not been upgraded or have not closely tracked
> improvements in the SSH protocol. For these cases, it may be necessary
> to selectively re-enable RSA/SHA1 to allow connection and/or user
> authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms
> options. For example, the following stanza in ~/.ssh/config will enable
> RSA/SHA1 for host and user authentication for a single destination host:
>
> ```
> Host old-host
>     HostkeyAlgorithms +ssh-rsa
>     PubkeyAcceptedAlgorithms +ssh-rsa
> ```
>
> We recommend enabling RSA/SHA1 only as a stopgap measure until legacy
> implementations can be upgraded or reconfigured with another key type
> (such as ECDSA or Ed25519).

To apply this to Argo CD, you could create a ConfigMap with the desired ssh config file and then mount it at `/home/argocd/.ssh/config`.

## Configure RBAC to account for new `exec` resource

2.4 introduces a new `exec` [RBAC resource](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#rbac-resources-and-actions).

When you upgrade to 2.4, RBAC policies with `*` in the resource field and `create` or `*` in the action field will automatically grant the `exec` privilege.

To avoid granting the new privilege, replace the existing policy with a list of new policies explicitly listing the old resources.

The exec feature is [disabled by default](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#exec-resource), but it is still a good idea to double-check your RBAC configuration to enforce the least necessary privileges.

### Example

Old:

```csv
p, role:org-admin, *, create, my-proj/*, allow
```

New:

```csv
p, role:org-admin, clusters, create, my-proj/*, allow
p, role:org-admin, projects, create, my-proj/*, allow
p, role:org-admin, applications, create, my-proj/*, allow
p, role:org-admin, repositories, create, my-proj/*, allow
p, role:org-admin, certificates, create, my-proj/*, allow
p, role:org-admin, accounts, create, my-proj/*, allow
p, role:org-admin, gpgkeys, create, my-proj/*, allow
```

## Enable logs RBAC enforcement

2.4 introduced `logs` as a new RBAC resource. In 2.3, users with `applications, get` access automatically get logs access. <del>In 2.5, you will have to explicitly grant `logs, get` access. Logs RBAC enforcement can be enabled with a flag in 2.4. We recommend enabling the flag now for an easier upgrade experience in 2.5.</del>

!!! important
    Logs RBAC enforcement **will not** be enabled by default in 2.5. This decision [was made](https://github.com/argoproj/argo-cd/issues/10551#issuecomment-1242303457) to avoid breaking logs access under [Project Roles](../../user-guide/projects.md#project-roles), which do not provide a mechanism to grant `logs` resource access.

To enable logs RBAC enforcement, add this to your argocd-cm ConfigMap:

```yaml
server.rbac.log.enforce.enable: "true"
```

If you want to allow the same users to continue to have logs access, just find every line that grants `applications, get` access and also grant `logs, get`.
### Example

Old:

```csv
p, role:staging-db-admins, applications, get, staging-db-admins/*, allow
p, role:test-db-admins, applications, *, staging-db-admins/*, allow
```

New:

```csv
p, role:staging-db-admins, applications, get, staging-db-admins/*, allow
p, role:staging-db-admins, logs, get, staging-db-admins/*, allow
p, role:test-db-admins, applications, *, staging-db-admins/*, allow
p, role:test-db-admins, logs, get, staging-db-admins/*, allow
```

### Pod Logs UI

Since 2.4.9, the LOGS tab in the pod view is visible in the UI only for users with an explicit policy allowing `logs, get`.

### Known pod logs UI issue prior to 2.4.9

When a user without an explicit policy allowing `logs, get` presses the "LOGS" tab in the pod view, a red "unable to load data: Internal error" message appears at the bottom of the screen, and "Failed to load data, please try again" is displayed.

## Test repo-server with its new dedicated Service Account

As a security enhancement, the argocd-repo-server Deployment uses its own Service Account instead of `default`.

If you have a custom environment that might depend on repo-server using the `default` Service Account (such as a plugin that uses the Service Account for auth), be sure to test before deploying the 2.4 upgrade to production.

## Plugins

### Remove the shared volume from any sidecar plugins

As a security enhancement, [sidecar plugins](../config-management-plugins.md#option-2-configure-plugin-via-sidecar) no longer share the /tmp directory with the repo-server.

If you have one or more sidecar plugins enabled, replace the /tmp volume mount of each sidecar with a volume specific to that plugin.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
      - name: your-plugin-name
        volumeMounts:
        - mountPath: /tmp
          name: your-plugin-name-tmp
      volumes:
      # Add this volume.
      - name: your-plugin-name-tmp
        emptyDir: {}
```

### Update plugins to use newly-prefixed environment variables

If you use plugins that depend on user-supplied environment variables, then they must be updated to be compatible with Argo CD 2.4. Here is an example of user-supplied environment variables in the `plugin` section of an Application spec:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    plugin:
      env:
        - name: FOO
          value: bar
```

Going forward, all user-supplied environment variables will be prefixed with `ARGOCD_ENV_` before being sent to the plugin's `init`, `generate`, or `discover` commands. This prevents users from setting potentially-sensitive environment variables.

If you have written a custom plugin which handles user-provided environment variables, update it to handle the new prefix.

If you use a third-party plugin which does not explicitly advertise Argo CD 2.4 support, it might not handle the prefixed environment variables. Open an issue with the plugin's authors and confirm support before upgrading to Argo CD 2.4.

### Confirm sidecar plugins have all necessary environment variables

A bug in < 2.4 caused `init` and `generate` commands to receive environment variables from the main repo-server container, taking precedence over environment variables from the plugin's sidecar.

Starting in 2.4, sidecar plugins will not receive environment variables from the main repo-server container. Make sure that any environment variables necessary for the sidecar plugin to function are set on the sidecar plugin.

argocd-cm plugins will continue to receive environment variables from the main repo-server container.
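For example, a minimal sketch of setting a variable directly on the sidecar plugin container (the container name and the variable are placeholders for whatever your plugin actually needs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
      - name: your-plugin-name
        env:
        # Previously this may have been inherited from the main repo-server
        # container; starting in 2.4 it must be set on the sidecar itself.
        - name: SOME_REQUIRED_VAR
          value: "some-value"
```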
# v2.6 to 2.7

## Configure RBAC to account for new `extensions` resource

2.7 introduces the new [Proxy Extensions][1] feature with a new `extensions` [RBAC resource][2]. When you upgrade to 2.7, RBAC policies with `*` in the *resource* field and `*` in the *action* field will automatically grant the `extensions` privilege.

The Proxy Extension feature is disabled by default; however, it is recommended to check your RBAC configurations to enforce the least necessary privileges.

### Example

Old:

```csv
p, role:org-admin, *, *, *, allow
```

New:

```csv
p, role:org-admin, clusters, create, my-proj/*, allow
p, role:org-admin, projects, create, my-proj/*, allow
p, role:org-admin, applications, create, my-proj/*, allow
p, role:org-admin, repositories, create, my-proj/*, allow
p, role:org-admin, certificates, create, my-proj/*, allow
p, role:org-admin, accounts, create, my-proj/*, allow
p, role:org-admin, gpgkeys, create, my-proj/*, allow

# If you don't want to grant the new permission, don't include the following line
p, role:org-admin, extensions, invoke, my-proj/*, allow
```

## Upgraded Helm Version

Note that bundled Helm version has been upgraded from 3.10.3 to 3.11.2.

## Upgraded Kustomize Version

Note that bundled Kustomize version has been upgraded from 4.5.7 to 5.0.1.

## Notifications: `^` behavior change in Sprig's semver functions

Argo CD 2.7 upgrades Sprig templating specifically within Argo CD notifications to v3. That upgrade includes an upgrade of [Masterminds/semver](https://github.com/Masterminds/semver/releases) to v3.

Masterminds/semver v3 changed the behavior of the `^` prefix in semantic version constraints. If you are using sprig template functions in your notifications templates which include references to [Sprig's semver functions](https://masterminds.github.io/sprig/semver.html) and use the `^` prefix, read the [Masterminds/semver changelog](https://github.com/Masterminds/semver/releases/tag/v3.0.0) to understand how your notifications' behavior may change.

## Tini as entrypoint

The manifests are now using [`tini` as entrypoint][3], instead of `entrypoint.sh`. Until 2.8, `entrypoint.sh` is retained for upgrade compatibility. This means that the deployment manifests have to be updated after upgrading to 2.7, and before upgrading to 2.8 later. If the manifests are not updated before moving to 2.8, the containers will not be able to start.

[1]: ../../developer-guide/extensions/proxy-extensions.md
[2]: https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#the-extensions-resource
[3]: https://github.com/argoproj/argo-cd/pull/12707

## Deep Links template updates

Deep Links now allow you to access other values like `cluster`, `project`, `application` and `resource` in the url and condition templates for specific categories of links. The templating syntax has also been updated to be prefixed with the type of resource you want to access. For example, previously, if you had a `resource.links` config like:

```yaml
resource.links: |
  - url: https://mycompany.splunk.com?search=
    title: Splunk
    if: kind == "Pod" || kind == "Deployment"
```

This would become:

```yaml
resource.links: |
  - url: https://mycompany.splunk.com?search=&env=
    title: Splunk
    if: resource.kind == "Pod" || resource.kind == "Deployment"
```

Read the full [documentation](../deep_links.md) to see all possible combinations of values accessible for each category of links.

## Support of `helm.sh/resource-policy` annotation

Argo CD now supports the `helm.sh/resource-policy` annotation to control the deletion of resources.
The behavior is the same as the behavior of `argocd.argoproj.io/sync-options: Delete=false` annotation: if the annotation is present and set to `keep`, the resource will not be deleted when the application is deleted. ## Check your Kustomize patches for `--redis` changes Starting in Argo CD 2.7, the install manifests no longer pass the Redis server name via `--redis`. If your environment uses Kustomize JSON patches to modify the Redis server name, the patch might break when you upgrade to the 2.7 manifests. If it does, you can remove the patch and instead set the Redis server name via the `redis.server` field in the argocd-cmd-params-cm ConfigMap. That value will be passed to the necessary components via `valueFrom` environment variables. ## `argocd applicationset` CLI incompatibilities for ApplicationSets with list generators If you are running Argo CD v2.7.0-2.7.2 server-side, then CLI versions outside that range will incorrectly handle list generators. That is because the gRPC interface for those versions used the `elements` field number for the new `elementsYaml` field. If you are running the Argo CD CLI versions v2.7.0-2.7.2 with a server-side version of v2.7.3 or later, then the CLI will send the contents of the `elements` field to the server, which will interpret it as the `elementsYaml` field. This will cause the ApplicationSet to fail at runtime with an error similar to this: ``` error unmarshling decoded ElementsYaml error converting YAML to JSON: yaml: control characters are not allowed ``` Be sure to use CLI version v2.7.3 or later with server-side version v2.7.3 or later.
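Before using ApplicationSet list generators from the CLI, it can help to confirm which client and server versions are in play. A simple check (the output format varies slightly between releases):

```shell
# Prints the CLI (client) version and, when logged in, the API server version.
argocd version

# Client version only, without contacting the server.
argocd version --client
```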
argocd
# v2.6 to 2.7

## Configure RBAC to account for new `extensions` resource

2.7 introduces the new [Proxy Extensions][1] feature with a new `extensions` [RBAC resource][2].

When you upgrade to 2.7, RBAC policies with `*` in the *resource* field and `*` in the *action* field will automatically grant the `extensions` privilege.

The Proxy Extension feature is disabled by default, however it is recommended to check your RBAC configurations to enforce the least necessary privileges.

Example

Old:

```csv
p, role:org-admin, *, *, *, allow
```

New:

```csv
p, role:org-admin, clusters, create, my-proj/*, allow
p, role:org-admin, projects, create, my-proj/*, allow
p, role:org-admin, applications, create, my-proj/*, allow
p, role:org-admin, repositories, create, my-proj/*, allow
p, role:org-admin, certificates, create, my-proj/*, allow
p, role:org-admin, accounts, create, my-proj/*, allow
p, role:org-admin, gpgkeys, create, my-proj/*, allow

# If you don't want to grant the new permission, don't include the following line
p, role:org-admin, extensions, invoke, my-proj/*, allow
```

## Upgraded Helm Version

Note that the bundled Helm version has been upgraded from 3.10.3 to 3.11.2.

## Upgraded Kustomize Version

Note that the bundled Kustomize version has been upgraded from 4.5.7 to 5.0.1.

## Notifications: behavior change in Sprig's semver functions

Argo CD 2.7 upgrades Sprig templating (specifically within Argo CD notifications) to v3. That upgrade includes an upgrade of [Masterminds/semver](https://github.com/Masterminds/semver/releases) to v3.

Masterminds/semver v3 changed the behavior of the `^` prefix in semantic version constraints. If you are using Sprig template functions in your notifications templates which include references to [Sprig's semver functions](https://masterminds.github.io/sprig/semver.html) and use the `^` prefix, read the [Masterminds/semver changelog](https://github.com/Masterminds/semver/releases/tag/v3.0.0) to understand how your notifications behavior may change.

## Tini as entrypoint

The manifests are now using [`tini` as entrypoint][3] instead of `entrypoint.sh`. Until 2.8, `entrypoint.sh` is retained for upgrade compatibility. This means that the deployment manifests have to be updated after upgrading to 2.7, and before upgrading to 2.8 later. In case the manifests are updated before moving to 2.8, the containers will not be able to start.

[1]: ../../developer-guide/extensions/proxy-extensions.md
[2]: https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#the-extensions-resource
[3]: https://github.com/argoproj/argo-cd/pull/12707

## Deep Links template updates

Deep Links now allow you to access other values like `cluster`, `project`, `application`, and `resource` in the url and condition templates for specific categories of links. The templating syntax has also been updated to be prefixed with the type of resource you want to access. For example, previously, if you had a resource links config like:

```yaml
  resource.links: |
    - url: https://mycompany.splunk.com?search={{.metadata.name}}
      title: Splunk
      if: kind == "Pod" || kind == "Deployment"
```

This would become:

```yaml
  resource.links: |
    - url: https://mycompany.splunk.com?search={{.resource.metadata.name}}&env={{.project.metadata.labels.env}}
      title: Splunk
      if: resource.kind == "Pod" || resource.kind == "Deployment"
```

Read the full [documentation](../deep_links.md) to see all possible combinations of values accessible for each category of links.

## Support of `helm.sh/resource-policy` annotation

Argo CD now supports the `helm.sh/resource-policy` annotation to control the deletion of resources. The behavior is the same as the behavior of the `argocd.argoproj.io/sync-options: Delete=false` annotation: if the annotation is present and set to `keep`, the resource will not be deleted when the application is deleted.

## Check your Kustomize patches for `--redis` changes

Starting in Argo CD 2.7, the install manifests no longer pass the Redis server name via `--redis`.

If your environment uses Kustomize JSON patches to modify the Redis server name, the patch might break when you upgrade to the 2.7 manifests. If it does, you can remove the patch and instead set the Redis server name via the `redis.server` field in the `argocd-cmd-params-cm` ConfigMap (see the sketch at the end of this page). That value will be passed to the necessary components via `valueFrom` environment variables.

## `argocd applicationset` CLI incompatibilities for ApplicationSets with list generators

If you are running an Argo CD v2.7.0-2.7.2 server-side, then CLI versions outside that range will incorrectly handle list generators. That is because the gRPC interface for those versions used the `elements` field number for the new `elementsYaml` field.

If you are running the Argo CD CLI versions v2.7.0-2.7.2 with a server-side version of v2.7.3 or later, then the CLI will send the contents of the `elements` field to the server, which will interpret it as the `elementsYaml` field. This will cause the ApplicationSet to fail at runtime with an error similar to this:

```
error unmarshling decoded ElementsYaml error converting YAML to JSON: yaml: control characters are not allowed
```

Be sure to use CLI version v2.7.3 or later with server-side version v2.7.3 or later.
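As referenced in the Kustomize/Redis section above, a minimal sketch of setting the Redis server name via the `argocd-cmd-params-cm` ConfigMap instead of a Kustomize patch. The Redis service address shown is a placeholder; point it at your own service name and port:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Placeholder address: replace with your own Redis service and port
  redis.server: my-redis.argocd.svc.cluster.local:6379
```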
# Keycloak # Integrating Keycloak and ArgoCD These instructions will take you through the entire process of getting your ArgoCD application authenticating with Keycloak. You will create a client within Keycloak and configure ArgoCD to use Keycloak for authentication, using groups set in Keycloak to determine privileges in Argo. ## Creating a new client in Keycloak First we need to setup a new client. Start by logging into your keycloak server, select the realm you want to use (`master` by default) and then go to __Clients__ and click the __Create client__ button at the top. ![Keycloak add client](../../assets/keycloak-add-client.png "Keycloak add client") Enable the __Client authentication__. ![Keycloak add client Step 2](../../assets/keycloak-add-client_2.png "Keycloak add client Step 2") Configure the client by setting the __Root URL__, __Web origins__, __Admin URL__ to the hostname (https://{hostname}). Also you can set __Home URL__ to your _/applications_ path and __Valid Post logout redirect URIs__ to "+". The Valid Redirect URIs should be set to https://{hostname}/auth/callback (you can also set the less secure https://{hostname}/* for testing/development purposes, but it's not recommended in production). ![Keycloak configure client](../../assets/keycloak-configure-client.png "Keycloak configure client") Make sure to click __Save__. There should be a tab called __Credentials__. You can copy the Secret that we'll use in our ArgoCD configuration. ![Keycloak client secret](../../assets/keycloak-client-secret.png "Keycloak client secret") ## Configuring the groups claim In order for ArgoCD to provide the groups the user is in we need to configure a groups claim that can be included in the authentication token. To do this we'll start by creating a new __Client Scope__ called _groups_. ![Keycloak add scope](../../assets/keycloak-add-scope.png "Keycloak add scope") Once you've created the client scope you can now add a Token Mapper which will add the groups claim to the token when the client requests the groups scope. In the Tab "Mappers", click on "Configure a new mapper" and choose __Group Membership__. Make sure to set the __Name__ as well as the __Token Claim Name__ to _groups_. Also disable the "Full group path". ![Keycloak groups mapper](../../assets/keycloak-groups-mapper.png "Keycloak groups mapper") We can now configure the client to provide the _groups_ scope. Go back to the client we've created earlier and go to the Tab "Client Scopes". Click on "Add client scope", choose the _groups_ scope and add it either to the __Default__ or to the __Optional__ Client Scope. If you put it in the Optional category you will need to make sure that ArgoCD requests the scope in its OIDC configuration. Since we will always want group information, I recommend using the Default category. ![Keycloak client scope](../../assets/keycloak-client-scope.png "Keycloak client scope") Create a group called _ArgoCDAdmins_ and have your current user join the group. ![Keycloak user group](../../assets/keycloak-user-group.png "Keycloak user group") ## Configuring ArgoCD OIDC Let's start by storing the client secret you generated earlier in the argocd secret _argocd-secret_. 1. First you'll need to encode the client secret in base64: `$ echo -n '83083958-8ec6-47b0-a411-a8c55381fbd2' | base64` 2. Then you can edit the secret and add the base64 value to a new key called _oidc.keycloak.clientSecret_ using `$ kubectl edit secret argocd-secret`. 
Your Secret should look something like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
data:
  ...
  oidc.keycloak.clientSecret: ODMwODM5NTgtOGVjNi00N2IwLWE0MTEtYThjNTUzODFmYmQy
  ...
```

Now we can configure the ConfigMap and add the OIDC configuration to enable our Keycloak authentication. You can use `$ kubectl edit configmap argocd-cm`.

Your ConfigMap should look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  url: https://argocd.example.com
  oidc.config: |
    name: Keycloak
    issuer: https://keycloak.example.com/realms/master
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes: ["openid", "profile", "email", "groups"]
```

Make sure that:

- __issuer__ ends with the correct realm (in this example _master_)
- on Keycloak releases older than version 17, the __issuer__ URL must include _/auth_ (in this example _/auth/realms/master_)
- __clientID__ is set to the Client ID you configured in Keycloak
- __clientSecret__ points to the right key you created in the _argocd-secret_ Secret
- __requestedScopes__ contains the _groups_ claim if you didn't add it to the Default scopes

## Configuring ArgoCD Policy

Now that we have authentication that provides groups, we want to apply a policy to these groups. We can modify the _argocd-rbac-cm_ ConfigMap using `$ kubectl edit configmap argocd-rbac-cm`.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |
    g, ArgoCDAdmins, role:admin
```

In this example we give the role _role:admin_ to all users in the group _ArgoCDAdmins_.

## Login

You can now login using our new Keycloak OIDC authentication:

![Keycloak ArgoCD login](../../assets/keycloak-login.png "Keycloak ArgoCD login")

## Troubleshoot

If ArgoCD authentication returns a 401 error, or the login attempt results in a redirect loop, restart the argocd-server pod.

```
kubectl rollout restart deployment argocd-server -n argocd
```
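If you prefer a non-interactive alternative to the `kubectl edit` steps above, a sketch using `kubectl patch` could look like the following. It reuses the placeholder client secret from step 1 and relies on `stringData`, so the API server performs the base64 encoding for you:

```bash
# Adds (or updates) the oidc.keycloak.clientSecret key in argocd-secret
kubectl patch secret argocd-secret -n argocd --type merge \
  -p '{"stringData": {"oidc.keycloak.clientSecret": "83083958-8ec6-47b0-a411-a8c55381fbd2"}}'
```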
# OneLogin !!! note "Are you using this? Please contribute!" If you're using this IdP please consider [contributing](../../developer-guide/docs-site.md) to this document. <!-- markdownlint-disable MD033 --> <div style="text-align:center"><img src="../../../assets/argo.png" /></div> <!-- markdownlint-enable MD033 --> # Integrating OneLogin and ArgoCD These instructions will take you through the entire process of getting your ArgoCD application authenticating with OneLogin. You will create a custom OIDC application within OneLogin and configure ArgoCD to use OneLogin for authentication, using UserRoles set in OneLogin to determine privileges in Argo. ## Creating and Configuring OneLogin App For your ArgoCD application to communicate with OneLogin, you will first need to create and configure the OIDC application on the OneLogin side. ### Create OIDC Application To create the application, do the following: 1. Navigate to your OneLogin portal, then Administration > Applications. 2. Click "Add App". 3. Search for "OpenID Connect" in the search field. 4. Select the "OpenId Connect (OIDC)" app to create. 5. Update the "Display Name" field (could be something like "ArgoCD (Production)". 6. Click "Save". ### Configuring OIDC Application Settings Now that the application is created, you can configure the settings of the app. #### Configuration Tab Update the "Configuration" settings as follows: 1. Select the "Configuration" tab on the left. 2. Set the "Login Url" field to https://argocd.myproject.com/auth/login, replacing the hostname with your own. 3. Set the "Redirect Url" field to https://argocd.myproject.com/auth/callback, replacing the hostname with your own. 4. Click "Save". !!! note "OneLogin may not let you save any other fields until the above fields are set." #### Info Tab You can update the "Display Name", "Description", "Notes", or the display images that appear in the OneLogin portal here. #### Parameters Tab This tab controls what information is sent to Argo in the token. By default it will contain a Groups field and "Credentials are" is set to "Configured by admin". Leave "Credentials are" as the default. How the Value of the Groups field is configured will vary based on your needs, but to use OneLogin User roles for ArgoCD privileges, configure the Value of the Groups field with the following: 1. Click "Groups". A modal appears. 2. Set the "Default if no value selected" field to "User Roles". 3. Set the transform field (below it) to "Semicolon Delimited Input". 4. Click "Save". When a user attempts to login to Argo with OneLogin, the User roles in OneLogin, say, Manager, ProductTeam, and TestEngineering, will be included in the Groups field in the token. These are the values needed for Argo to assign permissions. The groups field in the token will look similar to the following: ``` "groups": [ "Manager", "ProductTeam", "TestEngineering", ], ``` #### Rules Tab To get up and running, you do not need to make modifications to any settings here. #### SSO Tab This tab contains much of the information needed to be placed into your ArgoCD configuration file (API endpoints, client ID, client secret). Confirm "Application Type" is set to "Web". Confirm "Token Endpoint" is set to "Basic". #### Access Tab This tab controls who can see this application in the OneLogin portal. Select the roles you wish to have access to this application and click "Save". 
#### Users Tab

This tab shows you the individual users that have access to this application (usually the ones that have roles specified in the Access Tab). To get up and running, you do not need to make modifications to any settings here.

#### Privileges Tab

This tab shows which OneLogin users can configure this app. To get up and running, you do not need to make modifications to any settings here.

## Updating OIDC configuration in ArgoCD

Now that the OIDC application is configured in OneLogin, you can update the Argo configuration to communicate with OneLogin, as well as control permissions for those users that authenticate via OneLogin.

### Tell Argo where OneLogin is

Argo needs to have its config map (argocd-cm) updated in order to communicate with OneLogin. Consider the following yaml:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  url: https://<argocd.myproject.com>
  oidc.config: |
    name: OneLogin
    issuer: https://<subdomain>.onelogin.com/oidc/2
    clientID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaaaaaaaa
    clientSecret: abcdef123456
    # Optional set of OIDC scopes to request. If omitted, defaults to: ["openid", "profile", "email", "groups"]
    requestedScopes: ["openid", "profile", "email", "groups"]
```

The "url" key should have a value of the hostname of your Argo project.

The "clientID" is taken from the SSO tab of the OneLogin application.

The "issuer" is taken from the SSO tab of the OneLogin application. It is one of the issuer api endpoints.

The "clientSecret" value is a client secret located in the SSO tab of the OneLogin application.

!!! note "If you get an `invalid_client` error when trying to authenticate with OneLogin, your client secret may be incorrect. Keep in mind that in previous versions the `clientSecret` value had to be base64 encoded, but that is no longer required."

### Configure Permissions for OneLogin Auth'd Users

Permissions in ArgoCD can be configured by using the OneLogin role names that are passed in the Groups field in the token. Consider the following yaml in argocd-rbac-cm.yaml:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:org-admin, applications, *, */*, allow
    p, role:org-admin, clusters, get, *, allow
    p, role:org-admin, repositories, get, *, allow
    p, role:org-admin, repositories, create, *, allow
    p, role:org-admin, repositories, update, *, allow
    p, role:org-admin, repositories, delete, *, allow
    g, TestEngineering, role:org-admin
```

In OneLogin, a user with the user role "TestEngineering" will receive ArgoCD admin privileges when they log in to Argo via OneLogin. All other users will receive the readonly role. The key takeaway here is that "TestEngineering" is passed via the Group field in the token (which is specified in the Parameters tab in OneLogin).
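If you need more granular mappings than a single admin group, other OneLogin roles from the token's Groups field can be bound to additional Argo CD roles in the same `policy.csv`. The sketch below assumes a hypothetical `role:app-syncer` custom role is appropriate for ProductTeam users; both the role and the mapping are illustrative, not part of the OneLogin setup above:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Hypothetical custom role that may only sync applications
    p, role:app-syncer, applications, sync, */*, allow
    # Admin access for the TestEngineering role, as in the example above
    g, TestEngineering, role:org-admin
    # ProductTeam members may trigger syncs; everything else falls back to read-only
    g, ProductTeam, role:app-syncer
```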
# Okta !!! note "Are you using this? Please contribute!" If you're using this IdP please consider [contributing](../../developer-guide/docs-site.md) to this document. A working Single Sign-On configuration using Okta via at least two methods was achieved using: * [SAML (with Dex)](#saml-with-dex) * [OIDC (without Dex)](#oidc-without-dex) ## SAML (with Dex) !!! note "Okta app group assignment" The Okta app's **Group Attribute Statements** regex will be used later to map Okta groups to Argo CD RBAC roles. 1. Create a new SAML application in Okta UI. * ![Okta SAML App 1](../../assets/saml-1.png) I've disabled `App Visibility` because Dex doesn't support Provider-initiated login flows. * ![Okta SAML App 2](../../assets/saml-2.png) 1. Click `View setup instructions` after creating the application in Okta. * ![Okta SAML App 3](../../assets/saml-3.png) 1. Copy the Argo CD URL to the `argocd-cm` in the data.url <!-- markdownlint-disable MD046 --> ```yaml data: url: https://argocd.example.com ``` <!-- markdownlint-disable MD046 --> 1. Download the CA certificate to use in the `argocd-cm` configuration. * If you are using this in the caData field, you will need to pass the entire certificate (including `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----` stanzas) through base64 encoding, for example, `base64 my_cert.pem`. * If you are using the ca field and storing the CA certificate separately as a secret, you will need to mount the secret to the `dex` container in the `argocd-dex-server` Deployment. * ![Okta SAML App 4](../../assets/saml-4.png) 1. Edit the `argocd-cm` and configure the `data.dex.config` section: <!-- markdownlint-disable MD046 --> ```yaml dex.config: | logger: level: debug format: json connectors: - type: saml id: okta name: Okta config: ssoURL: https://yourorganization.oktapreview.com/app/yourorganizationsandbox_appnamesaml_2/rghdr9s6hg98s9dse/sso/saml # You need `caData` _OR_ `ca`, but not both. caData: | <CA cert passed through base64 encoding> # You need `caData` _OR_ `ca`, but not both. # Path to mount the secret to the dex container ca: /path/to/ca.pem redirectURI: https://ui.argocd.yourorganization.net/api/dex/callback usernameAttr: email emailAttr: email groupsAttr: group ``` <!-- markdownlint-enable MD046 --> ---- ### Private deployment It is possible to setup Okta SSO with a private Argo CD installation, where the Okta callback URL is the only publicly exposed endpoint. The settings are largely the same with a few changes in the Okta app configuration and the `data.dex.config` section of the `argocd-cm` ConfigMap. Using this deployment model, the user connects to the private Argo CD UI and the Okta authentication flow seamlessly redirects back to the private UI URL. Often this public endpoint is exposed through an [Ingress object](../../ingress/#private-argo-cd-ui-with-multiple-ingress-objects-and-byo-certificate). 1. Update the URLs in the Okta app's General settings * ![Okta SAML App Split](../../assets/saml-split.png) The `Single sign on URL` field points to the public exposed endpoint, and all other URL fields point to the internal endpoint. 1. Update the `data.dex.config` section of the `argocd-cm` ConfigMap with the external endpoint reference. <!-- markdownlint-disable MD046 --> ```yaml dex.config: | logger: level: debug connectors: - type: saml id: okta name: Okta config: ssoURL: https://yourorganization.oktapreview.com/app/yourorganizationsandbox_appnamesaml_2/rghdr9s6hg98s9dse/sso/saml # You need `caData` _OR_ `ca`, but not both. 
caData: | <CA cert passed through base64 encoding> # You need `caData` _OR_ `ca`, but not both. # Path to mount the secret to the dex container ca: /path/to/ca.pem redirectURI: https://external.path.to.argocd.io/api/dex/callback usernameAttr: email emailAttr: email groupsAttr: group ``` <!-- markdownlint-enable MD046 --> ### Connect Okta Groups to Argo CD Roles Argo CD is aware of user memberships of Okta groups that match the *Group Attribute Statements* regex. The example above uses the `argocd-*` regex, so Argo CD would be aware of a group named `argocd-admins`. Modify the `argocd-rbac-cm` ConfigMap to connect the `argocd-admins` Okta group to the builtin Argo CD `admin` role. <!-- markdownlint-disable MD046 --> ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm data: policy.csv: | g, argocd-admins, role:admin scopes: '[email,groups]' ``` ## OIDC (without Dex) !!! warning "Okta groups for RBAC" If you want `groups` scope returned from Okta, you will need to enable [API Access Management with Okta](https://developer.okta.com/docs/concepts/api-access-management/). This addon is free, and automatically enabled, on Okta developer edition. However, it's an optional add-on for production environments, with an additional associated cost. You may alternately add a "groups" scope and claim to the default authorization server, and then filter the claim in the Okta application configuration. It's not clear if this requires the Authorization Server add-on. If this is not an option for you, use the [SAML (with Dex)](#saml-with-dex) option above instead. !!! note These instructions and screenshots are of Okta version 2023.05.2 E. You can find the current version in the Okta website footer. First, create the OIDC integration: 1. On the `Okta Admin` page, navigate to the Okta Applications at `Applications > Applications.` 1. Choose `Create App Integration`, and choose `OIDC`, and then `Web Application` in the resulting dialogues. ![Okta OIDC app dialogue](../../assets/okta-create-oidc-app.png) 1. Update the following: 1. `App Integration name` and `Logo` - set these to suit your needs; they'll be displayed in the Okta catalogue. 1. `Sign-in redirect URLs`: Add `https://argocd.example.com/auth/callback`; replacing `argocd.example.com` with your ArgoCD web interface URL. 1. `Sign-out redirect URIs`: Add `https://argocd.example.com`; substituting the correct domain name as above. 1. Either assign groups, or choose to skip this step for now. 1. Leave the rest of the options as-is, and save the integration. ![Okta app settings](../../assets/okta-app.png) 1. Copy the `Client ID` and the `Client Secret` from the newly created app; you will need these later. Next, create a custom Authorization server: 1. On the `Okta Admin` page, navigate to the Okta API Management at `Security > API`. 1. Click `Add Authorization Server`, and assign it a name and a description. The `Audience` should match your ArgoCD URL - `https://argocd.example.com` 1. Click `Scopes > Add Scope`: 1. Add a scope called `groups`. Leave the rest of the options as default. ![Groups Scope](../../assets/okta-groups-scope.png) 1. Click `Claims > Add Claim`: 1. Add a claim called `groups`. 1. Adjust the `Include in token type` to `ID Token`, `Always`. 1. Adjust the `Value type` to `Groups`. 1. Add a filter that will match the Okta groups you want passed on to ArgoCD; for example `Regex: argocd-.*`. 1. Set `Include in` to `groups` (the scope you created above). ![Groups Claim](../../assets/okta-groups-claim.png) 1. 
Click on `Access Policies` > `Add Policy.` This policy will restrict how this authorization server is used. 1. Add a name and description. 1. Assign the policy to the client (application integration) you created above. The field should auto-complete as you type. 1. Create the policy. ![Auth Policy](../../assets/okta-auth-policy.png) 1. Add a rule to the policy: 1. Add a name; `default` is a reasonable name for this rule. 1. Fine-tune the settings to suit your organization's security posture. Some ideas: 1. uncheck all the grant types except the Authorization Code. 1. Adjust the token lifetime to govern how long a session can last. 1. Restrict refresh token lifetime, or completely disable it. ![Default rule](../../assets/okta-auth-rule.png) 1. Finally, click `Back to Authorization Servers`, and copy the `Issuer URI`. You will need this later. ### CLI login In order to login with the CLI `argocd login https://argocd.example.com --sso`, Okta requires a separate dedicated App Integration: 1. Create a new `Create App Integration`, and choose `OIDC`, and then `Single-Page Application`. 1. Update the following: 1. `App Integration name` and `Logo` - set these to suit your needs; they'll be displayed in the Okta catalogue. 1. `Sign-in redirect URLs`: Add `http://localhost:8085/auth/callback`. 1. `Sign-out redirect URIs`: Add `http://localhost:8085`. 1. Either assign groups, or choose to skip this step for now. 1. Leave the rest of the options as-is, and save the integration. 1. Copy the `Client ID` from the newly created app; `cliClientID: <Client ID>` will be used in your `argocd-cm` ConfigMap. 1. Edit your Authorization Server `Access Policies`: 1. Navigate to the Okta API Management at `Security > API`. 1. Choose your existing `Authorization Server` that was created previously. 1. Click `Access Policies` > `Edit Policy`. 1. Assign your newly created `App Integration` by filling in the text box and clicking `Update Policy`. ![Edit Policy](../../assets/okta-auth-policy-edit.png) If you haven't yet created Okta groups, and assigned them to the application integration, you should do that now: 1. Go to `Directory > Groups` 1. For each group you wish to add: 1. Click `Add Group`, and choose a meaningful name. It should match the regex or pattern you added to your custom `group` claim. 1. Click on the group (refresh the page if the new group didn't show up in the list). 1. Assign Okta users to the group. 1. Click on `Applications` and assign the OIDC application integration you created to this group. 1. Repeat as needed. Finally, configure ArgoCD itself. Edit the `argocd-cm` configmap: <!-- markdownlint-disable MD046 --> ```yaml url: https://argocd.example.com oidc.config: | name: Okta # this is the authorization server URI issuer: https://example.okta.com/oauth2/aus9abcdefgABCDEFGd7 clientID: 0oa9abcdefgh123AB5d7 cliClientID: gfedcba0987654321GEFDCBA # Optional if using the CLI for SSO clientSecret: ABCDEFG1234567890abcdefg requestedScopes: ["openid", "profile", "email", "groups"] requestedIDTokenClaims: {"groups": {"essential": true}} ``` You may want to store the `clientSecret` in a Kubernetes secret; see [how to deal with SSO secrets](./index.md/#sensitive-data-and-sso-client-secrets ) for more details.
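Following that link's guidance, one way to avoid the inline `clientSecret` is to store it under a key in the `argocd-secret` Secret and reference it by name. A minimal sketch, assuming you choose the key name `oidc.okta.clientSecret` and have already added the secret value under that key:

```yaml
oidc.config: |
  name: Okta
  issuer: https://example.okta.com/oauth2/aus9abcdefgABCDEFGd7
  clientID: 0oa9abcdefgh123AB5d7
  cliClientID: gfedcba0987654321GEFDCBA # Optional if using the CLI for SSO
  # Resolved from the oidc.okta.clientSecret key in the argocd-secret Secret
  clientSecret: $oidc.okta.clientSecret
  requestedScopes: ["openid", "profile", "email", "groups"]
  requestedIDTokenClaims: {"groups": {"essential": true}}
```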
# Overview Once installed Argo CD has one built-in `admin` user that has full access to the system. It is recommended to use `admin` user only for initial configuration and then switch to local users or configure SSO integration. ## Local users/accounts The local users/accounts feature serves two main use-cases: * Auth tokens for Argo CD management automation. It is possible to configure an API account with limited permissions and generate an authentication token. Such token can be used to automatically create applications, projects etc. * Additional users for a very small team where use of SSO integration might be considered an overkill. The local users don't provide advanced features such as groups, login history etc. So if you need such features it is strongly recommended to use SSO. !!! note When you create local users, each of those users will need additional [RBAC rules](../rbac.md) set up, otherwise they will fall back to the default policy specified by `policy.default` field of the `argocd-rbac-cm` ConfigMap. The maximum length of a local account's username is 32. ### Create new user New users should be defined in `argocd-cm` ConfigMap: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-cm namespace: argocd labels: app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: # add an additional local user with apiKey and login capabilities # apiKey - allows generating API keys # login - allows to login using UI accounts.alice: apiKey, login # disables user. User is enabled by default accounts.alice.enabled: "false" ``` Each user might have two capabilities: * apiKey - allows generating authentication tokens for API access * login - allows to login using UI ### Delete user In order to delete a user, you must remove the corresponding entry defined in the `argocd-cm` ConfigMap: Example: ```bash kubectl patch -n argocd cm argocd-cm --type='json' -p='[{"op": "remove", "path": "/data/accounts.alice"}]' ``` It is recommended to also remove the password entry in the `argocd-secret` Secret: Example: ```bash kubectl patch -n argocd secrets argocd-secret --type='json' -p='[{"op": "remove", "path": "/data/accounts.alice.password"}]' ``` ### Disable admin user As soon as additional users are created it is recommended to disable `admin` user: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-cm namespace: argocd labels: app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: admin.enabled: "false" ``` ### Manage users The Argo CD CLI provides set of commands to set user password and generate tokens. * Get full users list ```bash argocd account list ``` * Get specific user details ```bash argocd account get --account <username> ``` * Set user password ```bash # if you are managing users as the admin user, <current-user-password> should be the current admin password. argocd account update-password \ --account <name> \ --current-password <current-user-password> \ --new-password <new-user-password> ``` * Generate auth token ```bash # if flag --account is omitted then Argo CD generates token for current user argocd account generate-token --account <username> ``` ### Failed logins rate limiting Argo CD rejects login attempts after too many failed in order to prevent password brute-forcing. The following environments variables are available to control throttling settings: * `ARGOCD_SESSION_FAILURE_MAX_FAIL_COUNT`: Maximum number of failed logins before Argo CD starts rejecting login attempts. Default: 5. 
* `ARGOCD_SESSION_FAILURE_WINDOW_SECONDS`: Number of seconds for the failure window. Default: 300 (5 minutes). If this is set to 0, the failure window is disabled and the login attempts gets rejected after 10 consecutive logon failures, regardless of the time frame they happened. * `ARGOCD_SESSION_MAX_CACHE_SIZE`: Maximum number of entries allowed in the cache. Default: 1000 * `ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`: Limits max number of concurrent login requests. If set to 0 then limit is disabled. Default: 50. ## SSO There are two ways that SSO can be configured: * [Bundled Dex OIDC provider](#dex) - use this option if your current provider does not support OIDC (e.g. SAML, LDAP) or if you wish to leverage any of Dex's connector features (e.g. the ability to map GitHub organizations and teams to OIDC groups claims). Dex also supports OIDC directly and can fetch user information from the identity provider when the groups cannot be included in the IDToken. * [Existing OIDC provider](#existing-oidc-provider) - use this if you already have an OIDC provider which you are using (e.g. [Okta](okta.md), [OneLogin](onelogin.md), [Auth0](auth0.md), [Microsoft](microsoft.md), [Keycloak](keycloak.md), [Google (G Suite)](google.md)), where you manage your users, groups, and memberships. ## Dex Argo CD embeds and bundles [Dex](https://github.com/dexidp/dex) as part of its installation, for the purpose of delegating authentication to an external identity provider. Multiple types of identity providers are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of Argo CD requires editing the `argocd-cm` ConfigMap with [Dex connector](https://dexidp.io/docs/connectors/) settings. This document describes how to configure Argo CD SSO using GitHub (OAuth2) as an example, but the steps should be similar for other identity providers. ### 1. Register the application in the identity provider In GitHub, register a new application. The callback address should be the `/api/dex/callback` endpoint of your Argo CD URL (e.g. `https://argocd.example.com/api/dex/callback`). ![Register OAuth App](../../assets/register-app.png "Register OAuth App") After registering the app, you will receive an OAuth2 client ID and secret. These values will be inputted into the Argo CD configmap. ![OAuth2 Client Config](../../assets/oauth2-config.png "OAuth2 Client Config") ### 2. Configure Argo CD for SSO Edit the argocd-cm configmap: ```bash kubectl edit configmap argocd-cm -n argocd ``` * In the `url` key, input the base URL of Argo CD. In this example, it is `https://argocd.example.com` * (Optional): If Argo CD should be accessible via multiple base URLs you may specify any additional base URLs via the `additionalUrls` key. * In the `dex.config` key, add the `github` connector to the `connectors` sub field. See Dex's [GitHub connector](https://github.com/dexidp/website/blob/main/content/docs/connectors/github.md) documentation for explanation of the fields. A minimal config should populate the clientID, clientSecret generated in Step 1. * You will very likely want to restrict logins to one or more GitHub organization. In the `connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will then be able to login to Argo CD to perform management tasks. 
```yaml
data:
  url: https://argocd.example.com
  dex.config: |
    connectors:
      # GitHub example
      - type: github
        id: github
        name: GitHub
        config:
          clientID: aabbccddeeff00112233
          clientSecret: $dex.github.clientSecret # Alternatively $<some_K8S_secret>:dex.github.clientSecret
          orgs:
          - name: your-github-org

      # GitHub enterprise example
      - type: github
        id: acme-github
        name: Acme GitHub
        config:
          hostName: github.acme.example.com
          clientID: abcdefghijklmnopqrst
          clientSecret: $dex.acme.clientSecret # Alternatively $<some_K8S_secret>:dex.acme.clientSecret
          orgs:
          - name: your-github-org
```

After saving, the changes should take effect automatically.

NOTES:

* There is no need to set `redirectURI` in the `connectors.config` as shown in the dex documentation. Argo CD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the correct external callback URL (e.g. `https://argocd.example.com/api/dex/callback`)
* When using a custom secret (e.g., `some_K8S_secret` above), it *must* have the label `app.kubernetes.io/part-of: argocd`.

## OIDC Configuration with DEX

Dex can be used for OIDC authentication instead of ArgoCD directly. This provides a separate set of features such as fetching information from the `UserInfo` endpoint and [federated tokens](https://dexidp.io/docs/custom-scopes-claims-clients/#cross-client-trust-and-authorized-party).

### Configuration:

* In the `argocd-cm` ConfigMap add the `OIDC` connector to the `connectors` sub field inside `dex.config`. See Dex's [OIDC connect documentation](https://dexidp.io/docs/connectors/oidc/) to see what other configuration options might be useful. We're going to be using a minimal configuration here.
* The issuer URL should be where Dex talks to the OIDC provider. There would normally be a `.well-known/openid-configuration` under this URL which has information about what the provider supports. e.g. https://accounts.google.com/.well-known/openid-configuration

```yaml
data:
  url: "https://argocd.example.com"
  dex.config: |
    connectors:
      # OIDC
      - type: oidc
        id: oidc
        name: OIDC
        config:
          issuer: https://example-OIDC-provider.example.com
          clientID: aaaabbbbccccddddeee
          clientSecret: $dex.oidc.clientSecret
```

### Requesting additional ID token claims

By default Dex only retrieves the profile and email scopes. In order to retrieve more claims you can add them under the `scopes` entry in the Dex configuration. To enable group claims through Dex, `insecureEnableGroups` also needs to be enabled. Group information is currently only refreshed at authentication time, and support to refresh group information more dynamically can be tracked here: [dexidp/dex#1065](https://github.com/dexidp/dex/issues/1065).

```yaml
data:
  url: "https://argocd.example.com"
  dex.config: |
    connectors:
      # OIDC
      - type: oidc
        id: oidc
        name: OIDC
        config:
          issuer: https://example-OIDC-provider.example.com
          clientID: aaaabbbbccccddddeee
          clientSecret: $dex.oidc.clientSecret
          insecureEnableGroups: true
          scopes:
          - profile
          - email
          - groups
```

!!! warning
    Because group information is only refreshed at authentication time, just adding or removing an account from a group will not change a user's membership until they reauthenticate. Depending on your organization's needs this could be a security risk and could be mitigated by changing the authentication token's lifetime.

### Retrieving claims that are not in the token

When an IdP does not or cannot support certain claims in an IDToken, they can be retrieved separately using the UserInfo endpoint.
Dex supports this functionality using the `getUserInfo` endpoint. One of the most common claims that is not supported in the IDToken is the `groups` claim and both `getUserInfo` and `insecureEnableGroups` must be set to true. ```yaml data: url: "https://argocd.example.com" dex.config: | connectors: # OIDC - type: oidc id: oidc name: OIDC config: issuer: https://example-OIDC-provider.example.com clientID: aaaabbbbccccddddeee clientSecret: $dex.oidc.clientSecret insecureEnableGroups: true scopes: - profile - email - groups getUserInfo: true ``` ## Existing OIDC Provider To configure Argo CD to delegate authentication to your existing OIDC provider, add the OAuth2 configuration to the `argocd-cm` ConfigMap under the `oidc.config` key: ```yaml data: url: https://argocd.example.com oidc.config: | name: Okta issuer: https://dev-123456.oktapreview.com clientID: aaaabbbbccccddddeee clientSecret: $oidc.okta.clientSecret # Optional list of allowed aud claims. If omitted or empty, defaults to the clientID value above (and the # cliClientID, if that is also specified). If you specify a list and want the clientID to be allowed, you must # explicitly include it in the list. # Token verification will pass if any of the token's audiences matches any of the audiences in this list. allowedAudiences: - aaaabbbbccccddddeee - qqqqwwwweeeerrrrttt # Optional. If false, tokens without an audience will always fail validation. If true, tokens without an audience # will always pass validation. # Defaults to true for Argo CD < 2.6.0. Defaults to false for Argo CD >= 2.6.0. skipAudienceCheckWhenTokenHasNoAudience: true # Optional set of OIDC scopes to request. If omitted, defaults to: ["openid", "profile", "email", "groups"] requestedScopes: ["openid", "profile", "email", "groups"] # Optional set of OIDC claims to request on the ID token. requestedIDTokenClaims: {"groups": {"essential": true}} # Some OIDC providers require a separate clientID for different callback URLs. # For example, if configuring Argo CD with self-hosted Dex, you will need a separate client ID # for the 'localhost' (CLI) client to Dex. This field is optional. If omitted, the CLI will # use the same clientID as the Argo CD server cliClientID: vvvvwwwwxxxxyyyyzzzz # PKCE authentication flow processes authorization flow from browser only - default false # uses the clientID # make sure the Identity Provider (IdP) is public and doesn't need clientSecret # make sure the Identity Provider (IdP) has this redirect URI registered: https://argocd.example.com/pkce/verify enablePKCEAuthentication: true ``` !!! note The callback address should be the /auth/callback endpoint of your Argo CD URL (e.g. https://argocd.example.com/auth/callback). ### Requesting additional ID token claims Not all OIDC providers support a special `groups` scope. E.g. Okta, OneLogin and Microsoft do support a special `groups` scope and will return group membership with the default `requestedScopes`. Other OIDC providers might be able to return a claim with group membership if explicitly requested to do so. Individual claims can be requested with `requestedIDTokenClaims`, see [OpenID Connect Claims Parameter](https://connect2id.com/products/server/docs/guides/requesting-openid-claims#claims-parameter) for details. 
The Argo CD configuration for claims is as follows: ```yaml oidc.config: | requestedIDTokenClaims: email: essential: true groups: essential: true value: org:myorg acr: essential: true values: - urn:mace:incommon:iap:silver - urn:mace:incommon:iap:bronze ``` For a simple case this can be: ```yaml oidc.config: | requestedIDTokenClaims: {"groups": {"essential": true}} ``` ### Retrieving group claims when not in the token Some OIDC providers don't return the group information for a user in the ID token, even if explicitly requested using the `requestedIDTokenClaims` setting (Okta for example). They instead provide the groups on the user info endpoint. With the following config, Argo CD queries the user info endpoint during login for groups information of a user: ```yaml oidc.config: | enableUserInfoGroups: true userInfoPath: /userinfo userInfoCacheExpiration: "5m" ``` **Note: If you omit the `userInfoCacheExpiration` setting or if it's greater than the expiration of the ID token, the argocd-server will cache group information as long as the ID token is valid!** ### Configuring a custom logout URL for your OIDC provider Optionally, if your OIDC provider exposes a logout API and you wish to configure a custom logout URL for the purposes of invalidating any active session post logout, you can do so by specifying it as follows: ```yaml oidc.config: | name: example-OIDC-provider issuer: https://example-OIDC-provider.example.com clientID: xxxxxxxxx clientSecret: xxxxxxxxx requestedScopes: ["openid", "profile", "email", "groups"] requestedIDTokenClaims: {"groups": {"essential": true}} logoutURL: https://example-OIDC-provider.example.com/logout?id_token_hint= ``` By default, this would take the user to their OIDC provider's login page after logout. If you also wish to redirect the user back to Argo CD after logout, you can specify the logout URL as follows: ```yaml ... logoutURL: https://example-OIDC-provider.example.com/logout?id_token_hint=&post_logout_redirect_uri= ``` You are not required to specify a logoutRedirectURL as this is automatically generated by ArgoCD as your base ArgoCD url + Rootpath !!! note The post logout redirect URI may need to be whitelisted against your OIDC provider's client settings for ArgoCD. ### Configuring a custom root CA certificate for communicating with the OIDC provider If your OIDC provider is setup with a certificate which is not signed by one of the well known certificate authorities you can provide a custom certificate which will be used in verifying the OIDC provider's TLS certificate when communicating with it. Add a `rootCA` to your `oidc.config` which contains the PEM encoded root certificate: ```yaml oidc.config: | ... rootCA: | -----BEGIN CERTIFICATE----- ... encoded certificate data here ... -----END CERTIFICATE----- ``` ## SSO Further Reading ### Sensitive Data and SSO Client Secrets `argocd-secret` can be used to store sensitive data which can be referenced by ArgoCD. Values starting with `$` in configmaps are interpreted as follows: - If value has the form: `$<secret>:a.key.in.k8s.secret`, look for a k8s secret with the name `<secret>` (minus the `$`), and read its value. - Otherwise, look for a key in the k8s secret named `argocd-secret`. #### Example SSO `clientSecret` can thus be stored as a Kubernetes secret with the following manifests `argocd-secret`: ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-secret namespace: argocd labels: app.kubernetes.io/name: argocd-secret app.kubernetes.io/part-of: argocd type: Opaque data: ... 
  # The secret value must be base64 encoded **once**
  # this value corresponds to: `printf "hello-world" | base64`
  oidc.auth0.clientSecret: "aGVsbG8td29ybGQ="
  ...
```

`argocd-cm`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  ...
  oidc.config: |
    name: Auth0
    clientID: aabbccddeeff00112233
    # Reference key in argocd-secret
    clientSecret: $oidc.auth0.clientSecret
  ...
```

#### Alternative

If you want to store sensitive data in **another** Kubernetes `Secret` instead of `argocd-secret`, you can do so: whenever a value in a configmap or secret starts with `$` followed by a Kubernetes `Secret` name and a `:` (colon), ArgoCD looks up that `Secret` and reads the corresponding key under its `data` section.

Syntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`

> NOTE: Secret must have label `app.kubernetes.io/part-of: argocd`

##### Example

`another-secret`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: another-secret
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
type: Opaque
data:
  ...
  # Store client secret like below.
  # Ensure the secret is base64 encoded
  oidc.auth0.clientSecret: <client-secret-base64-encoded>
  ...
```

`argocd-cm`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  ...
  oidc.config: |
    name: Auth0
    clientID: aabbccddeeff00112233
    # Reference key in another-secret (and not argocd-secret)
    clientSecret: $another-secret:oidc.auth0.clientSecret # Mind the ':'
  ...
```

### Skipping certificate verification on OIDC provider connections

By default, all connections made by the API server to OIDC providers (either external providers or the bundled Dex instance) must pass certificate validation. These connections occur when getting the OIDC provider's well-known configuration, when getting the OIDC provider's keys, and when exchanging an authorization code or verifying an ID token as part of an OIDC login flow.

Disabling certificate verification might make sense if:

* You are using the bundled Dex instance **and** your Argo CD instance has TLS configured with a self-signed certificate **and** you understand and accept the risks of skipping OIDC provider cert verification.
* You are using an external OIDC provider **and** that provider uses an invalid certificate **and** you cannot solve the problem by setting `oidcConfig.rootCA` **and** you understand and accept the risks of skipping OIDC provider cert verification.

If either of those two applies, then you can disable OIDC provider certificate verification by setting `oidc.tls.insecure.skip.verify` to `"true"` in the `argocd-cm` ConfigMap.
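For illustration, a minimal sketch of what that `argocd-cm` change looks like (only the relevant key is shown; merge it into your existing ConfigMap data rather than replacing it, and only after accepting the risks described above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # Skips TLS certificate verification on all OIDC provider connections
  oidc.tls.insecure.skip.verify: "true"
```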
# Microsoft !!! note "" Entra ID was formerly known as Azure AD. * [Entra ID SAML Enterprise App Auth using Dex](#entra-id-saml-enterprise-app-auth-using-dex) * [Entra ID App Registration Auth using OIDC](#entra-id-app-registration-auth-using-oidc) * [Entra ID App Registration Auth using Dex](#entra-id-app-registration-auth-using-dex) ## Entra ID SAML Enterprise App Auth using Dex ### Configure a new Entra ID Enterprise App 1. From the `Microsoft Entra ID` > `Enterprise applications` menu, choose `+ New application` 2. Select `Non-gallery application` 3. Enter a `Name` for the application (e.g. `Argo CD`), then choose `Add` 4. Once the application is created, open it from the `Enterprise applications` menu. 5. From the `Users and groups` menu of the app, add any users or groups requiring access to the service. ![Azure Enterprise SAML Users](../../assets/azure-enterprise-users.png "Azure Enterprise SAML Users") 6. From the `Single sign-on` menu, edit the `Basic SAML Configuration` section as follows (replacing `my-argo-cd-url` with your Argo URL): - **Identifier (Entity ID):** https://`<my-argo-cd-url>`/api/dex/callback - **Reply URL (Assertion Consumer Service URL):** https://`<my-argo-cd-url>`/api/dex/callback - **Sign on URL:** https://`<my-argo-cd-url>`/auth/login - **Relay State:** `<empty>` - **Logout Url:** `<empty>` ![Azure Enterprise SAML URLs](../../assets/azure-enterprise-saml-urls.png "Azure Enterprise SAML URLs") 7. From the `Single sign-on` menu, edit the `User Attributes & Claims` section to create the following claims: - `+ Add new claim` | **Name:** email | **Source:** Attribute | **Source attribute:** user.mail - `+ Add group claim` | **Which groups:** All groups | **Source attribute:** Group ID | **Customize:** True | **Name:** Group | **Namespace:** `<empty>` | **Emit groups as role claims:** False - *Note: The `Unique User Identifier` required claim can be left as the default `user.userprincipalname`* ![Azure Enterprise SAML Claims](../../assets/azure-enterprise-claims.png "Azure Enterprise SAML Claims") 8. From the `Single sign-on` menu, download the SAML Signing Certificate (Base64) - Base64 encode the contents of the downloaded certificate file, for example: - `$ cat ArgoCD.cer | base64` - *Keep a copy of the encoded output to be used in the next section.* 9. From the `Single sign-on` menu, copy the `Login URL` parameter, to be used in the next section. ### Configure Argo to use the new Entra ID Enterprise App 1. Edit `argocd-cm` and add the following `dex.config` to the data section, replacing the `caData`, `my-argo-cd-url` and `my-login-url` your values from the Entra ID App: data: url: https://my-argo-cd-url dex.config: | logger: level: debug format: json connectors: - type: saml id: saml name: saml config: entityIssuer: https://my-argo-cd-url/api/dex/callback ssoURL: https://my-login-url (e.g. https://login.microsoftonline.com/xxxxx/a/saml2) caData: | MY-BASE64-ENCODED-CERTIFICATE-DATA redirectURI: https://my-argo-cd-url/api/dex/callback usernameAttr: email emailAttr: email groupsAttr: Group 2. Edit `argocd-rbac-cm` to configure permissions, similar to example below. - Use Entra ID `Group IDs` for assigning roles. - See [RBAC Configurations](../rbac.md) for more detailed scenarios. 
# example policy policy.default: role:readonly policy.csv: | p, role:org-admin, applications, *, */*, allow p, role:org-admin, clusters, get, *, allow p, role:org-admin, repositories, get, *, allow p, role:org-admin, repositories, create, *, allow p, role:org-admin, repositories, update, *, allow p, role:org-admin, repositories, delete, *, allow g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin # (azure group assigned to role) ## Entra ID App Registration Auth using OIDC ### Configure a new Entra ID App registration #### Add a new Entra ID App registration 1. From the `Microsoft Entra ID` > `App registrations` menu, choose `+ New registration` 2. Enter a `Name` for the application (e.g. `Argo CD`). 3. Specify who can use the application (e.g. `Accounts in this organizational directory only`). 4. Enter Redirect URI (optional) as follows (replacing `my-argo-cd-url` with your Argo URL), then choose `Add`. - **Platform:** `Web` - **Redirect URI:** https://`<my-argo-cd-url>`/auth/callback 5. When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. ![Azure App registration's Overview](../../assets/azure-app-registration-overview.png "Azure App registration's Overview") #### Configure additional platform settings for ArgoCD CLI 1. In the Azure portal, in App registrations, select your application. 2. Under Manage, select Authentication. 3. Under Platform configurations, select Add a platform. 4. Under Configure platforms, select the "Mobile and desktop applications" tile. Use the below value. You shouldn't change it. - **Redirect URI:** `http://localhost:8085/auth/callback` ![Azure App registration's Authentication](../../assets/azure-app-registration-authentication.png "Azure App registration's Authentication") #### Add credentials a new Entra ID App registration 1. From the `Certificates & secrets` menu, choose `+ New client secret` 2. Enter a `Name` for the secret (e.g. `ArgoCD-SSO`). - Make sure to copy and save generated value. This is a value for the `client_secret`. ![Azure App registration's Secret](../../assets/azure-app-registration-secret.png "Azure App registration's Secret") #### Setup permissions for Entra ID Application 1. From the `API permissions` menu, choose `+ Add a permission` 2. Find `User.Read` permission (under `Microsoft Graph`) and grant it to the created application: ![Entra ID API permissions](../../assets/azure-api-permissions.png "Entra ID API permissions") 3. From the `Token Configuration` menu, choose `+ Add groups claim` ![Entra ID token configuration](../../assets/azure-token-configuration.png "Entra ID token configuration") ### Associate an Entra ID group to your Entra ID App registration 1. From the `Microsoft Entra ID` > `Enterprise applications` menu, search the App that you created (e.g. `Argo CD`). - An Enterprise application with the same name of the Entra ID App registration is created when you add a new Entra ID App registration. 2. From the `Users and groups` menu of the app, add any users or groups requiring access to the service. ![Azure Enterprise SAML Users](../../assets/azure-enterprise-users.png "Azure Enterprise SAML Users") ### Configure Argo to use the new Entra ID App registration 1. 
Edit `argocd-cm` and configure the `data.oidc.config` and `data.url` section: ConfigMap -> argocd-cm data: url: https://argocd.example.com/ # Replace with the external base URL of your Argo CD oidc.config: | name: Azure issuer: https://login.microsoftonline.com/{directory_tenant_id}/v2.0 clientID: {azure_ad_application_client_id} clientSecret: $oidc.azure.clientSecret requestedIDTokenClaims: groups: essential: true value: "SecurityGroup" requestedScopes: - openid - profile - email 2. Edit `argocd-secret` and configure the `data.oidc.azure.clientSecret` section: Secret -> argocd-secret data: oidc.azure.clientSecret: {client_secret | base64_encoded} 3. Edit `argocd-rbac-cm` to configure permissions. Use group ID from Azure for assigning roles [RBAC Configurations](../rbac.md) ConfigMap -> argocd-rbac-cm policy.default: role:readonly policy.csv: | p, role:org-admin, applications, *, */*, allow p, role:org-admin, clusters, get, *, allow p, role:org-admin, repositories, get, *, allow p, role:org-admin, repositories, create, *, allow p, role:org-admin, repositories, update, *, allow p, role:org-admin, repositories, delete, *, allow g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin 4. Mapping role from jwt token to argo. If you want to map the roles from the jwt token to match the default roles (readonly and admin) then you must change the scope variable in the rbac-configmap. policy.default: role:readonly policy.csv: | p, role:org-admin, applications, *, */*, allow p, role:org-admin, clusters, get, *, allow p, role:org-admin, repositories, get, *, allow p, role:org-admin, repositories, create, *, allow p, role:org-admin, repositories, update, *, allow p, role:org-admin, repositories, delete, *, allow g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin scopes: '[groups, email]' Refer to [operator-manual/argocd-rbac-cm.yaml](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-rbac-cm.yaml) for all of the available variables. ## Entra ID App Registration Auth using Dex Configure a new AD App Registration, as above. Then, add the `dex.config` to `argocd-cm`: ```yaml ConfigMap -> argocd-cm data: dex.config: | connectors: - type: microsoft id: microsoft name: Your Company GmbH config: clientID: $MICROSOFT_APPLICATION_ID clientSecret: $MICROSOFT_CLIENT_SECRET redirectURI: http://localhost:8080/api/dex/callback tenant: ffffffff-ffff-ffff-ffff-ffffffffffff groups: - DevOps ``` ## Validation ### Log in to ArgoCD UI using SSO 1. Open a new browser tab and enter your ArgoCD URI: https://`<my-argo-cd-url>` ![Azure SSO Web Log In](../../assets/azure-sso-web-log-in-via-azure.png "Azure SSO Web Log In") 3. Click `LOGIN VIA AZURE` button to log in with your Microsoft Entra ID account. You’ll see the ArgoCD applications screen. ![Azure SSO Web Application](../../assets/azure-sso-web-application.png "Azure SSO Web Application") 4. Navigate to User Info and verify Group ID. Groups will have your group’s Object ID that you added in the `Setup permissions for Entra ID Application` step. ![Azure SSO Web User Info](../../assets/azure-sso-web-user-info.png "Azure SSO Web User Info") ### Log in to ArgoCD using CLI 1. Open terminal, execute the below command. argocd login <my-argo-cd-url> --grpc-web-root-path / --sso 2. You will see the below message after entering your credentials from the browser. ![Azure SSO CLI Log In](../../assets/azure-sso-cli-log-in-success.png "Azure SSO CLI Log In") 3. Your terminal output will be similar as below. 
        WARNING: server certificate had error: x509: certificate is valid for ingress.local, not my-argo-cd-url. Proceed insecurely (y/n)? y
        Opening browser for authentication
        INFO[0003] RequestedClaims: map[groups:essential:true ]
        Performing authorization_code flow login: https://login.microsoftonline.com/XXXXXXXXXXXXX/oauth2/v2.0/authorize?access_type=offline&claims=%7B%22id_token%22%3A%7B%22groups%22%3A%7B%22essential%22%3Atrue%7D%7D%7D&client_id=XXXXXXXXXXXXX&code_challenge=XXXXXXXXXXXXX&code_challenge_method=S256&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2Fauth%2Fcallback&response_type=code&scope=openid+profile+email+offline_access&state=XXXXXXXX
        Authentication successful
        '[email protected]' logged in successfully
        Context 'my-argo-cd-url' updated

You may get a warning if you are not using correctly signed certs. Refer to [Why Am I Getting x509: certificate signed by unknown authority When Using The CLI?](https://argo-cd.readthedocs.io/en/stable/faq/#why-am-i-getting-x509-certificate-signed-by-unknown-authority-when-using-the-cli).
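As a closing reference for step 2 of the OIDC configuration above, the `argocd-secret` change can also be written as a full manifest. A minimal sketch (the value shown is a placeholder, not a real credential; substitute your own base64-encoded Entra ID client secret):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
type: Opaque
data:
  # printf "<your Entra ID client secret>" | base64
  oidc.azure.clientSecret: <client-secret-base64-encoded>
```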
# Google

There are three different ways to integrate Argo CD login with your Google Workspace users. Generally the OpenID Connect (_oidc_) method would be the recommended way of doing this integration (and easier, as well...), but depending on your needs, you may choose a different option.

* [OpenID Connect using Dex](#openid-connect-using-dex)

    This is the recommended login method if you don't need information about the groups the user belongs to. Google doesn't expose the `groups` claim via _oidc_, so you won't be able to use Google Groups membership information for RBAC.

* [SAML App Auth using Dex](#saml-app-auth-using-dex)

    Dex [recommends avoiding this method](https://dexidp.io/docs/connectors/saml/#warning). Also, you won't get Google Groups membership information through this method.

* [OpenID Connect plus Google Groups using Dex](#openid-connect-plus-google-groups-using-dex)

    This is the recommended method if you need to use Google Groups membership in your RBAC configuration.

Once you've set up one of the above integrations, be sure to edit `argocd-rbac-cm` to configure permissions (as in the example below). See [RBAC Configurations](../rbac.md) for more detailed scenarios.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
```

## OpenID Connect using Dex

### Configure your OAuth consent screen

If you've never configured this, you'll be redirected straight here if you try to create an OAuth Client ID.

1. Go to your [OAuth Consent](https://console.cloud.google.com/apis/credentials/consent) configuration. If you still haven't created one, select `Internal` or `External` and click `Create`
2. Go and [edit your OAuth consent screen](https://console.cloud.google.com/apis/credentials/consent/edit). Verify you're in the correct project!
3. Configure a name for your login app and a user support email address
4. The app logo and the information links are not mandatory, but they're a nice touch for the login page
5. In "Authorized domains" add the domains that are allowed to log in to ArgoCD (e.g. if you add `example.com`, all Google Workspace users with an `@example.com` address will be able to log in)
6. Save to continue to the "Scopes" section
7. Click on "Add or remove scopes" and add the `.../auth/userinfo.profile` and the `openid` scopes
8. Save, review the summary of your changes and finish

### Configure a new OAuth Client ID

1. Go to your [Google API Credentials](https://console.cloud.google.com/apis/credentials) console, and make sure you're in the correct project.
2. Click on "+Create Credentials"/"OAuth Client ID"
3. Select "Web Application" in the Application Type drop down menu, and enter an identifying name for your app (e.g. `Argo CD`)
4. Fill "Authorized JavaScript origins" with your Argo CD URL, e.g. `https://argocd.example.com`
5. Fill "Authorized redirect URIs" with your Argo CD URL plus `/api/dex/callback`, e.g. `https://argocd.example.com/api/dex/callback`

    ![](../../assets/google-admin-oidc-uris.png)

6.
Click "Create" and save your "Client ID" and your "Client Secret" for later ### Configure Argo to use OpenID Connect Edit `argocd-cm` and add the following `dex.config` to the data section, replacing `clientID` and `clientSecret` with the values you saved before: ```yaml data: url: https://argocd.example.com dex.config: | connectors: - config: issuer: https://accounts.google.com clientID: XXXXXXXXXXXXX.apps.googleusercontent.com clientSecret: XXXXXXXXXXXXX type: oidc id: google name: Google ``` ### References - [Dex oidc connector docs](https://dexidp.io/docs/connectors/oidc/) ## SAML App Auth using Dex ### Configure a new SAML App --- !!! warning "Deprecation Warning" Note that, according to [Dex documentation](https://dexidp.io/docs/connectors/saml/#warning), SAML is considered unsafe and they are planning to deprecate that module. --- 1. In the [Google admin console](https://admin.google.com), open the left-side menu and select `Apps` > `SAML Apps` ![Google Admin Apps Menu](../../assets/google-admin-saml-apps-menu.png "Google Admin menu with the Apps / SAML Apps path selected") 2. Under `Add App` select `Add custom SAML app` ![Google Admin Add Custom SAML App](../../assets/google-admin-saml-add-app-menu.png "Add apps menu with add custom SAML app highlighted") 3. Enter a `Name` for the application (e.g. `Argo CD`), then choose `Continue` ![Google Admin Apps Menu](../../assets/google-admin-saml-app-details.png "Add apps menu with add custom SAML app highlighted") 4. Download the metadata or copy the `SSO URL`, `Certificate`, and optionally `Entity ID` from the identity provider details for use in the next section. Choose `continue`. - Base64 encode the contents of the certificate file, for example: - `$ cat ArgoCD.cer | base64` - *Keep a copy of the encoded output to be used in the next section.* - *Ensure that the certificate is in PEM format before base64 encoding* ![Google Admin IdP Metadata](../../assets/google-admin-idp-metadata.png "A screenshot of the Google IdP metadata") 5. For both the `ACS URL` and `Entity ID`, use your Argo Dex Callback URL, for example: `https://argocd.example.com/api/dex/callback` ![Google Admin Service Provider Details](../../assets/google-admin-service-provider-details.png "A screenshot of the Google Service Provider Details") 6. Add SAML Attribute Mapping, Map `Primary email` to `name` and `Primary Email` to `email`. and click `ADD MAPPING` button. ![Google Admin SAML Attribute Mapping Details](../../assets/google-admin-saml-attribute-mapping-details.png "A screenshot of the Google Admin SAML Attribute Mapping Details") 7. Finish creating the application. ### Configure Argo to use the new Google SAML App Edit `argocd-cm` and add the following `dex.config` to the data section, replacing the `caData`, `argocd.example.com`, `sso-url`, and optionally `google-entity-id` with your values from the Google SAML App: ```yaml data: url: https://argocd.example.com dex.config: | connectors: - type: saml id: saml name: saml config: ssoURL: https://sso-url (e.g. https://accounts.google.com/o/saml2/idp?idpid=Abcde0) entityIssuer: https://argocd.example.com/api/dex/callback caData: | BASE64-ENCODED-CERTIFICATE-DATA redirectURI: https://argocd.example.com/api/dex/callback usernameAttr: name emailAttr: email # optional ssoIssuer: https://google-entity-id (e.g. 
https://accounts.google.com/o/saml2?idpid=Abcde0) ``` ### References - [Dex SAML connector docs](https://dexidp.io/docs/connectors/saml/) - [Google's SAML error messages](https://support.google.com/a/answer/6301076?hl=en) ## OpenID Connect plus Google Groups using Dex We're going to use Dex's `google` connector to get additional Google Groups information from your users, allowing you to use group membership on your RBAC, i.e., giving `admin` role to the whole `[email protected]` group. This connector uses two different credentials: - An oidc client ID and secret Same as when you're configuring an [OpenID connection](#openid-connect-using-dex), this authenticates your users - A Google service account This is used to connect to the Google Directory API and pull information about your user's group membership Also, you'll need the email address for an admin user on this domain. Dex will impersonate that user identity to fetch user information from the API. ### Configure OpenID Connect Go through the same steps as in [OpenID Connect using Dex](#openid-connect-using-dex), except for configuring `argocd-cm`. We'll do that later. ### Set up Directory API access 1. Follow [Google instructions to create a service account with Domain-Wide Delegation](https://developers.google.com/admin-sdk/directory/v1/guides/delegation) - When assigning API scopes to the service account assign **only** the `https://www.googleapis.com/auth/admin.directory.group.readonly` scope and nothing else. If you assign any other scopes, you won't be able to fetch information from the API - Create the credentials in JSON format and store them in a safe place, we'll need them later 2. Enable the [Admin SDK](https://console.developers.google.com/apis/library/admin.googleapis.com/) ### Configure Dex 1. Create a secret with the contents of the previous json file encoded in base64, like this: apiVersion: v1 kind: Secret metadata: name: argocd-google-groups-json namespace: argocd data: googleAuth.json: JSON_FILE_BASE64_ENCODED 2. Edit your `argocd-dex-server` deployment to mount that secret as a file - Add a volume mount in `/spec/template/spec/containers/0/volumeMounts/` like this. Be aware of editing the running container and not the init container! volumeMounts: - mountPath: /shared name: static-files - mountPath: /tmp name: dexconfig - mountPath: /tmp/oidc name: google-json readOnly: true - Add a volume in `/spec/template/spec/volumes/` like this: volumes: - emptyDir: {} name: static-files - emptyDir: {} name: dexconfig - name: google-json secret: defaultMode: 420 secretName: argocd-google-groups-json 3. Edit `argocd-cm` and add the following `dex.config` to the data section, replacing `clientID` and `clientSecret` with the values you saved before, `adminEmail` with the address for the admin user you're going to impersonate, and editing `redirectURI` with your Argo CD domain (note that the `type` is now `google` instead of `oidc`): dex.config: | connectors: - config: redirectURI: https://argocd.example.com/api/dex/callback clientID: XXXXXXXXXXXXX.apps.googleusercontent.com clientSecret: XXXXXXXXXXXXX serviceAccountFilePath: /tmp/oidc/googleAuth.json adminEmail: [email protected] type: google id: google name: Google 4. Restart your `argocd-dex-server` deployment to be sure it's using the latest configuration 5. Login to Argo CD and go to the "User info" section, were you should see the groups you're member ![User info](../../assets/google-groups-membership.png) 6. Now you can use groups email addresses to give RBAC permissions 7. 
Dex (> v2.31.0) can also be configured to fetch transitive group membership as follows:

        dex.config: |
          connectors:
          - config:
              redirectURI: https://argocd.example.com/api/dex/callback
              clientID: XXXXXXXXXXXXX.apps.googleusercontent.com
              clientSecret: XXXXXXXXXXXXX
              serviceAccountFilePath: /tmp/oidc/googleAuth.json
              adminEmail: [email protected]
              fetchTransitiveGroupMembership: True
            type: google
            id: google
            name: Google

### References

- [Dex Google connector docs](https://dexidp.io/docs/connectors/google/)
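Following on from step 6 above, group email addresses can be referenced directly in `argocd-rbac-cm`. A minimal sketch, assuming a hypothetical Google Group `[email protected]` and the built-in `role:admin`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Grant the built-in admin role to every member of this Google Group
    g, [email protected], role:admin
```

Use one of the group addresses shown on your "User info" page; membership (including transitive membership, if enabled) is resolved by the Dex `google` connector at login time.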
# Zitadel Please also consult the [Zitadel Documentation](https://zitadel.com/docs). ## Integrating Zitadel and ArgoCD These instructions will take you through the entire process of getting your ArgoCD application authenticating and authorizing with Zitadel. You will create an application within Zitadel and configure ArgoCD to use Zitadel for authentication using roles set in Zitadel to determine privileges in ArgoCD. The following steps are required to integrate ArgoCD with Zitadel: 1. Create a new project and a new application in Zitadel 2. Configure the application in Zitadel 3. Set up roles in Zitadel 4. Set up an action in Zitadel 5. Configure ArgoCD configmaps 6. Test the setup The following values will be used in this example: - Zitadel FQDN: `auth.example.com` - Zitadel Project: `argocd-project` - Zitadel Application: `argocd-application` - Zitadel Action: `groupsClaim` - ArgoCD FQDN: `argocd.example.com` - ArgoCD Administrator Role: `argocd_administrators` - ArgoCD User Role: `argocd_users` You may choose different values in your setup; these are used to keep the guide consistent. ## Setting up your project and application in Zitadel First, we will create a new project within Zitadel. Go to **Projects** and select **Create New Project**. You should now see the following screen. ![Zitadel Project](../../assets/zitadel-project.png "Zitadel Project") Check the following options: - Assert Roles on Authentication - Check authorization on Authentication ![Zitadel Project Settings](../../assets/zitadel-project-settings.png "Zitadel Project Settings") ### Roles Go to **Roles** and click **New**. Create the following two roles. Use the specified values below for both fields **Key** and **Group**. - `argocd_administrators` - `argocd_users` Your roles should now look like this: ![Zitadel Project Roles](../../assets/zitadel-project-roles.png "Zitadel Project Roles") ### Authorizations Next, go to **Authorizations** and assign your user the role `argocd_administrators`. Click **New**, enter the name of your user and click **Continue**. Select the role `argocd_administrators` and click **Save**. Your authorizations should now look like this: ![Zitadel Project Authorizations](../../assets/zitadel-project-authorizations.png "Zitadel Project Authorizations") ### Creating an application Go to **General** and create a new application. Name the application `argocd-application`. For type of the application, select **WEB** and click continue. ![Zitadel Application Setup Step 1](../../assets/zitadel-application-1.png "Zitadel Application Setup Step 1") Select **CODE** and continue. ![Zitadel Application Setup Step 2](../../assets/zitadel-application-2.png "Zitadel Application Setup Step 2") Next, we will set up the redirect and post-logout URIs. Set the following values: - Redirect URI: `https://argocd.example.com/auth/callback` - Post Logout URI: `https://argocd.example.com` The post logout URI is optional. In the example setup users will be taken back to the ArgoCD login page after logging out. ![Zitadel Application Setup Step 3](../../assets/zitadel-application-3.png "Zitadel Application Setup Step 3") Verify your configuration on the next screen and click **Create** to create the application. ![Zitadel Application Setup Step 4](../../assets/zitadel-application-4.png "Zitadel Application Setup Step 4") After clicking **Create** you will be shown the `ClientId` and the `ClientSecret` for your application. 
Make sure to copy the ClientSecret as you will not be able to retrieve it after closing this window. For our example, the following values are used: - ClientId: `227060711795262483@argocd-project` - ClientSecret: `UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0` ![Zitadel Application Secrets](../../assets/zitadel-application-secrets.png "Zitadel Application Secrets") Once you have saved the ClientSecret in a safe place, click **Close** to complete creating the application. Go to **Token Settings** and enable the following options: - User roles inside ID Token - User Info inside ID Token ![Zitadel Application Settings](../../assets/zitadel-application-settings.png "Zitadel Application Settings") ## Setting up an action in Zitadel To include the role of the user in the token issued by Zitadel, we will need to set up a Zitadel Action. The authorization in ArgoCD will be determined by the role contained within the auth token. Go to **Actions**, click **New** and choose `groupsClaim` as the name of your action. Paste the following code into the action: ```javascript /** * sets the roles an additional claim in the token with roles as value an project as key * * The role claims of the token look like the following: * * // added by the code below * "groups": ["{roleName}", "{roleName}", ...], * * Flow: Complement token, Triggers: Pre Userinfo creation, Pre access token creation * * @param ctx * @param api */ function groupsClaim(ctx, api) { if (ctx.v1.user.grants === undefined || ctx.v1.user.grants.count == 0) { return; } let grants = []; ctx.v1.user.grants.grants.forEach((claim) => { claim.roles.forEach((role) => { grants.push(role); }); }); api.v1.claims.setClaim("groups", grants); } ``` Check **Allowed To Fail** and click **Add** to add your action. *Note: If **Allowed To Fail** is not checked and a user does not have a role assigned, it may be possible that the user is no longer able to log in to Zitadel as the login flow fails when the action fails.* Next, add your action to the **Complement Token** flow. Select the **Complement Token** flow from the dropdown and click **Add trigger**. Add your action to both triggers **Pre Userinfo creation** and **Pre access token creation**. Your Actions page should now look like the following screenshot: ![Zitadel Actions](../../assets/zitadel-actions.png "Zitadel Actions") ## Configuring the ArgoCD configmaps Next, we will configure two ArgoCD configmaps: - [argocd-cm.yaml](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-cm.yaml) - [argocd-rbac-cm.yaml](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-rbac-cm.yaml) Configure your configmaps as follows while making sure to replace the relevant values such as `url`, `issuer`, `clientID`, `clientSecret` and `logoutURL` with ones matching your setup. 
### argocd-cm.yaml ```yaml --- apiVersion: v1 kind: ConfigMap metadata: name: argocd-cm namespace: argocd labels: app.kubernetes.io/part-of: argocd data: admin.enabled: "false" url: https://argocd.example.com oidc.config: | name: Zitadel issuer: https://auth.example.com clientID: 227060711795262483@argocd-project clientSecret: UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0 requestedScopes: - openid - profile - email - groups logoutURL: https://auth.example.com/oidc/v1/end_session ``` ### argocd-rbac-cm.yaml ```yaml --- apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm namespace: argocd labels: app.kubernetes.io/part-of: argocd data: scopes: '[groups]' policy.csv: | g, argocd_administrators, role:admin g, argocd_users, role:readonly policy.default: '' ``` The roles specified under `policy.csv` must match the roles configured in Zitadel. The Zitadel role `argocd_administrators` will be assigned the ArgoCD role `admin` granting admin access to ArgoCD. The Zitadel role `argocd_users` will be assigned the ArgoCD role `readonly` granting read-only access to ArgoCD. Deploy your ArgoCD configmaps. ArgoCD and Zitadel should now be set up correctly to allow users to log in to ArgoCD using Zitadel. ## Testing the setup Go to your ArgoCD instance. You should now see the **LOG IN WITH ZITADEL** button above the usual username/password login. ![Zitadel ArgoCD Login](../../assets/zitadel-argocd-login.png "Zitadel ArgoCD Login") After logging in with your Zitadel user go to **User Info**. If everything is set up correctly you should now see the group `argocd_administrators` as shown below. ![Zitadel ArgoCD User Info](../../assets/zitadel-argocd-user-info.png "Zitadel ArgoCD User Info")
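Optionally, you can keep the `clientSecret` out of `argocd-cm`: Argo CD allows `oidc.config` to reference keys stored in the `argocd-secret` Secret using the `$` prefix (for example `clientSecret: $oidc.zitadel.clientSecret`). A minimal sketch, assuming the example key name `oidc.zitadel.clientSecret`; replace the placeholder value with your real client secret:

```bash
# Store the Zitadel client secret in argocd-secret under an example key name.
# argocd-cm can then reference it as: clientSecret: $oidc.zitadel.clientSecret
kubectl -n argocd patch secret argocd-secret --type merge \
  -p '{"stringData": {"oidc.zitadel.clientSecret": "<your-client-secret>"}}'
```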
The notification template is used to generate the notification content and is configured in the `argocd-notifications-cm` ConfigMap. Templates leverage the [html/template](https://golang.org/pkg/html/template/) Golang package and allow customization of the notification message.
Templates are meant to be reusable and can be referenced by multiple triggers.

The following template is used to notify the user about application sync status.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  template.my-custom-template-slack-template: |
    message: |
      Application {{.app.metadata.name}} sync is {{.app.status.sync.status}}.
      Application details: {{.context.argocdUrl}}/applications/{{.app.metadata.name}}.
```

Each template has access to the following fields:

- `app` holds the application object.
- `context` is a user-defined string map and might include any string keys and values.
- `secrets` provides access to sensitive data stored in `argocd-notifications-secret`.
- `serviceType` holds the notification service type name (such as "slack" or "email"). The field can be used to conditionally render service-specific fields.
- `recipient` holds the recipient name.

## Defining user-defined `context`

It is possible to define some shared context between all notification templates by setting a top-level YAML document of key-value pairs, which can then be used within templates, like so:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  context: |
    region: east
    environmentName: staging

  template.a-slack-template-with-context: |
    message: "Something happened in {{ .context.environmentName }} in the {{ .context.region }} data center!"
```

## Defining and using secrets within notification templates

Some notification service use cases will require the use of secrets within templates. This can be achieved with the use of the `secrets` data variable available within the templates.

Given that we have the following `argocd-notifications-secret`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
stringData:
  sampleWebhookToken: secret-token
type: Opaque
```

We can use the defined `sampleWebhookToken` in a template as such:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  template.trigger-webhook: |
      webhook:
        sample-webhook:
          method: POST
          path: 'webhook/endpoint/with/auth'
          body: 'token={{ .secrets.sampleWebhookToken }}&variables[APP_SOURCE_PATH]={{ .app.spec.source.path }}'
```

## Notification Service Specific Fields

The `message` field of the template definition allows creating a basic notification for any notification service. You can leverage notification service-specific fields to create complex notifications. For example, using service-specific fields you can add blocks and attachments for Slack, a subject for Email, or a URL path and body for Webhook. See the corresponding service [documentation](services/overview.md) for more information.

## Change the timezone

You can change the timezone shown in notifications as follows.

1. Call time functions.

    ```
    {{ (call .time.Parse .app.status.operationState.finishedAt).Local.Format "2006-01-02T15:04:05Z07:00" }}
    ```

2. Set the `TZ` environment variable on the argocd-notifications-controller container.

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: argocd-notifications-controller
    spec:
      template:
        spec:
          containers:
          - name: argocd-notifications-controller
            env:
            - name: TZ
              value: Asia/Tokyo
    ```

## Functions

Templates have access to the set of built-in functions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  template.my-custom-template-slack-template: |
    message: "Author: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}"
```

{!docs/operator-manual/notifications/functions.md!}
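Templates can be previewed from the command line before wiring them into a trigger. The sketch below uses the `argocd admin notifications template notify` command (see the CLI reference in this documentation) to render the example template above against an application named `guestbook` and print the result to the console instead of sending it:

```bash
# Render the example template for the "guestbook" application and print the
# generated notification to stdout rather than sending it to a real service.
argocd admin notifications template notify \
  my-custom-template-slack-template guestbook \
  --recipient console:stdout
```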
## argocd admin notifications template get Prints information about configured templates ``` argocd admin notifications template get [flags] ``` ### Examples ``` # prints all templates argocd admin notifications template get # print YAML formatted app-sync-succeeded template definition argocd admin notifications template get app-sync-succeeded -o=yaml ``` ### Options ``` -h, --help help for get -o, --output string Output format. One of:json|yaml|wide|name (default "wide") ``` ### Options inherited from parent commands ``` --argocd-repo-server string Argo CD repo server address (default "argocd-repo-server:8081") --argocd-repo-server-plaintext Use a plaintext client (non-TLS) to connect to repository server --argocd-repo-server-strict-tls Perform strict validation of TLS certificates when connecting to repo server --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --config-map string argocd-notifications-cm.yaml file path --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to a kube config. Only required if out-of-cluster -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0") --secret string argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty' --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. 
--token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` ## argocd admin notifications template notify Generates notification using the specified template and send it to specified recipients ``` argocd admin notifications template notify NAME RESOURCE_NAME [flags] ``` ### Examples ``` # Trigger notification using in-cluster config map and secret argocd admin notifications template notify app-sync-succeeded guestbook --recipient slack:my-slack-channel # Render notification render generated notification in console argocd admin notifications template notify app-sync-succeeded guestbook ``` ### Options ``` -h, --help help for notify --recipient stringArray List of recipients (default [console:stdout]) ``` ### Options inherited from parent commands ``` --argocd-repo-server string Argo CD repo server address (default "argocd-repo-server:8081") --argocd-repo-server-plaintext Use a plaintext client (non-TLS) to connect to repository server --argocd-repo-server-strict-tls Perform strict validation of TLS certificates when connecting to repo server --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --config-map string argocd-notifications-cm.yaml file path --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to a kube config. Only required if out-of-cluster -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0") --secret string argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty' --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` ## argocd admin notifications trigger get Prints information about configured triggers ``` argocd admin notifications trigger get [flags] ``` ### Examples ``` # prints all triggers argocd admin notifications trigger get # print YAML formatted on-sync-failed trigger definition argocd admin notifications trigger get on-sync-failed -o=yaml ``` ### Options ``` -h, --help help for get -o, --output string Output format. 
One of:json|yaml|wide|name (default "wide") ``` ### Options inherited from parent commands ``` --argocd-repo-server string Argo CD repo server address (default "argocd-repo-server:8081") --argocd-repo-server-plaintext Use a plaintext client (non-TLS) to connect to repository server --argocd-repo-server-strict-tls Perform strict validation of TLS certificates when connecting to repo server --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --config-map string argocd-notifications-cm.yaml file path --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to a kube config. Only required if out-of-cluster -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0") --secret string argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty' --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` ## argocd admin notifications trigger run Evaluates specified trigger condition and prints the result ``` argocd admin notifications trigger run NAME RESOURCE_NAME [flags] ``` ### Examples ``` # Execute trigger configured in 'argocd-notification-cm' ConfigMap argocd admin notifications trigger run on-sync-status-unknown ./sample-app.yaml # Execute trigger using my-config-map.yaml instead of 'argocd-notifications-cm' ConfigMap argocd admin notifications trigger run on-sync-status-unknown ./sample-app.yaml \ --config-map ./my-config-map.yaml ``` ### Options ``` -h, --help help for run ``` ### Options inherited from parent commands ``` --argocd-repo-server string Argo CD repo server address (default "argocd-repo-server:8081") --argocd-repo-server-plaintext Use a plaintext client (non-TLS) to connect to repository server --argocd-repo-server-strict-tls Perform strict validation of TLS certificates when connecting to repo server --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--as-uid string UID to impersonate for the operation --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --config-map string argocd-notifications-cm.yaml file path --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to a kube config. Only required if out-of-cluster -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --proxy-url string If provided, this URL will be used to connect via proxy --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0") --secret string argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty' --server string The address and port of the Kubernetes API server --tls-server-name string If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used. --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ```
The trigger defines the condition when the notification should be sent. The definition includes the name, the condition and a reference to the notification templates. The condition is a predicate expression that returns true if the notification should be sent. The trigger condition evaluation is powered by [antonmedv/expr](https://github.com/antonmedv/expr). The condition language syntax is described at [language-definition.md](https://github.com/antonmedv/expr/blob/master/docs/language-definition.md).

The trigger is configured in the `argocd-notifications-cm` ConfigMap. For example, the following trigger sends a notification when the application sync status changes to `Unknown` using the `app-sync-status` template:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  trigger.on-sync-status-unknown: |
    - when: app.status.sync.status == 'Unknown'     # trigger condition
      send: [app-sync-status, github-commit-status] # template names
```

Each condition might use several templates. Typically, each template is responsible for generating a service-specific notification part. In the example above, the `app-sync-status` template "knows" how to create email and Slack notifications, and `github-commit-status` knows how to generate the payload for the GitHub webhook.

## Conditions Bundles

Triggers are typically managed by administrators and encapsulate information about when and which notification should be sent. The end users just need to subscribe to the trigger and specify the notification destination. In order to improve the user experience, triggers might include multiple conditions with a different set of templates for each condition. For example, the following trigger covers all stages of a sync operation and uses a different template for each case:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  trigger.sync-operation-change: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [github-commit-status]
    - when: app.status.operationState.phase in ['Running']
      send: [github-commit-status]
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed, github-commit-status]
```

## Avoid Sending Same Notification Too Often

In some cases, the trigger condition might be "flapping". The example below illustrates the problem. The trigger is supposed to generate a notification once when the Argo CD application is successfully synchronized and healthy. However, the application health status might intermittently switch to `Progressing` and then back to `Healthy`, so the trigger might unnecessarily generate multiple notifications. The `oncePer` field configures triggers to generate the notification only when the corresponding application field changes. The `on-deployed` trigger from the example below sends the notification only once per observed Git revision of the deployment repository.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # Optional 'oncePer' property ensures that the notification is sent only once per specified field value
  # E.g. the following is triggered once per sync revision
  trigger.on-deployed: |
    when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
    oncePer: app.status.sync.revision
    send: [app-sync-succeeded]
```

**Mono Repo Usage**

When one repo is used to sync multiple applications, the `oncePer: app.status.sync.revision` field will trigger a notification for each commit.
For mono repos, a better approach is to use the `oncePer: app.status.operationState.syncResult.revision` statement. This way a notification will be sent only for a particular Application's revision.

### oncePer

The `oncePer` field is supported as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    example.com/version: v0.1
```

```yaml
oncePer: app.metadata.annotations["example.com/version"]
```

## Default Triggers

You can use the `defaultTriggers` field instead of specifying individual triggers in the annotations.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # Holds list of triggers that are used by default if trigger is not specified explicitly in the subscription
  defaultTriggers: |
    - on-sync-status-unknown

  defaultTriggers.mattermost: |
    - on-sync-running
    - on-sync-succeeded
```

Specify the annotations as follows to use `defaultTriggers`. In this example, `slack` sends when `on-sync-status-unknown`, and `mattermost` sends when `on-sync-running` and `on-sync-succeeded`.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.slack: my-channel
    notifications.argoproj.io/subscribe.mattermost: my-mattermost-channel
```

## Functions

Triggers have access to the set of built-in functions. For example:

```yaml
when: time.Now().Sub(time.Parse(app.status.operationState.startedAt)).Minutes() >= 5
```

{!docs/operator-manual/notifications/functions.md!}
# Notifications Overview

Argo CD Notifications continuously monitors Argo CD applications and provides a flexible way to notify users about important changes in the application state. Using a flexible mechanism of [triggers](triggers.md) and [templates](templates.md) you can configure when the notification should be sent as well as the notification content. Argo CD Notifications includes a [catalog](catalog.md) of useful triggers and templates, so you can just use them instead of reinventing new ones.

## Getting Started

* Install Triggers and Templates from the catalog

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/notifications_catalog/install.yaml
```

* Add the Email username and password token to the `argocd-notifications-secret` secret

```bash
EMAIL_USER=<your-username>
PASSWORD=<your-password>

kubectl apply -n argocd -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
stringData:
  email-username: $EMAIL_USER
  email-password: $PASSWORD
type: Opaque
EOF
```

* Register the Email notification service

```bash
kubectl patch cm argocd-notifications-cm -n argocd --type merge -p '{"data": {"service.email.gmail": "{ username: $email-username, password: $email-password, host: smtp.gmail.com, port: 465, from: $email-username }" }}'
```

* Subscribe to notifications by adding the `notifications.argoproj.io/subscribe.on-sync-succeeded.slack` annotation to the Argo CD application or project:

```bash
kubectl patch app <my-app> -n argocd -p '{"metadata": {"annotations": {"notifications.argoproj.io/subscribe.on-sync-succeeded.slack":"<my-channel>"}}}' --type merge
```

Try syncing an application to get notified when the sync is completed.

## Namespace based configuration

A common installation method for Argo CD Notifications is to install it in a dedicated namespace to manage a whole cluster. In this case, the administrator is generally the only person who can configure notifications in that namespace. However, in some cases it is required to allow end-users to configure notifications for their Argo CD applications. For example, an end-user can configure notifications for their Argo CD application in the namespace they have access to and where their Argo CD application is running.

This feature is based on applications in any namespace. See the [applications in any namespace](../app-any-namespace.md) page for more information.

In order to enable this feature, the Argo CD administrator must reconfigure the argocd-notification-controller workloads to add the `--application-namespaces` and `--self-service-notification-enabled` parameters to the container's startup command. `--application-namespaces` controls the list of namespaces that Argo CD applications are in. `--self-service-notification-enabled` turns on this feature.

The startup parameters for both can also be conveniently set up and kept in sync by specifying the `application.namespaces` and `notificationscontroller.selfservice.enabled` keys in the `argocd-cmd-params-cm` ConfigMap instead of changing the manifests for the respective workloads. For example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  application.namespaces: app-team-one, app-team-two
  notificationscontroller.selfservice.enabled: "true"
```

To use this feature, you can deploy a ConfigMap named `argocd-notifications-cm` and possibly a secret `argocd-notifications-secret` in the namespace where the Argo CD application lives.
When it is configured this way, the controller will send notifications using both the controller-level configuration (the ConfigMap located in the same namespace as the controller) as well as the configuration located in the same namespace as the Argo CD application.

Example: an application team wants to receive notifications using PagerDutyV2, while the controller-level configuration only supports Slack. The following two resources are deployed in the namespace where the Argo CD application lives.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.pagerdutyv2: |
    serviceKeys:
      my-service: $pagerduty-key-my-service
  ...
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
type: Opaque
data:
  pagerduty-key-my-service: <pd-integration-key>
```

When an Argo CD application has the following subscription, the user receives application sync failure messages from PagerDuty.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-failed.pagerdutyv2: "<serviceID for Pagerduty>"
```

!!! note
    When the same notification service and trigger are defined in both the controller-level configuration and the application-level configuration, both notifications will be sent, each according to its own configuration.

    The [Defining and using secrets within notification templates](templates.md/#defining-and-using-secrets-within-notification-templates) feature is not available when the `--self-service-notification-enabled` flag is on.
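For reference, here is a minimal sketch of what the startup-parameter approach described above might look like when editing the notifications controller Deployment directly. The Deployment and container names and the existing args are assumptions about a default installation; prefer the `argocd-cmd-params-cm` approach shown above when possible.

```yaml
# Sketch only: names and existing args depend on your installation manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-notifications-controller
spec:
  template:
    spec:
      containers:
        - name: argocd-notifications-controller
          args:
            # ...existing args...
            - --application-namespaces=app-team-one,app-team-two
            - --self-service-notification-enabled
```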
### **time**

Time related functions.

<hr>
**`time.Now() Time`**

Executes the built-in Golang [time.Now](https://golang.org/pkg/time/#Now) function. Returns an instance of Golang [Time](https://golang.org/pkg/time/#Time).

<hr>
**`time.Parse(val string) Time`**

Parses the specified string using the RFC3339 layout. Returns an instance of Golang [Time](https://golang.org/pkg/time/#Time).

<hr>
Time related constants.

**Durations**

```
time.Nanosecond  = 1
time.Microsecond = 1000 * Nanosecond
time.Millisecond = 1000 * Microsecond
time.Second      = 1000 * Millisecond
time.Minute      = 60 * Second
time.Hour        = 60 * Minute
```

**Timestamps**

Used when formatting time instances as strings (e.g. `time.Now().Format(time.RFC3339)`).

```
time.Layout      = "01/02 03:04:05PM '06 -0700" // The reference time, in numerical order.
time.ANSIC       = "Mon Jan _2 15:04:05 2006"
time.UnixDate    = "Mon Jan _2 15:04:05 MST 2006"
time.RubyDate    = "Mon Jan 02 15:04:05 -0700 2006"
time.RFC822      = "02 Jan 06 15:04 MST"
time.RFC822Z     = "02 Jan 06 15:04 -0700" // RFC822 with numeric zone
time.RFC850      = "Monday, 02-Jan-06 15:04:05 MST"
time.RFC1123     = "Mon, 02 Jan 2006 15:04:05 MST"
time.RFC1123Z    = "Mon, 02 Jan 2006 15:04:05 -0700" // RFC1123 with numeric zone
time.RFC3339     = "2006-01-02T15:04:05Z07:00"
time.RFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00"
time.Kitchen     = "3:04PM"
// Handy time stamps.
time.Stamp      = "Jan _2 15:04:05"
time.StampMilli = "Jan _2 15:04:05.000"
time.StampMicro = "Jan _2 15:04:05.000000"
time.StampNano  = "Jan _2 15:04:05.000000000"
```

### **strings**

String related functions.

<hr>
**`strings.ReplaceAll() string`**

Executes the built-in Golang [strings.ReplaceAll](https://pkg.go.dev/strings#ReplaceAll) function.

<hr>
**`strings.ToUpper() string`**

Executes the built-in Golang [strings.ToUpper](https://pkg.go.dev/strings#ToUpper) function.

<hr>
**`strings.ToLower() string`**

Executes the built-in Golang [strings.ToLower](https://pkg.go.dev/strings#ToLower) function.

### **sync**

<hr>
**`sync.GetInfoItem(app map, name string) string`**

Returns the `info` item value by the given name stored in the Argo CD App sync operation.

### **repo**

Functions that provide additional information about the Application source repository.

<hr>
**`repo.RepoURLToHTTPS(url string) string`**

Transforms the given Git URL into HTTPS format.

<hr>
**`repo.FullNameByRepoURL(url string) string`**

Returns the repository URL full name `(<owner>/<repoName>)`. Currently supports only GitHub, GitLab and Bitbucket.

<hr>
**`repo.QueryEscape(s string) string`**

QueryEscape escapes the string so it can be safely placed inside a URL.

Example:

```
/projects//merge_requests
```

<hr>
**`repo.GetCommitMetadata(sha string) CommitMetadata`**

Returns commit metadata. The commit must belong to the application source repository. `CommitMetadata` fields:

* `Message string` - commit message
* `Author string` - commit author
* `Date time.Time` - commit creation date
* `Tags []string` - associated tags

<hr>
**`repo.GetAppDetails() AppDetail`**

Returns application details.
`AppDetail` fields:

* `Type string` - AppDetail type
* `Helm HelmAppSpec` - Helm details
    * Fields:
        * `Name string`
        * `ValueFiles []string`
        * `Parameters []*v1alpha1.HelmParameter`
        * `Values string`
        * `FileParameters []*v1alpha1.HelmFileParameter`
    * Methods:
        * `GetParameterValueByName(Name string)` - retrieve a value by name in the Parameters field
        * `GetFileParameterPathByName(Name string)` - retrieve a path by name in the FileParameters field
* `Kustomize *apiclient.KustomizeAppSpec` - Kustomize details
* `Directory *apiclient.DirectoryAppSpec` - Directory details
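To make the reference above concrete, here is a minimal sketch of a notification template calling a few of these functions. It assumes the Go-template `call` syntax used by notification templates; the template name and field paths are illustrative assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # Hypothetical template name; field paths are illustrative.
  template.app-commit-details: |
    message: |
      Repo: {{call .repo.RepoURLToHTTPS .app.spec.source.repoURL}}
      Author: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}
      Message: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Message}}
```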
# Triggers and Templates Catalog

## Getting Started

* Install Triggers and Templates from the catalog

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/notifications_catalog/install.yaml
```

## Triggers

| NAME                   | DESCRIPTION                                                    | TEMPLATE                                            |
|------------------------|----------------------------------------------------------------|-----------------------------------------------------|
| on-created             | Application is created.                                        | [app-created](#app-created)                         |
| on-deleted             | Application is deleted.                                        | [app-deleted](#app-deleted)                         |
| on-deployed            | Application is synced and healthy. Triggered once per commit.  | [app-deployed](#app-deployed)                       |
| on-health-degraded     | Application has degraded                                       | [app-health-degraded](#app-health-degraded)         |
| on-sync-failed         | Application syncing has failed                                 | [app-sync-failed](#app-sync-failed)                 |
| on-sync-running        | Application is being synced                                    | [app-sync-running](#app-sync-running)               |
| on-sync-status-unknown | Application status is 'Unknown'                                | [app-sync-status-unknown](#app-sync-status-unknown) |
| on-sync-succeeded      | Application syncing has succeeded                              | [app-sync-succeeded](#app-sync-succeeded)           |

## Templates

### app-created
**definition**:
```yaml
email:
  subject: Application has been created.
message: Application has been created.
teams:
  title: Application has been created.
```

### app-deleted
**definition**:
```yaml
email:
  subject: Application has been deleted.
message: Application has been deleted.
teams:
  title: Application has been deleted.
```

### app-deployed
**definition**:
```yaml
email:
  subject: New version of an application is up and running.
message: |
  :white_check_mark: Application is now running new version of deployments manifests.
slack:
  attachments: |
    [{ "title": "", "title_link":"/applications/", "color": "#18be52", "fields": [ { "title": "Sync Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true }, { "title": "Revision", "value": "", "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Sync Status", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , }, { "name": "Revision", "value": "" } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Operation Application", "targets":[{ "os":"default", "uri":"/applications/" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  themeColor: '#000080'
  title: New version of an application is up and running.
```

### app-health-degraded
**definition**:
```yaml
email:
  subject: Application has degraded.
message: |
  :exclamation: Application has degraded.
  Application details: /applications/.
slack:
  attachments: |
    [{ "title": "", "title_link": "/applications/", "color": "#f4c030", "fields": [ { "title": "Health Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Health Status", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Open Application", "targets":[{ "os":"default", "uri":"/applications/" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  themeColor: '#FF0000'
  title: Application has degraded.
```

### app-sync-failed
**definition**:
```yaml
email:
  subject: Failed to sync application .
message: |
  :exclamation: The sync operation of application has failed at with the following error:
  Sync operation details are available at: /applications/?operation=true .
slack:
  attachments: |
    [{ "title": "", "title_link":"/applications/", "color": "#E96D76", "fields": [ { "title": "Sync Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Sync Status", "value": "" }, { "name": "Failed at", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Open Operation", "targets":[{ "os":"default", "uri":"/applications/?operation=true" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  themeColor: '#FF0000'
  title: Failed to sync application .
```

### app-sync-running
**definition**:
```yaml
email:
  subject: Start syncing application .
message: |
  The sync operation of application has started at .
  Sync operation details are available at: /applications/?operation=true .
slack:
  attachments: |
    [{ "title": "", "title_link":"/applications/", "color": "#0DADEA", "fields": [ { "title": "Sync Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Sync Status", "value": "" }, { "name": "Started at", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Open Operation", "targets":[{ "os":"default", "uri":"/applications/?operation=true" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  title: Start syncing application .
```

### app-sync-status-unknown
**definition**:
```yaml
email:
  subject: Application sync status is 'Unknown'
message: |
  :exclamation: Application sync is 'Unknown'.
  Application details: /applications/.
slack:
  attachments: |
    [{ "title": "", "title_link":"/applications/", "color": "#E96D76", "fields": [ { "title": "Sync Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Sync Status", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Open Application", "targets":[{ "os":"default", "uri":"/applications/" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  title: Application sync status is 'Unknown'
```

### app-sync-succeeded
**definition**:
```yaml
email:
  subject: Application has been successfully synced.
message: |
  :white_check_mark: Application has been successfully synced at .
  Sync operation details are available at: /applications/?operation=true .
slack:
  attachments: |
    [{ "title": "", "title_link":"/applications/", "color": "#18be52", "fields": [ { "title": "Sync Status", "value": "", "short": true }, { "title": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , "short": true } , { "title": "", "value": "", "short": true } ] }]
  deliveryPolicy: Post
  groupingKey: ""
  notifyBroadcast: false
teams:
  facts: |
    [{ "name": "Sync Status", "value": "" }, { "name": "Synced at", "value": "" }, { "name": "Repository" "Repositories" , "value": ":arrow_heading_up: " "\n:arrow_heading_up: " , } , { "name": "", "value": "" } ]
  potentialAction: |
    [{ "@type":"OpenUri", "name":"Operation Details", "targets":[{ "os":"default", "uri":"/applications/?operation=true" }] }, { "@type":"OpenUri", "name":"Open Repository", "targets":[{ "os":"default", "uri": ":arrow_heading_up: " "\n:arrow_heading_up: " , }] }]
  themeColor: '#000080'
  title: Application has been successfully synced
```
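Once the catalog is installed, these triggers and templates can be referenced directly in subscription annotations, using the same annotation pattern shown on the overview and service pages. A minimal sketch (the channel name is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    # Uses the catalog's on-deployed trigger and its app-deployed template
    notifications.argoproj.io/subscribe.on-deployed.slack: my-channel
```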
## Failed to parse new settings

### error converting YAML to JSON

The YAML syntax is incorrect.

**incorrect:**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.slack: |
    token: $slack-token
    icon: :rocket:
```

**correct:**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.slack: |
    token: $slack-token
    icon: ":rocket:" # <- diff here
```

### service type 'xxxx' is not supported

Check the `argocd-notifications` controller version. For example, the Teams integration support started in `v1.1.0`.

## Failed to notify recipient

### notification service 'xxxx' is not supported

You have not defined `xxxx` in `argocd-notifications-cm` or parsing failed.

### GitHub.repoURL (\u003cno value\u003e) does not have a / using the configuration

Likely caused by an Application with [multiple sources](https://argo-cd.readthedocs.io/en/stable/user-guide/multiple_sources/):

```yaml
spec:
  sources: # <- multiple sources
    - repoURL: https://github.com/exampleOrg/first.git
      path: sources/example
    - repoURL: https://github.com/exampleOrg/second.git
      targetRevision: ""
```

The standard notification template only supports a single source (``). Use an index to specify the source in the array:

```yaml
template.example: |
  github:
    repoURLPath: ""
```

### Error message `POST https://api.github.com/repos/xxxx/yyyy/statuses/: 404 Not Found`

This case is similar to the previous one: you have multiple sources in the Application manifest. The default `revisionPath` template `` is for an Application with a single source. Multi-source applications report application statuses in an array:

```yaml
status:
  operationState:
    syncResult:
      revisions:
        - 38cfa22edf9148caabfecb288bfb47dc4352dfc6
        - 38cfa22edf9148caabfecb288bfb47dc4352dfc6
```

A quick fix for this is to use the `index` function to get the first revision:

```yaml
template.example: |
  github:
    revisionPath: ""
```

## config referenced xxx, but key does not exist in secret

- If you are using a custom secret, check that the secret is in the same namespace
- You have added the label `app.kubernetes.io/part-of: argocd` to the secret
- You have tried restarting the `argocd-notifications` controller

### Example:

Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-slackbot
  namespace: <the namespace where argocd is installed>
  labels:
    app.kubernetes.io/part-of: argocd
type: Opaque
data:
  slack-token: <base64encryptedtoken>
```

ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.slack: |
    token: $argocd-slackbot:slack-token
```
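Because the template placeholders were stripped from the multi-source examples above, here is a hedged sketch of what the `index`-based overrides might look like; the exact template expressions are illustrative assumptions, not verbatim from this page:

```yaml
# Sketch only: template expressions are illustrative assumptions.
template.example: |
  github:
    # first source's repository URL
    repoURLPath: "{{ (index .app.spec.sources 0).repoURL }}"
    # first revision reported by the sync result
    revisionPath: "{{ index .app.status.operationState.syncResult.revisions 0 }}"
```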
# Slack

If you want to send messages using an incoming webhook, you can use the [webhook](./webhook.md#send-slack) service instead.

## Parameters

The Slack notification service configuration includes the following settings:

| **Option**           | **Required** | **Type**       | **Description**                               | **Example**                                              |
|----------------------|--------------|----------------|-----------------------------------------------|----------------------------------------------------------|
| `apiURL`             | False        | `string`       | The server URL.                               | `https://example.com/api`                                |
| `channels`           | False        | `list[string]` |                                               | `["my-channel-1", "my-channel-2"]`                       |
| `icon`               | False        | `string`       | The app icon.                                 | `:robot_face:` or `https://example.com/image.png`        |
| `insecureSkipVerify` | False        | `bool`         |                                               | `true`                                                   |
| `signingSecret`      | False        | `string`       |                                               | `8f742231b10e8888abcd99yyyzzz85a5`                       |
| `token`              | **True**     | `string`       | The app's OAuth access token.                 | `xoxb-1234567890-1234567890123-5n38u5ed63fgzqlvuyxvxcx6` |
| `username`           | False        | `string`       | The app username.                             | `argocd`                                                 |
| `disableUnfurl`      | False        | `bool`         | Disable Slack unfurling of links in messages. | `true`                                                   |

## Configuration

1. Create a Slack Application using https://api.slack.com/apps?new_app=1
![1](https://user-images.githubusercontent.com/426437/73604308-4cb0c500-4543-11ea-9092-6ca6bae21cbb.png)
1. Once the application is created, navigate to `OAuth & Permissions`
![2](https://user-images.githubusercontent.com/426437/73604309-4d495b80-4543-11ea-9908-4dea403d3399.png)
1. Go to `Scopes` > `Bot Token Scopes` > `Add an OAuth Scope`. Add the `chat:write` scope. To use the optional username and icon overrides in the Slack notification service, also add the `chat:write.customize` scope.
![3](https://user-images.githubusercontent.com/426437/73604310-4d495b80-4543-11ea-8576-09cd91aea0e5.png)
1. `OAuth & Permission` > `OAuth Tokens for Your Workspace` > `Install to Workspace`
![4](https://user-images.githubusercontent.com/426437/73604311-4d495b80-4543-11ea-9155-9d216b20ec86.png)
1. Once installation is completed, copy the OAuth token.
![5](https://user-images.githubusercontent.com/426437/73604312-4d495b80-4543-11ea-832b-a9d9d5e4bc29.png)
1. Create a public or private channel, for this example `my_channel`
1. Invite your Slack bot to this channel, **otherwise the Slack bot won't be able to deliver notifications to this channel**
1. Store the OAuth access token in the `argocd-notifications-secret` secret

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret-name>
    stringData:
      slack-token: <Oauth-access-token>
    ```

1. Define service type slack in the data section of the `argocd-notifications-cm` ConfigMap:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-notifications-cm
    data:
      service.slack: |
        token: $slack-token
    ```

1. Add an annotation in the application yaml file to enable notifications for a specific Argo CD app. The following example uses the [on-sync-succeeded trigger](../catalog.md#triggers):

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      annotations:
        notifications.argoproj.io/subscribe.on-sync-succeeded.slack: my_channel
    ```
1. Annotation with more than one [trigger](../catalog.md#triggers), with multiple destinations and recipients:

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      annotations:
        notifications.argoproj.io/subscriptions: |
          - trigger: [on-scaling-replica-set, on-rollout-updated, on-rollout-step-completed]
            destinations:
              - service: slack
                recipients: [my-channel-1, my-channel-2]
              - service: email
                recipients: [recipient-1, recipient-2, recipient-3]
          - trigger: [on-rollout-aborted, on-analysis-run-failed, on-analysis-run-error]
            destinations:
              - service: slack
                recipients: [my-channel-21, my-channel-22]
    ```

## Templates

[Notification templates](../templates.md) can be customized to leverage the Slack message blocks and attachments [feature](https://api.slack.com/messaging/composing/layouts).

![](https://user-images.githubusercontent.com/426437/72776856-6dcef880-3bc8-11ea-8e3b-c72df16ee8e6.png)

The message blocks and attachments can be specified in the `blocks` and `attachments` string fields under the `slack` field:

```yaml
template.app-sync-status: |
  message: |
    Application sync is .
    Application details: /applications/.
  slack:
    attachments: |
      [{ "title": "", "title_link": "/applications/", "color": "#18be52", "fields": [{ "title": "Sync Status", "value": "", "short": true }, { "title": "Repository", "value": "", "short": true }] }]
```

If you want to specify an icon and username for each message, you can specify values for `username` and `icon` in the `slack` field. For the icon you can specify an emoji or an image URL, just like in the service definition. If you set `username` and `icon` in a template, the values set in the template will be used even if values are specified in the service definition.

```yaml
template.app-sync-status: |
  message: |
    Application sync is .
    Application details: /applications/.
  slack:
    username: "testbot"
    icon: https://example.com/image.png
    attachments: |
      [{ "title": "", "title_link": "/applications/", "color": "#18be52", "fields": [{ "title": "Sync Status", "value": "", "short": true }, { "title": "Repository", "value": "", "short": true }] }]
```

The messages can be aggregated into Slack threads by a grouping key, which can be specified in the `groupingKey` string field under the `slack` field. `groupingKey` is used across each template and works independently on each Slack channel. When multiple applications are updated at the same time or frequently, the messages in a Slack channel can be read more easily by aggregating them by git commit hash, application name, etc. Furthermore, the messages can be broadcast to the channel for a specific template with the `notifyBroadcast` field.

```yaml
template.app-sync-status: |
  message: |
    Application sync is .
    Application details: /applications/.
  slack:
    attachments: |
      [{ "title": "", "title_link": "/applications/", "color": "#18be52", "fields": [{ "title": "Sync Status", "value": "", "short": true }, { "title": "Repository", "value": "", "short": true }] }]
    # Aggregate the messages to the thread by git commit hash
    groupingKey: ""
    notifyBroadcast: false
template.app-sync-failed: |
  message: |
    Application sync is .
    Application details: /applications/.
  slack:
    attachments: |
      [{ "title": "", "title_link": "/applications/", "color": "#ff0000", "fields": [{ "title": "Sync Status", "value": "", "short": true }, { "title": "Repository", "value": "", "short": true }] }]
    # Aggregate the messages to the thread by git commit hash
    groupingKey: ""
    notifyBroadcast: true
```

The message is sent according to the `deliveryPolicy` string field under the `slack` field.
The available modes are `Post` (default), `PostAndUpdate`, and `Update`. The `PostAndUpdate` and `Update` settings require `groupingKey` to be set.
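A minimal sketch of a template using a non-default delivery policy; the field placement mirrors the examples above, and the message text and `groupingKey` value are illustrative since template placeholders are stripped on this page:

```yaml
template.app-sync-status: |
  message: |
    Application sync status changed.
  slack:
    # Update the threaded message instead of always posting a new one.
    # PostAndUpdate and Update require groupingKey to find the message to update.
    deliveryPolicy: PostAndUpdate
    groupingKey: "" # e.g. the git commit hash, as in the examples above
    notifyBroadcast: false
```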
# Webhook

The webhook notification service allows sending a generic HTTP request using a templatized request body and URL. Using a webhook you might trigger a Jenkins job or update a GitHub commit status.

## Parameters

The Webhook notification service configuration includes the following settings:

- `url` - the url to send the webhook to
- `headers` - optional, the headers to pass along with the webhook
- `basicAuth` - optional, the basic authentication to pass along with the webhook
- `insecureSkipVerify` - optional bool, true or false
- `retryWaitMin` - optional, the minimum wait time between retries. Default value: 1s.
- `retryWaitMax` - optional, the maximum wait time between retries. Default value: 5s.
- `retryMax` - optional, the maximum number of retries. Default value: 3.

## Retry Behavior

The webhook service will automatically retry the request if it fails due to network errors or if the server returns a 5xx status code. The number of retries and the wait time between retries can be configured using the `retryMax`, `retryWaitMin`, and `retryWaitMax` parameters. The wait time between retries is between `retryWaitMin` and `retryWaitMax`. If all retries fail, the `Send` method will return an error. A configuration sketch is shown at the end of this page.

## Configuration

Use the following steps to configure a webhook:

1. Register the webhook in the `argocd-notifications-cm` ConfigMap:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-notifications-cm
    data:
      service.webhook.<webhook-name>: |
        url: https://<hostname>/<optional-path>
        headers: #optional headers
          - name: <header-name>
            value: <header-value>
        basicAuth: #optional username password
          username: <username>
          password: <api-key>
        insecureSkipVerify: true #optional bool
    ```

2. Define a template that customizes the webhook request method, path and body:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-notifications-cm
    data:
      template.github-commit-status: |
        webhook:
          <webhook-name>:
            method: POST # one of: GET, POST, PUT, PATCH. Default value: GET
Default value: GET path: <optional-path-template> body: | <optional-body-template> trigger.<trigger-name>: | - when: app.status.operationState.phase in ['Succeeded'] send: [github-commit-status] ``` 3 Create subscription for webhook integration: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: annotations: notifications.argoproj.io/subscribe.<trigger-name>.<webhook-name>: "" ``` ## Examples ### Set GitHub commit status ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.github: | url: https://api.github.com headers: #optional headers - name: Authorization value: token $github-token ``` 2 Define template that customizes webhook request method, path and body: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.github: | url: https://api.github.com headers: #optional headers - name: Authorization value: token $github-token template.github-commit-status: | webhook: github: method: POST path: /repos//statuses/ body: | { "state": "pending" "state": "success" "state": "error" "state": "error", "description": "ArgoCD", "target_url": "/applications/", "context": "continuous-delivery/" } ``` ### Start Jenkins Job ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.jenkins: | url: http://<jenkins-host>/job/<job-name>/build?token=<job-secret> basicAuth: username: <username> password: <api-key> type: Opaque ``` ### Send form-data ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.form: | url: https://form.example.com headers: - name: Content-Type value: application/x-www-form-urlencoded template.form-data: | webhook: form: method: POST body: key1=value1&key2=value2 ``` ### Send Slack ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.webhook.slack_webhook: | url: https://hooks.slack.com/services/xxxxx headers: - name: Content-Type value: application/json template.send-slack: | webhook: slack_webhook: method: POST body: | { "attachments": [{ "title": "", "title_link": "/applications/", "color": "#18be52", "fields": [{ "title": "Sync Status", "value": "", "short": true }, { "title": "Repository", "value": "", "short": true }] }] } ```
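
### Configure retries

The retry parameters described in the Parameters section are set on the service definition itself. The following is a minimal sketch with a hypothetical `ci` webhook; the endpoint URL and the chosen values are illustrative placeholders, not recommendations.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.webhook.ci: |
    url: https://ci.example.com/hooks/argocd  # placeholder endpoint
    retryMax: 5        # default: 3
    retryWaitMin: 2s   # default: 1s
    retryWaitMax: 30s  # default: 5s
```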
# AWS SQS

## Parameters

This notification service is capable of sending simple messages to an AWS SQS queue.

* `queue` - name of the queue you are intending to send messages to. Can be overridden with target destination annotation.
* `region` - region of the SQS queue, can be provided via env variable AWS_DEFAULT_REGION
* `key` - optional, aws access key must be either referenced from a secret via variable or via env variable AWS_ACCESS_KEY_ID
* `secret` - optional, aws access secret must be either referenced from a secret via variable or via env variable AWS_SECRET_ACCESS_KEY
* `account` - optional, external accountId of the queue
* `endpointUrl` - optional, useful for development with localstack

## Example

### Using Secret for credential retrieval

Resource Annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    notifications.argoproj.io/subscribe.on-deployment-ready.awssqs: "overwrite-myqueue"
```

ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.awssqs: |
    region: "us-east-2"
    queue: "myqueue"
    account: "1234567"
    key: "$awsaccess_key"
    secret: "$awsaccess_secret"
  template.deployment-ready: |
    message: |
      Deployment is ready!
  trigger.on-deployment-ready: |
    - when: any(obj.status.conditions, {.type == 'Available' && .status == 'True'})
      send: [deployment-ready]
    - oncePer: obj.metadata.annotations["generation"]
```

Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
stringData:
  awsaccess_key: test
  awsaccess_secret: test
```

### Minimal configuration using AWS Env variables

Ensure the following environment variables are injected via OIDC or another method, and that the SQS queue is local to the account. You may then skip the Secret for sensitive data and omit the other parameters. (Parameters set via the ConfigMap take precedence.)

Variables:

```bash
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"
```

Resource Annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    notifications.argoproj.io/subscribe.on-deployment-ready.awssqs: ""
```

ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.awssqs: |
    queue: "myqueue"
  template.deployment-ready: |
    message: |
      Deployment is ready!
  trigger.on-deployment-ready: |
    - when: any(obj.status.conditions, {.type == 'Available' && .status == 'True'})
      send: [deployment-ready]
    - oncePer: obj.metadata.annotations["generation"]
```

## FIFO SQS Queues

FIFO queues require a [MessageGroupId](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html#SQS-SendMessage-request-MessageGroupId) to be sent along with every message; messages with a matching MessageGroupId are processed one by one, in order.

To send to a FIFO SQS queue, you must include a `messageGroupId` in the template, as in the example below:

```yaml
template.deployment-ready: |
  message: |
    Deployment is ready!
  messageGroupId: -deployment
```
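
## Local development with LocalStack

The documented `endpointUrl` parameter lets the service target an SQS-compatible endpoint other than AWS, which is handy for local testing. The snippet below is a minimal sketch; the LocalStack address `http://localhost:4566` and the queue name are assumptions for illustration only.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.awssqs: |
    endpointUrl: http://localhost:4566  # assumed local SQS-compatible endpoint
    region: "us-east-1"
    queue: "myqueue"
```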
# Alertmanager

## Parameters

The notification service is used to push events to [Alertmanager](https://github.com/prometheus/alertmanager), and the following settings need to be specified:

* `targets` - the alertmanager service address, array type
* `scheme` - optional, default is "http", e.g. http or https
* `apiPath` - optional, default is "/api/v2/alerts"
* `insecureSkipVerify` - optional, default is "false", whether to skip verification of the CA when the scheme is https
* `basicAuth` - optional, server auth
* `bearerToken` - optional, server auth
* `timeout` - optional, the timeout in seconds used when sending alerts, default is "3 seconds"

`basicAuth` or `bearerToken` is used for authentication; you can choose one. If both are set at the same time, `basicAuth` takes precedence over `bearerToken`.

## Example

### Prometheus Alertmanager config

```yaml
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'default'
receivers:
- name: 'default'
  webhook_configs:
  - send_resolved: false
    url: 'http://10.5.39.39:10080/api/alerts/webhook'
```

You should turn off "send_resolved" or you will receive unnecessary recovery notifications after "resolve_timeout".

### Send one alertmanager without auth

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:9093
```

### Send alertmanager cluster with custom api path

If your alertmanager has changed the default api, you can customize "apiPath".

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:443
    scheme: https
    apiPath: /api/events
    insecureSkipVerify: true
```

### Send high availability alertmanager with auth

Store the auth credentials in the `argocd-notifications-secret` Secret and reference them in the `argocd-notifications-cm` ConfigMap.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
stringData:
  alertmanager-username: <username>
  alertmanager-password: <password>
  alertmanager-bearer-token: <token>
```

- with basicAuth

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:19093
    - 10.5.39.39:29093
    - 10.5.39.39:39093
    scheme: https
    apiPath: /api/v2/alerts
    insecureSkipVerify: true
    basicAuth:
      username: $alertmanager-username
      password: $alertmanager-password
```

- with bearerToken

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:19093
    - 10.5.39.39:29093
    - 10.5.39.39:39093
    scheme: https
    apiPath: /api/v2/alerts
    insecureSkipVerify: true
    bearerToken: $alertmanager-bearer-token
```

## Templates

* `labels` - at least one label pair required, implement different notification strategies according to alertmanager routing
* `annotations` - optional, specifies a set of information labels, which can be used to store longer additional information, but only for display
* `generatorURL` - optional, default is '', backlink used to identify the entity that caused this alert in the client

The `labels`, `annotations` and `generatorURL` values can be templated.

```yaml
context: |
  argocdUrl: https://example.com/argocd

template.app-deployed: |
  message: Application has been healthy.
  alertmanager:
    labels:
      fault_priority: "P5"
      event_bucket: "deploy"
      event_status: "succeed"
      recipient: ""
    annotations:
      application: '<a href="/applications/"></a>'
      author: ""
      message: ""
```

You can do targeted push on [Alertmanager](https://github.com/prometheus/alertmanager) according to labels.

```yaml
template.app-deployed: |
  message: Application has been healthy.
  alertmanager:
    labels:
      alertname: app-deployed
      fault_priority: "P5"
      event_bucket: "deploy"
```

There is a special label `alertname`. If you don't set its value, it will be equal to the template name by default.
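
The `generatorURL` template field and the `timeout` service parameter described above can also be set explicitly. The following is a minimal sketch; the timeout value and the backlink URL are illustrative placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:9093
    timeout: 10  # seconds, default is 3
  template.app-sync-failed: |
    message: Application sync failed.
    alertmanager:
      labels:
        alertname: app-sync-failed
        event_bucket: "deploy"
      generatorURL: https://example.com/argocd  # illustrative backlink
```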
# Teams

## Parameters

The Teams notification service sends message notifications using a Teams bot and requires specifying the following settings:

* `recipientUrls` - the webhook url map, e.g. `channelName: https://example.com`

## Configuration

1. Open `Teams` and go to `Apps`
2. Find `Incoming Webhook` microsoft app and click on it
3. Press `Add to a team` -> select team and channel -> press `Set up a connector`
4. Enter webhook name and upload image (optional)
5. Press `Create`, then copy the webhook url, store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.teams: |
    recipientUrls:
      channelName: $channel-teams-url
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
stringData:
  channel-teams-url: https://example.com
```

6. Create subscription for your Teams integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.teams: channelName
```

## Templates

![](https://user-images.githubusercontent.com/18019529/114271500-9d2b8880-9a4c-11eb-85c1-f6935f0431d5.png)

[Notification templates](../templates.md) can be customized to leverage the Teams message sections, facts, themeColor, summary and potentialAction [feature](https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using).

```yaml
template.app-sync-succeeded: |
  teams:
    themeColor: "#000080"
    sections: |
      [{
        "facts": [
          {
            "name": "Sync Status",
            "value": ""
          },
          {
            "name": "Repository",
            "value": ""
          }
        ]
      }]
    potentialAction: |-
      [{
        "@type":"OpenUri",
        "name":"Operation Details",
        "targets":[{
          "os":"default",
          "uri":"/applications/?operation=true"
        }]
      }]
    title: Application has been successfully synced
    text: Application has been successfully synced at .
    summary: " sync succeeded"
```

### facts field

You can use the `facts` field instead of the `sections` field.

```yaml
template.app-sync-succeeded: |
  teams:
    facts: |
      [{
        "name": "Sync Status",
        "value": ""
      },
      {
        "name": "Repository",
        "value": ""
      }]
```

### theme color field

You can set the theme color as a hex string for the message.

![](https://user-images.githubusercontent.com/1164159/114864810-0718a900-9e24-11eb-8127-8d95da9544c1.png)

```yaml
template.app-sync-succeeded: |
  teams:
    themeColor: "#000080"
```

### summary field

You can set a summary of the message that will be shown in the Notification & Activity Feed.

![](https://user-images.githubusercontent.com/6957724/116587921-84c4d480-a94d-11eb-9da4-f365151a12e7.jpg)

![](https://user-images.githubusercontent.com/6957724/116588002-99a16800-a94d-11eb-807f-8626eb53b980.jpg)

```yaml
template.app-sync-succeeded: |
  teams:
    summary: "Sync Succeeded"
```
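
### trigger example

The `app-sync-succeeded` template shown above still needs a trigger that sends it, matching the `on-sync-succeeded` subscription used in the Configuration section. Below is a minimal sketch; the trigger condition reuses the convention from the webhook docs and is illustrative rather than prescriptive.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  trigger.on-sync-succeeded: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-sync-succeeded]
```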
.. <!-- This file was autogenerated via api-flaggen, do not edit manually--> Cilium Agent API ================ The following API flags are compatible with the ``cilium-agent`` flag ``enable-cilium-api-server-access``. ===================== ==================== Flag Name Description ===================== ==================== DeleteEndpoint Deletes a list of endpoints that have endpoints matching the provided properties DeleteEndpointID Deletes the endpoint specified by the ID. Deletion is imminent and atomic, if the deletion request is valid and the endpoint exists, deletion will occur even if errors are encountered in the process. If errors have been encountered, the code 202 will be returned, otherwise 200 on success. All resources associated with the endpoint will be freed and the workload represented by the endpoint will be disconnected.It will no longer be able to initiate or receive communications of any sort. DeleteFqdnCache Deletes matching DNS lookups from the cache, optionally restricted by DNS name. The removed IP data will no longer be used in generated policies. DeleteIPAMIP - DeletePolicy - DeletePrefilter - DeleteRecorderID - DeleteServiceID - GetBGPPeers Retrieves current operational state of BGP peers created by Cilium BGP virtual router. This includes session state, uptime, information per address family, etc. GetBGPRoutePolicies Retrieves route policies from BGP Control Plane. GetBGPRoutes Retrieves routes from BGP Control Plane RIB filtered by parameters you specify GetCgroupDumpMetadata - GetClusterNodes - GetConfig Returns the configuration of the Cilium daemon. GetDebuginfo - GetEndpoint Retrieves a list of endpoints that have metadata matching the provided parameters, or all endpoints if no parameters provided. GetEndpointID Returns endpoint information GetEndpointIDConfig Retrieves the configuration of the specified endpoint. GetEndpointIDHealthz - GetEndpointIDLabels - GetEndpointIDLog - GetFqdnCache Retrieves the list of DNS lookups intercepted from endpoints, optionally filtered by DNS name, CIDR IP range or source. GetFqdnCacheID Retrieves the list of DNS lookups intercepted from the specific endpoint, optionally filtered by endpoint id, DNS name, CIDR IP range or source. GetFqdnNames Retrieves the list of DNS-related fields (names to poll, selectors and their corresponding regexes). GetHealthz Returns health and status information of the Cilium daemon and related components such as the local container runtime, connected datastore, Kubernetes integration and Hubble. GetIP Retrieves a list of IPs with known associated information such as their identities, host addresses, Kubernetes pod names, etc. The list can optionally filtered by a CIDR IP range. GetIdentity Retrieves a list of identities that have metadata matching the provided parameters, or all identities if no parameters are provided. GetIdentityEndpoints - GetIdentityID - GetLRP - GetMap - GetMapName - GetMapNameEvents - GetNodeIds Retrieves a list of node IDs allocated by the agent and their associated node IP addresses. GetPolicy Returns the entire policy tree with all children. GetPolicySelectors - GetPrefilter - GetRecorder - GetRecorderID - GetRecorderMasks - GetService - GetServiceID - PatchConfig Updates the daemon configuration by applying the provided ConfigurationMap and regenerates & recompiles all required datapath components. 
PatchEndpointID Applies the endpoint change request to an existing endpoint PatchEndpointIDConfig Update the configuration of an existing endpoint and regenerates & recompiles the corresponding programs automatically. PatchEndpointIDLabels Sets labels associated with an endpoint. These can be user provided or derived from the orchestration system. PatchPrefilter - PostIPAM - PostIPAMIP - PutEndpointID Creates a new endpoint PutPolicy - PutRecorderID - PutServiceID - ===================== ==================== Cilium Agent Clusterwide Health API =================================== The following API flags are compatible with the ``cilium-agent`` flag ``enable-cilium-health-api-server-access``. ===================== ==================== Flag Name Description ===================== ==================== GetHealthz Returns health and status information of the local node including load and uptime, as well as the status of related components including the Cilium daemon. GetStatus Returns the connectivity status to all other cilium-health instances using interval-based probing. PutStatusProbe Runs a synchronous probe to all other cilium-health instances and returns the connectivity status. ===================== ==================== Cilium Operator API =================== The following API flags are compatible with the ``cilium-operator`` flag ``enable-cilium-operator-server-access``. ===================== ==================== Flag Name Description ===================== ==================== GetCluster Returns the list of remote clusters and their status. GetHealthz Returns the status of cilium operator instance. GetMetrics Returns the metrics exposed by the Cilium operator. ===================== ====================
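
As an illustration of how the flag names above are consumed, the sketch below leaves only a few read-only agent API flags enabled. It assumes that ``enable-cilium-api-server-access`` accepts a comma-separated list of the flag names from the table; adapt the mechanism (agent flag, ConfigMap entry, or Helm ``extraArgs``) to your deployment method.

.. code-block:: shell-session

    # Assumed invocation, for illustration only
    cilium-agent --enable-cilium-api-server-access="GetHealthz,GetConfig,GetEndpoint,GetEndpointID"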
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _argocd_issues: ******************************************** Troubleshooting Cilium deployed with Argo CD ******************************************** There have been reports from users hitting issues with Argo CD. This documentation page outlines some of the known issues and their solutions. Argo CD deletes CustomResourceDefinitions ========================================= When deploying Cilium with Argo CD, some users have reported that Cilium-generated custom resources disappear, causing one or more of the following issues: - ``ciliumid`` not found (:gh-issue:`17614`) - Argo CD Out-of-sync issues for hubble-generate-certs (:gh-issue:`14550`) - Out-of-sync issues for Cilium using Argo CD (:gh-issue:`18298`) Solution -------- To prevent these issues, declare resource exclusions in the Argo CD ``ConfigMap`` by following `these instructions <https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#resource-exclusioninclusion>`__. Here is an example snippet: .. code-block:: yaml resource.exclusions: | - apiGroups: - cilium.io kinds: - CiliumIdentity clusters: - "*" Also, it has been reported that the problem may affect all workloads you deploy with Argo CD in a cluster running Cilium, not just Cilium itself. If so, you will need the following exclusions in your Argo CD application definition to avoid getting “out of sync” when Hubble rotates its certificates. .. code-block:: yaml ignoreDifferences: - group: "" kind: ConfigMap name: hubble-ca-cert jsonPointers: - /data/ca.crt - group: "" kind: Secret name: hubble-relay-client-certs jsonPointers: - /data/ca.crt - /data/tls.crt - /data/tls.key - group: "" kind: Secret name: hubble-server-certs jsonPointers: - /data/ca.crt - /data/tls.crt - /data/tls.key .. note:: After applying the above configurations, for the settings to take effect, you will need to restart the Argo CD deployments. Helm template with serviceMonitor enabled fails =============================================== Some users have reported that when they install Cilium using Argo CD and run ``helm template`` with ``serviceMonitor`` enabled, it fails. It fails because Argo CD CLI doesn't pass the ``--api-versions`` flag to Helm upon deployment. Solution -------- This `pull request <https://github.com/argoproj/argo-cd/pull/8371>`__ fixed this issue in Argo CD's `v2.3.0 release <https://github.com/argoproj/argo-cd/releases/tag/v2.3.0>`__. Upgrade your Argo CD and check if ``helm template`` with ``serviceMonitor`` enabled still fails. .. note:: When using ``helm template``, it is highly recommended you set ``--kube-version`` and ``--api-versions`` with the values matching your target Kubernetes cluster. Helm charts such as Cilium's often conditionally enable certain Kubernetes features based on their availability (beta vs stable) on the target cluster. By specifying ``--api-versions=monitoring.coreos.com/v1`` you should be able to pass validation with ``helm template``. If you have an issue with Argo CD that's not outlined above, check this `list of Argo CD related issues on GitHub <https://github.com/cilium/cilium/issues?q=is%3Aissue+argocd>`__. If you can't find an issue that relates to yours, create one and/or seek help on `Cilium Slack`_. 
Application chart for Cilium deployed to Talos Linux fails with: field not declared in schema
==============================================================================================

When deploying Cilium to Talos Linux with Argo CD, some users have reported issues due to the Talos security configuration. Argo CD may fail to deploy the application with the message::

    Failed to compare desired state to live state: failed to calculate diff: error calculating structured merge diff: error building typed value from live resource: .spec.template.spec.securityContext.appArmorProfile: field not declared in schema

Solution
--------

Add the ``ServerSideApply=true`` option to the ``syncPolicy.syncOptions`` list of the Application.

.. code-block:: yaml

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    spec:
      syncPolicy:
        syncOptions:
          - ServerSideApply=true

Visit the `Argo CD documentation <https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#server-side-apply>`__ for further details.
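
For reference, the recommendations above can be combined in a single Application definition. The following is a minimal sketch assuming Cilium is installed from the Helm repository at ``https://helm.cilium.io/`` into ``kube-system``; the chart version and the subset of ``ignoreDifferences`` entries shown are placeholders to adapt to your setup.

.. code-block:: yaml

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cilium
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://helm.cilium.io/
        chart: cilium
        targetRevision: 1.16.0   # placeholder chart version
      destination:
        server: https://kubernetes.default.svc
        namespace: kube-system
      syncPolicy:
        syncOptions:
          - ServerSideApply=true
      ignoreDifferences:
        - group: ""
          kind: Secret
          name: hubble-server-certs
          jsonPointers:
            - /data/ca.crt
            - /data/tls.crt
            - /data/tls.key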
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _per-node-configuration: ********************** Per-node configuration ********************** The Cilium agent process (a.k.a. DaemonSet) supports setting configuration on a per-node basis. This allows overriding :ref:`cilium-config-configmap` for a node or set of nodes. It is managed by CiliumNodeConfig objects. This feature is useful for: - Gradually rolling out changes. - Selectively enabling features that require specific hardware: * :ref:`XDP acceleration` * :ref:`ipv6_big_tcp` CiliumNodeConfig objects ------------------------ A CiliumNodeConfig object allows for overriding ConfigMap / Agent arguments. It consists of a set of fields and a label selector. The label selector defines to which nodes the configuration applies. As is the standard with Kubernetes, an empty LabelSelector (e.g. ``{}``) selects all nodes. .. note:: Creating or modifying a CiliumNodeConfig will not cause changes to take effect until pods are deleted and re-created (or their node is restarted). Example: selective XDP enablement --------------------------------- To enable :ref:`XDP acceleration` only on nodes with necessary hardware, one would label the relevant nodes and override their configuration. .. code-block:: yaml apiVersion: cilium.io/v2 kind: CiliumNodeConfig metadata: namespace: kube-system name: enable-xdp spec: nodeSelector: matchLabels: io.cilium.xdp-offload: "true" defaults: bpf-lb-acceleration: native Example: KubeProxyReplacement Rollout ------------------------------------- To roll out :ref:`kube-proxy replacement <kubeproxy-free>` in a gradual manner, you may also wish to use the CiliumNodeConfig feature. This will label all migrated nodes with ``io.cilium.migration/kube-proxy-replacement: true`` .. warning:: You must have installed Cilium with the Helm values ``k8sServiceHost`` and ``k8sServicePort``. Otherwise, Cilium will not be able to reach the Kubernetes APIServer after kube-proxy is uninstalled. You can apply these two values to a running cluster via ``helm upgrade``. #. Patch kube-proxy to only run on unmigrated nodes. .. code-block:: shell-session kubectl -n kube-system patch daemonset kube-proxy --patch '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "io.cilium.migration/kube-proxy-replacement", "operator": "NotIn", "values": ["true"]}]}]}}}}}}}' #. Configure Cilium to use kube-proxy replacement on migrated nodes .. code-block:: shell-session cat <<EOF | kubectl apply --server-side -f - apiVersion: cilium.io/v2 kind: CiliumNodeConfig metadata: namespace: kube-system name: kube-proxy-replacement spec: nodeSelector: matchLabels: io.cilium.migration/kube-proxy-replacement: true defaults: kube-proxy-replacement: true kube-proxy-replacement-healthz-bind-address: "0.0.0.0:10256" EOF #. Select a node to migrate. Optionally, cordon and drain that node: .. code-block:: shell-session export NODE=kind-worker kubectl label node $NODE --overwrite 'io.cilium.migration/kube-proxy-replacement=true' kubectl cordon $NODE #. Delete Cilium DaemonSet to reload configuration: .. code-block:: shell-session kubectl -n kube-system delete pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE #. Ensure Cilium has the correct configuration: .. 
code-block:: shell-session kubectl -n kube-system exec $(kubectl -n kube-system get pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE -o name) -c cilium-agent -- \ cilium config get kube-proxy-replacement true #. Uncordon node .. code-block:: shell-session kubectl uncordon $NODE #. Cleanup: set default to kube-proxy-replacement: .. code-block:: shell-session cilium config set --restart=false kube-proxy-replacement true cilium config set --restart=false kube-proxy-replacement-healthz-bind-address "0.0.0.0:10256" kubectl -n kube-system delete ciliumnodeconfig kube-proxy-replacement #. Cleanup: delete kube-proxy daemonset, unlabel nodes .. code-block:: shell-session kubectl -n kube-system delete daemonset kube-proxy kubectl label node --all --overwrite 'io.cilium.migration/kube-proxy-replacement-'
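
Example: selective IPv6 BIG TCP enablement
------------------------------------------

:ref:`ipv6_big_tcp` is another feature listed above that usually warrants per-node enablement. The sketch below mirrors the XDP example; it assumes the corresponding agent option is named ``enable-ipv6-big-tcp`` and uses an illustrative ``io.cilium.big-tcp`` node label, so verify both against your Cilium version before applying.

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNodeConfig
    metadata:
      namespace: kube-system
      name: enable-ipv6-big-tcp
    spec:
      nodeSelector:
        matchLabels:
          io.cilium.big-tcp: "true"
      defaults:
        enable-ipv6-big-tcp: "true"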
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _k8s_install_quick: .. _k8s_quick_install: .. _k8s_install_standard: ************************* Cilium Quick Installation ************************* This guide will walk you through the quick default installation. It will automatically detect and use the best configuration possible for the Kubernetes distribution you are using. All state is stored using Kubernetes custom resource definitions (CRDs). This is the best installation method for most use cases. For large environments (> 500 nodes) or if you want to run specific datapath modes, refer to the :ref:`getting_started` guide. Should you encounter any issues during the installation, please refer to the :ref:`troubleshooting_k8s` section and/or seek help on `Cilium Slack`_. .. _create_cluster: Create the Cluster =================== If you don't have a Kubernetes Cluster yet, you can use the instructions below to create a Kubernetes cluster locally or using a managed Kubernetes service: .. tabs:: .. group-tab:: GKE The following commands create a Kubernetes cluster using `Google Kubernetes Engine <https://cloud.google.com/kubernetes-engine>`_. See `Installing Google Cloud SDK <https://cloud.google.com/sdk/install>`_ for instructions on how to install ``gcloud`` and prepare your account. .. code-block:: bash export NAME="$(whoami)-$RANDOM" # Create the node pool with the following taint to guarantee that # Pods are only scheduled/executed in the node when Cilium is ready. # Alternatively, see the note below. gcloud container clusters create "${NAME}" \ --node-taints node.cilium.io/agent-not-ready=true:NoExecute \ --zone us-west2-a gcloud container clusters get-credentials "${NAME}" --zone us-west2-a .. note:: Please make sure to read and understand the documentation page on :ref:`taint effects and unmanaged pods<taint_effects>`. .. group-tab:: AKS The following commands create a Kubernetes cluster using `Azure Kubernetes Service <https://docs.microsoft.com/en-us/azure/aks/>`_ with no CNI plugin pre-installed (BYOCNI). See `Azure Cloud CLI <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>`_ for instructions on how to install ``az`` and prepare your account, and the `Bring your own CNI documentation <https://docs.microsoft.com/en-us/azure/aks/use-byo-cni?tabs=azure-cli>`_ for more details about BYOCNI prerequisites / implications. .. code-block:: bash export NAME="$(whoami)-$RANDOM" export AZURE_RESOURCE_GROUP="${NAME}-group" az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2 # Create AKS cluster az aks create \ --resource-group "${AZURE_RESOURCE_GROUP}" \ --name "${NAME}" \ --network-plugin none \ --generate-ssh-keys # Get the credentials to access the cluster with kubectl az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}" .. group-tab:: EKS The following commands create a Kubernetes cluster with ``eksctl`` using `Amazon Elastic Kubernetes Service <https://aws.amazon.com/eks/>`_. See `eksctl Installation <https://github.com/weaveworks/eksctl>`_ for instructions on how to install ``eksctl`` and prepare your account. .. 
code-block:: none export NAME="$(whoami)-$RANDOM" cat <<EOF >eks-config.yaml apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: ${NAME} region: eu-west-1 managedNodeGroups: - name: ng-1 desiredCapacity: 2 privateNetworking: true # taint nodes so that application pods are # not scheduled/executed until Cilium is deployed. # Alternatively, see the note below. taints: - key: "node.cilium.io/agent-not-ready" value: "true" effect: "NoExecute" EOF eksctl create cluster -f ./eks-config.yaml .. note:: Please make sure to read and understand the documentation page on :ref:`taint effects and unmanaged pods<taint_effects>`. .. group-tab:: kind Install ``kind`` >= v0.7.0 per kind documentation: `Installation and Usage <https://kind.sigs.k8s.io/#installation-and-usage>`_ .. parsed-literal:: curl -LO \ |SCM_WEB|\/Documentation/installation/kind-config.yaml kind create cluster --config=kind-config.yaml .. note:: Cilium may fail to deploy due to too many open files in one or more of the agent pods. If you notice this error, you can increase the ``inotify`` resource limits on your host machine (see `Pod errors due to "too many open files" <https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files>`__). .. group-tab:: minikube Install minikube ≥ v1.28.0 as per minikube documentation: `Install Minikube <https://kubernetes.io/docs/tasks/tools/install-minikube/>`_. The following command will bring up a single node minikube cluster prepared for installing cilium. .. code-block:: shell-session minikube start --cni=cilium .. note:: - This may not install the latest version of cilium. - It might be necessary to add ``--host-dns-resolver=false`` if using the Virtualbox provider, otherwise DNS resolution may not work after Cilium installation. .. group-tab:: Rancher Desktop Install Rancher Desktop >= v1.1.0 as per Rancher Desktop documentation: `Install Rancher Desktop <https://docs.rancherdesktop.io/getting-started/installation>`_. Next you need to configure Rancher Desktop to disable the built-in CNI so you can install Cilium. .. include:: ../installation/rancher-desktop-configure.rst .. group-tab:: Alibaba ACK .. include:: ../beta.rst .. note:: The AlibabaCloud ENI integration with Cilium is subject to the following limitations: - It is currently only enabled for IPv4. - It only works with instances supporting ENI. Refer to `Instance families <https://www.alibabacloud.com/help/doc-detail/25378.htm>`_ for details. Setup a Kubernetes on AlibabaCloud. You can use any method you prefer. The quickest way is to create an ACK (Alibaba Cloud Container Service for Kubernetes) cluster and to replace the CNI plugin with Cilium. For more details on how to set up an ACK cluster please follow the `official documentation <https://www.alibabacloud.com/help/doc-detail/86745.htm>`_. .. _install_cilium_cli: Install the Cilium CLI ====================== .. include:: ../installation/cli-download.rst .. admonition:: Video :class: attention To learn more about the Cilium CLI, check out `eCHO episode 8: Exploring the Cilium CLI <https://www.youtube.com/watch?v=ndjmaM1i0WQ&t=1136s>`__. Install Cilium ============== You can install Cilium on any Kubernetes cluster. Pick one of the options below: .. tabs:: .. group-tab:: Generic These are the generic instructions on how to install Cilium into any Kubernetes cluster. The installer will attempt to automatically pick the best configuration options for you. 
Please see the other tabs for distribution/platform specific instructions which also list the ideal default configuration for particular platforms. .. include:: ../installation/requirements-generic.rst **Install Cilium** Install Cilium into the Kubernetes cluster pointed to by your current kubectl context: .. parsed-literal:: cilium install |CHART_VERSION| .. group-tab:: GKE .. include:: ../installation/requirements-gke.rst **Install Cilium:** Install Cilium into the GKE cluster: .. parsed-literal:: cilium install |CHART_VERSION| .. group-tab:: AKS .. include:: ../installation/requirements-aks.rst **Install Cilium:** Install Cilium into the AKS cluster: .. parsed-literal:: cilium install |CHART_VERSION| --set azure.resourceGroup="${AZURE_RESOURCE_GROUP}" .. group-tab:: EKS .. include:: ../installation/requirements-eks.rst **Install Cilium:** Install Cilium into the EKS cluster. .. parsed-literal:: cilium install |CHART_VERSION| cilium status --wait .. note:: If you have to uninstall Cilium and later install it again, that could cause connectivity issues due to ``aws-node`` DaemonSet flushing Linux routing tables. The issues can be fixed by restarting all pods, alternatively to avoid such issues you can delete ``aws-node`` DaemonSet prior to installing Cilium. .. group-tab:: OpenShift .. include:: ../installation/requirements-openshift.rst **Install Cilium:** Cilium is a `Certified OpenShift CNI Plugin <https://access.redhat.com/articles/5436171>`_ and is best installed when an OpenShift cluster is created using the OpenShift installer. Please refer to :ref:`k8s_install_openshift_okd` for more information. .. group-tab:: RKE .. include:: ../installation/requirements-rke.rst **Install Cilium:** Install Cilium into your newly created RKE cluster: .. parsed-literal:: cilium install |CHART_VERSION| .. group-tab:: k3s .. include:: ../installation/requirements-k3s.rst **Install Cilium:** Install Cilium into your newly created Kubernetes cluster: .. parsed-literal:: cilium install |CHART_VERSION| .. group-tab:: Alibaba ACK You can install Cilium using Helm on Alibaba ACK, refer to `k8s_install_helm` for details. If the installation fails for some reason, run ``cilium status`` to retrieve the overall status of the Cilium deployment and inspect the logs of whatever pods are failing to be deployed. .. tip:: You may be seeing ``cilium install`` print something like this: .. code-block:: shell-session ♻️ Restarted unmanaged pod kube-system/event-exporter-gke-564fb97f9-rv8hg ♻️ Restarted unmanaged pod kube-system/kube-dns-6465f78586-hlcrz ♻️ Restarted unmanaged pod kube-system/kube-dns-autoscaler-7f89fb6b79-fsmsg ♻️ Restarted unmanaged pod kube-system/l7-default-backend-7fd66b8b88-qqhh5 ♻️ Restarted unmanaged pod kube-system/metrics-server-v0.3.6-7b5cdbcbb8-kjl65 ♻️ Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt This indicates that your cluster was already running some pods before Cilium was deployed and the installer has automatically restarted them to ensure all pods get networking provided by Cilium. Validate the Installation ========================= .. include:: ../installation/cli-status.rst .. include:: ../installation/cli-connectivity-test.rst .. include:: ../installation/next-steps.rst
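The validation snippets included above boil down to running the Cilium CLI health and connectivity checks. As a rough sketch (exact flags and output depend on the Cilium CLI version in use):

.. code-block:: shell-session

    # Wait until the Cilium agent, operator and managed pods report ready.
    $ cilium status --wait

    # Optionally run the end-to-end connectivity test suite. This deploys
    # test workloads into the cluster and can take several minutes.
    $ cilium connectivity test

Both commands are provided by the Cilium CLI installed earlier in this guide; the connectivity test can be skipped on small or resource-constrained clusters.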
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _starwars_demo: ####################################### Getting Started with the Star Wars Demo ####################################### .. include:: /security/gsg_sw_demo.rst Check Current Access ==================== From the perspective of the *deathstar* service, only the ships with label ``org=empire`` are allowed to connect and request landing. Since we have no rules enforced, both *xwing* and *tiefighter* will be able to request landing. To test this, use the commands below. .. code-block:: shell-session $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing Ship landed $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing Ship landed Apply an L3/L4 Policy ===================== When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies will be applied to the right pods based on the labels irrespective of where or when it is running within the cluster. We'll start with the basic policy restricting deathstar landing requests to only the ships that have label (``org=empire``). This will not allow any ships that don't have the ``org=empire`` label to even connect with the *deathstar* service. This is a simple policy that filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often referred to as an L3/L4 network security policy. Note: Cilium performs stateful *connection tracking*, meaning that if policy allows the frontend to reach backend, it will automatically allow all required reply packets that are part of backend replying to frontend within the context of the same TCP/UDP connection. **L4 Policy with Cilium and Kubernetes** .. image:: images/cilium_http_l3_l4_gsg.png :scale: 30 % We can achieve that with the following CiliumNetworkPolicy: .. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml CiliumNetworkPolicies match on pod labels using an "endpointSelector" to identify the sources and destinations to which the policy applies. The above policy whitelists traffic sent from any pods with label (``org=empire``) to *deathstar* pods with label (``org=empire, class=deathstar``) on TCP port 80. To apply this L3/L4 policy, run: .. parsed-literal:: $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml ciliumnetworkpolicy.cilium.io/rule1 created Now if we run the landing requests again, only the *tiefighter* pods with the label ``org=empire`` will succeed. The *xwing* pods will be blocked! .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing Ship landed This works as expected. Now the same request run from an *xwing* pod will fail: .. code-block:: shell-session $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing This request will hang, so press Control-C to kill the curl request, or wait for it to time out. Inspecting the Policy ===================== If we run ``cilium-dbg endpoint list`` again we will see that the pods with the label ``org=empire`` and ``class=deathstar`` now have ingress policy enforcement enabled as per the policy above. .. 
code-block:: shell-session $ kubectl -n kube-system exec cilium-1c2cz -- cilium-dbg endpoint list ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 232 Enabled Disabled 16530 k8s:class=deathstar 10.0.0.147 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire 726 Disabled Disabled 1 reserved:host ready 883 Disabled Disabled 4 reserved:health 10.0.0.244 ready 1634 Disabled Disabled 51373 k8s:io.cilium.k8s.policy.cluster=default 10.0.0.118 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1673 Disabled Disabled 31028 k8s:class=tiefighter 10.0.0.112 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire 2811 Disabled Disabled 51373 k8s:io.cilium.k8s.policy.cluster=default 10.0.0.47 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2843 Enabled Disabled 16530 k8s:class=deathstar 10.0.0.89 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire 3184 Disabled Disabled 22654 k8s:class=xwing 10.0.0.30 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=alliance You can also inspect the policy details via ``kubectl`` .. code-block:: shell-session $ kubectl get cnp NAME AGE rule1 2m $ kubectl describe cnp rule1 Name: rule1 Namespace: default Labels: <none> Annotations: <none> API Version: cilium.io/v2 Description: L3-L4 policy to restrict deathstar access to empire ships only Kind: CiliumNetworkPolicy Metadata: Creation Timestamp: 2020-06-15T14:06:48Z Generation: 1 Managed Fields: API Version: cilium.io/v2 Fields Type: FieldsV1 fieldsV1: f:description: f:spec: .: f:endpointSelector: .: f:matchLabels: .: f:class: f:org: f:ingress: Manager: kubectl Operation: Update Time: 2020-06-15T14:06:48Z Resource Version: 2914 Self Link: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1 UID: eb3a688b-b3aa-495c-b20a-d4f79e7c088d Spec: Endpoint Selector: Match Labels: Class: deathstar Org: empire Ingress: From Endpoints: Match Labels: Org: empire To Ports: Ports: Port: 80 Protocol: TCP Events: <none> Apply and Test HTTP-aware L7 Policy =================================== In the simple scenario above, it was sufficient to either give *tiefighter* / *xwing* full access to *deathstar's* API or no access at all. But to provide the strongest security (i.e., enforce least-privilege isolation) between microservices, each service that calls *deathstar's* API should be limited to making only the set of HTTP requests it requires for legitimate operation. For example, consider that the *deathstar* service exposes some maintenance APIs which should not be called by random empire ships. To see this run: .. 
code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port Panic: deathstar exploded goroutine 1 [running]: main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa) /code/src/github.com/empire/deathstar/ temp/main.go:9 +0x64 main.main() /code/src/github.com/empire/deathstar/ temp/main.go:5 +0x85 While this is an illustrative example, unauthorized access such as above can have adverse security repercussions. **L7 Policy with Cilium and Kubernetes** .. image:: images/cilium_http_l3_l4_l7_gsg.png :scale: 30 % Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what URLs the *tiefighter* is allowed to reach. Here is an example policy file that extends our original policy by limiting *tiefighter* to making only a POST /v1/request-landing API call, but disallowing all other calls (including PUT /v1/exhaust-port). .. literalinclude:: ../../examples/minikube/sw_l3_l4_l7_policy.yaml Update the existing rule to apply L7-aware policy to protect *deathstar* using: .. parsed-literal:: $ kubectl apply -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_l7_policy.yaml ciliumnetworkpolicy.cilium.io/rule1 configured We can now re-run the same test as above, but we will see a different outcome: .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing Ship landed and .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port Access denied As this rule builds on the identity-aware rule, traffic from pods without the label ``org=empire`` will continue to be dropped causing the connection to time out: .. code-block:: shell-session $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing As you can see, with Cilium L7 security policies, we are able to permit *tiefighter* to access only the required API resources on *deathstar*, thereby implementing a "least privilege" security approach for communication between microservices. Note that ``path`` matches the exact url, if for example you want to allow anything under /v1/, you need to use a regular expression: .. code-block:: yaml path: "/v1/.*" You can observe the L7 policy via ``kubectl``: .. code-block:: shell-session $ kubectl describe ciliumnetworkpolicies Name: rule1 Namespace: default Labels: <none> Annotations: API Version: cilium.io/v2 Description: L7 policy to restrict access to specific HTTP call Kind: CiliumNetworkPolicy Metadata: Creation Timestamp: 2020-06-15T14:06:48Z Generation: 2 Managed Fields: API Version: cilium.io/v2 Fields Type: FieldsV1 fieldsV1: f:description: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:spec: .: f:endpointSelector: .: f:matchLabels: .: f:class: f:org: f:ingress: Manager: kubectl Operation: Update Time: 2020-06-15T14:10:46Z Resource Version: 3445 Self Link: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1 UID: eb3a688b-b3aa-495c-b20a-d4f79e7c088d Spec: Endpoint Selector: Match Labels: Class: deathstar Org: empire Ingress: From Endpoints: Match Labels: Org: empire To Ports: Ports: Port: 80 Protocol: TCP Rules: Http: Method: POST Path: /v1/request-landing Events: <none> and ``cilium`` CLI: .. 
code-block:: shell-session $ kubectl -n kube-system exec cilium-qh5l2 -- cilium-dbg policy get [ { "endpointSelector": { "matchLabels": { "any:class": "deathstar", "any:org": "empire", "k8s:io.kubernetes.pod.namespace": "default" } }, "ingress": [ { "fromEndpoints": [ { "matchLabels": { "any:org": "empire", "k8s:io.kubernetes.pod.namespace": "default" } } ], "toPorts": [ { "ports": [ { "port": "80", "protocol": "TCP" } ], "rules": { "http": [ { "path": "/v1/request-landing", "method": "POST" } ] } } ] } ], "labels": [ { "key": "io.cilium.k8s.policy.derived-from", "value": "CiliumNetworkPolicy", "source": "k8s" }, { "key": "io.cilium.k8s.policy.name", "value": "rule1", "source": "k8s" }, { "key": "io.cilium.k8s.policy.namespace", "value": "default", "source": "k8s" }, { "key": "io.cilium.k8s.policy.uid", "value": "eb3a688b-b3aa-495c-b20a-d4f79e7c088d", "source": "k8s" } ] } ] Revision: 11 It is also possible to monitor the HTTP requests live by using ``cilium-dbg monitor``: .. code-block:: shell-session $ kubectl exec -it -n kube-system cilium-kzgdx -- cilium-dbg monitor -v --type l7 <- Response http to 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) from 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Forwarded POST http://deathstar.default.svc.cluster.local/v1/request-landing => 200 <- Request http from 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) to 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/request-landing => 403 The above output demonstrates a successful response to a POST request followed by a PUT request that is denied by the L7 policy. We hope you enjoyed the tutorial. Feel free to play more with the setup, read the rest of the documentation, and reach out to us on the `Cilium Slack`_ with any questions! Clean-up ======== .. parsed-literal:: $ kubectl delete -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml $ kubectl delete cnp rule1 .. include:: ../installation/next-steps.rst
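As a concrete illustration of the regular-expression ``path`` note earlier in this section, the L7 rule could be applied inline before cleaning up. This is only a sketch modelled on ``sw_l3_l4_l7_policy.yaml``; the wildcard path is an assumption for illustration and is not part of the shipped example files:

.. code-block:: shell-session

    $ kubectl apply -f - <<EOF
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
    spec:
      description: "Illustrative L7 policy allowing any POST under /v1/"
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "POST"
              path: "/v1/.*"
    EOF

With this variant, ``POST /v1/request-landing`` and any other POST request under ``/v1/`` from ``org=empire`` pods is allowed, while other methods and unlabeled sources remain blocked.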
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io *********** Terminology *********** .. _label: .. _labels: Labels ====== Labels are a generic, flexible and highly scalable way of addressing a large set of resources as they allow for arbitrary grouping and creation of sets. Whenever something needs to be described, addressed or selected, it is done based on labels: - `Endpoints` are assigned labels as derived from the container runtime, orchestration system, or other sources. - `Network policies` select pairs of `endpoints` which are allowed to communicate based on labels. The policies themselves are identified by labels as well. What is a Label? ---------------- A label is a pair of strings consisting of a ``key`` and ``value``. A label can be formatted as a single string with the format ``key=value``. The key portion is mandatory and must be unique. This is typically achieved by using the reverse domain name notion, e.g. ``io.cilium.mykey=myvalue``. The value portion is optional and can be omitted, e.g. ``io.cilium.mykey``. Key names should typically consist of the character set ``[a-z0-9-.]``. When using labels to select resources, both the key and the value must match, e.g. when a policy should be applied to all endpoints with the label ``my.corp.foo`` then the label ``my.corp.foo=bar`` will not match the selector. Label Source ------------ A label can be derived from various sources. For example, an `endpoint`_ will derive the labels associated to the container by the local container runtime as well as the labels associated with the pod as provided by Kubernetes. As these two label namespaces are not aware of each other, this may result in conflicting label keys. To resolve this potential conflict, Cilium prefixes all label keys with ``source:`` to indicate the source of the label when importing labels, e.g. ``k8s:role=frontend``, ``container:user=joe``, ``k8s:role=backend``. This means that when you run a Docker container using ``docker run [...] -l foo=bar``, the label ``container:foo=bar`` will appear on the Cilium endpoint representing the container. Similarly, a Kubernetes pod started with the label ``foo: bar`` will be represented with a Cilium endpoint associated with the label ``k8s:foo=bar``. A unique name is allocated for each potential source. The following label sources are currently supported: - ``container:`` for labels derived from the local container runtime - ``k8s:`` for labels derived from Kubernetes - ``reserved:`` for special reserved labels, see :ref:`reserved_labels`. - ``unspec:`` for labels with unspecified source When using labels to identify other resources, the source can be included to limit matching of labels to a particular type. If no source is provided, the label source defaults to ``any:`` which will match all labels regardless of their source. If a source is provided, the source of the selecting and matching labels need to match. .. _endpoint: .. _endpoints: Endpoint ========= Cilium makes application containers available on the network by assigning them IP addresses. Multiple application containers can share the same IP address; a typical example for this model is a Kubernetes :term:`Pod`. All application containers which share a common address are grouped together in what Cilium refers to as an endpoint. Allocating individual IP addresses enables the use of the entire Layer 4 port range by each endpoint. 
This essentially allows multiple application containers running on the same cluster node to all bind to well known ports such as ``80`` without causing any conflicts. The default behavior of Cilium is to assign both an IPv6 and IPv4 address to every endpoint. However, this behavior can be configured to only allocate an IPv6 address with the ``--enable-ipv4=false`` option. If both an IPv6 and IPv4 address are assigned, either address can be used to reach the endpoint. The same behavior will apply with regard to policy rules, load-balancing, etc. See :ref:`address_management` for more details. Identification -------------- For identification purposes, Cilium assigns an internal endpoint id to all endpoints on a cluster node. The endpoint id is unique within the context of an individual cluster node. .. _endpoint id: Endpoint Metadata ----------------- An endpoint automatically derives metadata from the application containers associated with the endpoint. The metadata can then be used to identify the endpoint for security/policy, load-balancing and routing purposes. The source of the metadata will depend on the orchestration system and container runtime in use. The following metadata retrieval mechanisms are currently supported: +---------------------+---------------------------------------------------+ | System | Description | +=====================+===================================================+ | Kubernetes | Pod labels (via k8s API) | +---------------------+---------------------------------------------------+ | containerd (Docker) | Container labels (via Docker API) | +---------------------+---------------------------------------------------+ Metadata is attached to endpoints in the form of `labels`. The following example launches a container with the label ``app=benchmark`` which is then associated with the endpoint. The label is prefixed with ``container:`` to indicate that the label was derived from the container runtime. .. code-block:: shell-session $ docker run --net cilium -d -l app=benchmark tgraf/netperf aaff7190f47d071325e7af06577f672beff64ccc91d2b53c42262635c063cf1c $ cilium-dbg endpoint list ENDPOINT POLICY IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT 62006 Disabled 257 container:app=benchmark f00d::a00:20f:0:f236 10.15.116.202 ready An endpoint can have metadata associated from multiple sources. A typical example is a Kubernetes cluster which uses containerd as the container runtime. Endpoints will derive Kubernetes pod labels (prefixed with the ``k8s:`` source prefix) and containerd labels (prefixed with ``container:`` source prefix). .. _identity: Identity ======== All `endpoints` are assigned an identity. The identity is what is used to enforce basic connectivity between endpoints. In traditional networking terminology, this would be equivalent to Layer 3 enforcement. An identity is identified by `labels` and is given a cluster wide unique identifier. The endpoint is assigned the identity which matches the endpoint's `security relevant labels`, i.e. all endpoints which share the same set of `security relevant labels` will share the same identity. This concept allows to scale policy enforcement to a massive number of endpoints as many individual endpoints will typically share the same set of security `labels` as applications are scaled. What is an Identity? -------------------- The identity of an endpoint is derived based on the `labels` associated with the pod or container which are derived to the `endpoint`_. 
When a pod or container is started, Cilium will create an `endpoint`_ based on the event received by the container runtime to represent the pod or container on the network. As a next step, Cilium will resolve the identity of the `endpoint`_ created. Whenever the `labels` of the pod or container change, the identity is reconfirmed and automatically modified as required. .. _security relevant labels: Security Relevant Labels ------------------------ Not all `labels` associated with a container or pod are meaningful when deriving the `identity`. Labels may be used to store metadata such as the timestamp when a container was launched. Cilium requires to know which labels are meaningful and are subject to being considered when deriving the identity. For this purpose, the user is required to specify a list of string prefixes of meaningful labels. The standard behavior is to include all labels which start with the prefix ``id.``, e.g. ``id.service1``, ``id.service2``, ``id.groupA.service44``. The list of meaningful label prefixes can be specified when starting the agent. .. _reserved_labels: Special Identities ------------------ All endpoints which are managed by Cilium will be assigned an identity. In order to allow communication to network endpoints which are not managed by Cilium, special identities exist to represent those. Special reserved identities are prefixed with the string ``reserved:``. +-----------------------------+------------+---------------------------------------------------+ | Identity | Numeric ID | Description | +=============================+============+===================================================+ | ``reserved:unknown`` | 0 | The identity could not be derived. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:host`` | 1 | The local host. Any traffic that originates from | | | | or is designated to one of the local host IPs. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:world`` | 2 | Any network endpoint outside of the cluster | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:unmanaged`` | 3 | An endpoint that is not managed by Cilium, e.g. | | | | a Kubernetes pod that was launched before Cilium | | | | was installed. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:health`` | 4 | This is health checking traffic generated by | | | | Cilium agents. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:init`` | 5 | An endpoint for which the identity has not yet | | | | been resolved is assigned the init identity. | | | | This represents the phase of an endpoint in which | | | | some of the metadata required to derive the | | | | security identity is still missing. This is | | | | typically the case in the bootstrapping phase. | | | | | | | | The init identity is only allocated if the labels | | | | of the endpoint are not known at creation time. | | | | This can be the case for the Docker plugin. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:remote-node`` | 6 | The collection of all remote cluster hosts. | | | | Any traffic that originates from or is designated | | | | to one of the IPs of any host in any connected | | | | cluster other than the local node. 
| +-----------------------------+------------+---------------------------------------------------+ | ``reserved:kube-apiserver`` | 7 | Remote node(s) which have backend(s) serving the | | | | kube-apiserver running. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:ingress`` | 8 | Given to the IPs used as the source address for | | | | connections from Ingress proxies. | +-----------------------------+------------+---------------------------------------------------+ Well-known Identities --------------------- The following is a list of well-known identities which Cilium is aware of automatically and will hand out a security identity without requiring to contact any external dependencies such as the kvstore. The purpose of this is to allow bootstrapping Cilium and enable network connectivity with policy enforcement in the cluster for essential services without depending on any dependencies. ======================== =================== ==================== ================= =========== ============================================================================ Deployment Namespace ServiceAccount Cluster Name Numeric ID Labels ======================== =================== ==================== ================= =========== ============================================================================ kube-dns kube-system kube-dns <cilium-cluster> 102 ``k8s-app=kube-dns`` kube-dns (EKS) kube-system kube-dns <cilium-cluster> 103 ``k8s-app=kube-dns``, ``eks.amazonaws.com/component=kube-dns`` core-dns kube-system coredns <cilium-cluster> 104 ``k8s-app=kube-dns`` core-dns (EKS) kube-system coredns <cilium-cluster> 106 ``k8s-app=kube-dns``, ``eks.amazonaws.com/component=coredns`` cilium-operator <cilium-namespace> cilium-operator <cilium-cluster> 105 ``name=cilium-operator``, ``io.cilium/app=operator`` ======================== =================== ==================== ================= =========== ============================================================================ *Note*: if ``cilium-cluster`` is not defined with the ``cluster-name`` option, the default value will be set to "``default``". Identity Management in the Cluster ---------------------------------- Identities are valid in the entire cluster which means that if several pods or containers are started on several cluster nodes, all of them will resolve and share a single identity if they share the identity relevant labels. This requires coordination between cluster nodes. .. image:: ../images/identity_store.png :align: center The operation to resolve an endpoint identity is performed with the help of the distributed key-value store which allows to perform atomic operations in the form *generate a new unique identifier if the following value has not been seen before*. This allows each cluster node to create the identity relevant subset of labels and then query the key-value store to derive the identity. Depending on whether the set of labels has been queried before, either a new identity will be created, or the identity of the initial query will be returned. .. _node: Node ==== Cilium refers to a node as an individual member of a cluster. Each node must be running the ``cilium-agent`` and will operate in a mostly autonomous manner. Synchronization of state between Cilium agents running on different nodes is kept to a minimum for simplicity and scale. It occurs exclusively via the Key-Value store or with packet metadata. 
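To see which nodes the local agent knows about, node state can be inspected both through the CiliumNode custom resources and from within an agent pod. A minimal sketch (command names assume a recent release that ships the ``cilium-dbg`` binary, as used elsewhere in this documentation):

.. code-block:: shell-session

    # CiliumNode objects are created by the agents and mirror per-node state.
    $ kubectl get ciliumnodes

    # List the nodes known to a particular agent.
    $ kubectl -n kube-system exec <cilium-pod> -- cilium-dbg node list

Replace ``<cilium-pod>`` with the name of one of the ``cilium`` DaemonSet pods.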
Node Address
------------

Cilium will automatically detect the node's IPv4 and IPv6 address. The
detected node address is printed out when the ``cilium-agent`` starts:

::

    Local node-name: worker0
    Node-IPv6: f00d::ac10:14:0:1
    External-Node IPv4: 172.16.0.20
    Internal-Node IPv4: 10.200.28.238
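To relate the labels, identities and endpoints described in this section to a running cluster, they can be inspected directly. A hedged sketch (output format differs between versions, and the custom resources are only present with the default CRD-backed identity allocation):

.. code-block:: shell-session

    # Security identities and the labels they were derived from.
    $ kubectl -n kube-system exec <cilium-pod> -- cilium-dbg identity list

    # The same information exposed as Kubernetes custom resources.
    $ kubectl get ciliumidentities
    $ kubectl get ciliumendpoints --all-namespaces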
has been queried before either a new identity will be created or the identity of the initial query will be returned node Node Cilium refers to a node as an individual member of a cluster Each node must be running the cilium agent and will operate in a mostly autonomous manner Synchronization of state between Cilium agents running on different nodes is kept to a minimum for simplicity and scale It occurs exclusively via the Key Value store or with packet metadata Node Address Cilium will automatically detect the node s IPv4 and IPv6 address The detected node address is printed out when the cilium agent starts Local node name worker0 Node IPv6 f00d ac10 14 0 1 External Node IPv4 172 16 0 20 Internal Node IPv4 10 200 28 238
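To make the mapping between security-relevant labels and identities concrete,
the following is a minimal sketch of how the assigned identities can be
inspected from a running agent. It assumes ``kubectl`` access to the cluster
and uses ``<cilium-pod>`` as a placeholder for the name of any Cilium agent
pod; the in-pod CLI is called ``cilium-dbg`` in recent releases (``cilium`` in
older ones), matching the example above.

.. code-block:: shell-session

    $ # Pick one Cilium agent pod (placeholder name, replace with a real pod).
    $ kubectl -n kube-system get pods -l k8s-app=cilium

    $ # List the numeric identities the agent knows about, including the
    $ # reserved identities described in the table above.
    $ kubectl -n kube-system exec <cilium-pod> -- cilium-dbg identity list

    $ # Show the endpoints on that node together with their identity and labels.
    $ kubectl -n kube-system exec <cilium-pod> -- cilium-dbg endpoint list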
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

To install Cilium on `ACK (Alibaba Cloud Container Service for Kubernetes)
<https://www.alibabacloud.com/help/doc-detail/86745.htm>`_, perform the
following steps:

**Disable ACK CNI (ACK Only):**

If you are running an ACK cluster, you should delete the ACK CNI. Cilium will
manage ENIs instead of the ACK CNI, so any running DaemonSet from the list
below has to be deleted to prevent conflicts.

- ``kube-flannel-ds``
- ``terway``
- ``terway-eni``
- ``terway-eniip``

.. note::

    If you are using ACK with Flannel (DaemonSet ``kube-flannel-ds``), the
    Cloud Controller Manager (CCM) will create a route (Pod CIDR) in VPC. If
    your cluster is a Managed Kubernetes you cannot disable this behavior.
    Please consider creating a new cluster.

.. code-block:: shell-session

    kubectl -n kube-system delete daemonset <terway>

The next step is to remove the CRDs below, created by the ``terway*`` CNI:

.. code-block:: shell-session

    kubectl delete crd \
        ciliumclusterwidenetworkpolicies.cilium.io \
        ciliumendpoints.cilium.io \
        ciliumidentities.cilium.io \
        ciliumnetworkpolicies.cilium.io \
        ciliumnodes.cilium.io \
        bgpconfigurations.crd.projectcalico.org \
        clusterinformations.crd.projectcalico.org \
        felixconfigurations.crd.projectcalico.org \
        globalnetworkpolicies.crd.projectcalico.org \
        globalnetworksets.crd.projectcalico.org \
        hostendpoints.crd.projectcalico.org \
        ippools.crd.projectcalico.org \
        networkpolicies.crd.projectcalico.org

**Create AlibabaCloud Secrets:**

Before installing Cilium, a new Kubernetes Secret with the AlibabaCloud Tokens
needs to be added to your Kubernetes cluster. This Secret will allow Cilium to
gather information from the AlibabaCloud API which is needed to implement
ToGroups policies.

**AlibabaCloud Access Keys:**

To create a new access token the `following guide can be used
<https://www.alibabacloud.com/help/doc-detail/93691.htm>`_. These keys need to
have certain `RAM Permissions <https://ram.console.aliyun.com/overview>`_:

.. code-block:: json

    {
      "Version": "1",
      "Statement": [{
          "Action": [
            "ecs:CreateNetworkInterface",
            "ecs:DescribeNetworkInterfaces",
            "ecs:AttachNetworkInterface",
            "ecs:DetachNetworkInterface",
            "ecs:DeleteNetworkInterface",
            "ecs:DescribeInstanceAttribute",
            "ecs:DescribeInstanceTypes",
            "ecs:AssignPrivateIpAddresses",
            "ecs:UnassignPrivateIpAddresses",
            "ecs:DescribeInstances",
            "ecs:DescribeSecurityGroups",
            "ecs:ListTagResources"
          ],
          "Resource": [
            "*"
          ],
          "Effect": "Allow"
        },
        {
          "Action": [
            "vpc:DescribeVSwitches",
            "vpc:ListTagResources",
            "vpc:DescribeVpcs"
          ],
          "Resource": [
            "*"
          ],
          "Effect": "Allow"
        }
      ]
    }

As soon as you have the access tokens, the following secret needs to be added,
with each empty string replaced by the associated value as a base64-encoded
string:

.. code-block:: yaml

    apiVersion: v1
    kind: Secret
    metadata:
      name: cilium-alibabacloud
      namespace: kube-system
    type: Opaque
    data:
      ALIBABA_CLOUD_ACCESS_KEY_ID: ""
      ALIBABA_CLOUD_ACCESS_KEY_SECRET: ""

The base64 command line utility can be used to generate each value, for
example:

.. code-block:: shell-session

    $ echo -n "access_key" | base64
    YWNjZXNzX2tleQ==

This secret stores the AlibabaCloud credentials, which will be used to connect
to the AlibabaCloud API.

.. code-block:: shell-session

    $ kubectl create -f cilium-secret.yaml

**Install Cilium:**

Install Cilium release via Helm:

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set alibabacloud.enabled=true \\
      --set ipam.mode=alibabacloud \\
      --set enableIPv4Masquerade=false \\
      --set routingMode=native

.. note::

    You must ensure that the security groups associated with the ENIs
    (``eth1``, ``eth2``, ...) allow for egress traffic to go outside of the
    VPC. By default, the security groups for pod ENIs are derived from the
    primary ENI (``eth0``).
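As a quick sanity check after the Helm install, the following sketch verifies
that the credentials Secret referenced above exists and that the agents roll
out cleanly. These are generic ``kubectl`` calls and assume the default
resource names used in this guide.

.. code-block:: shell-session

    $ # Confirm the AlibabaCloud credentials Secret created earlier is present.
    $ kubectl -n kube-system get secret cilium-alibabacloud

    $ # Wait for the Cilium agent DaemonSet and the operator to become ready.
    $ kubectl -n kube-system rollout status daemonset/cilium
    $ kubectl -n kube-system rollout status deployment/cilium-operator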
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _chaining_azure:

******************
Azure CNI (Legacy)
******************

.. note::

    For most users, the best way to run Cilium on AKS is either AKS BYO CNI as
    described in :ref:`k8s_install_quick` or `Azure CNI Powered by Cilium
    <https://aka.ms/aks/cilium-dataplane>`__. This guide provides alternative
    instructions to run Cilium with Azure CNI in a chaining configuration.
    This is the legacy way of running Azure CNI with Cilium, as Azure IPAM is
    legacy; for more information see :ref:`ipam_azure`.

.. include:: cni-chaining-limitations.rst

.. admonition:: Video
    :class: attention

    If you'd like a video explanation of the Azure CNI Powered by Cilium,
    check out `eCHO episode 70: Azure CNI Powered by Cilium
    <https://www.youtube.com/watch?v=8it8Hm2F_GM>`__.

This guide explains how to set up Cilium in combination with Azure CNI in a
chaining configuration. In this hybrid mode, the Azure CNI plugin is
responsible for setting up the virtual network devices as well as address
allocation (IPAM). After the initial networking is set up, the Cilium CNI
plugin is called to attach eBPF programs to the network devices set up by
Azure CNI to enforce network policies, perform load-balancing, and encryption.

Create an AKS + Cilium CNI configuration
========================================

Create a ``chaining.yaml`` file based on the following template to specify the
desired CNI chaining configuration. This :term:`ConfigMap` will be installed as
the CNI configuration file on all nodes and defines the chaining configuration.
In the example below, the Azure CNI, portmap, and Cilium are chained together.

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cni-configuration
      namespace: kube-system
    data:
      cni-config: |-
        {
          "cniVersion": "0.3.0",
          "name": "azure",
          "plugins": [
            {
              "type": "azure-vnet",
              "mode": "transparent",
              "ipam": {
                "type": "azure-vnet-ipam"
              }
            },
            {
              "type": "portmap",
              "capabilities": {"portMappings": true},
              "snat": true
            },
            {
              "name": "cilium",
              "type": "cilium-cni"
            }
          ]
        }

Deploy the :term:`ConfigMap`:

.. code-block:: shell-session

    kubectl apply -f chaining.yaml

Deploy Cilium
=============

.. include:: k8s-install-download-release.rst

Deploy Cilium release via Helm:

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set cni.chainingMode=generic-veth \\
      --set cni.customConf=true \\
      --set cni.exclusive=false \\
      --set nodeinit.enabled=true \\
      --set cni.configMap=cni-configuration \\
      --set routingMode=native \\
      --set enableIPv4Masquerade=false \\
      --set endpointRoutes.enabled=true

This will create both the main cilium daemonset, as well as the
cilium-node-init daemonset, which handles tasks like mounting the eBPF
filesystem and updating the existing Azure CNI plugin to run in 'transparent'
mode.

.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
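Before restarting pods, it can help to confirm that the chaining configuration
was actually picked up. The following is a rough check, assuming the resource
names used in this guide (the ``cni-configuration`` ConfigMap and the
``cilium`` / ``cilium-node-init`` DaemonSets created by the Helm install).

.. code-block:: shell-session

    $ # The ConfigMap holding the chained CNI configuration should exist.
    $ kubectl -n kube-system get configmap cni-configuration

    $ # Both DaemonSets created by the Helm install should become ready.
    $ kubectl -n kube-system rollout status daemonset/cilium
    $ kubectl -n kube-system rollout status daemonset/cilium-node-init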
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k8s_install_helm:

***********************
Installation using Helm
***********************

This guide will show you how to install Cilium using `Helm <https://helm.sh/>`_.
This involves a couple of additional steps compared to the
:ref:`k8s_quick_install` and requires you to manually select the best datapath
and IPAM mode for your particular environment.

Install Cilium
==============

.. include:: k8s-install-download-release.rst

.. tabs::

    .. group-tab:: Generic

        These are the generic instructions on how to install Cilium into any
        Kubernetes cluster using the default configuration options below.
        Please see the other tabs for distribution/platform specific
        instructions which also list the ideal default configuration for
        particular platforms.

        **Default Configuration:**

        =============== =============== ==============
        Datapath        IPAM            Datastore
        =============== =============== ==============
        Encapsulation   Cluster Pool    Kubernetes CRD
        =============== =============== ==============

        .. include:: requirements-generic.rst

        **Install Cilium:**

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace kube-system

    .. group-tab:: GKE

        .. include:: requirements-gke.rst

        **Install Cilium:**

        Extract the Cluster CIDR to enable native-routing:

        .. code-block:: shell-session

            NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')"
            echo $NATIVE_CIDR

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace kube-system \\
              --set nodeinit.enabled=true \\
              --set nodeinit.reconfigureKubelet=true \\
              --set nodeinit.removeCbrBridge=true \\
              --set cni.binPath=/home/kubernetes/bin \\
              --set gke.enabled=true \\
              --set ipam.mode=kubernetes \\
              --set ipv4NativeRoutingCIDR=$NATIVE_CIDR

        The NodeInit DaemonSet is required to prepare the GKE nodes as nodes
        are added to the cluster. The NodeInit DaemonSet will perform the
        following actions:

        * Reconfigure kubelet to run in CNI mode
        * Mount the eBPF filesystem

    .. group-tab:: AKS

        .. include:: ../installation/requirements-aks.rst

        **Install Cilium:**

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace kube-system \\
              --set aksbyocni.enabled=true \\
              --set nodeinit.enabled=true

        .. note::

            Installing Cilium via Helm is supported only for AKS BYOCNI
            clusters and not for Azure CNI Powered by Cilium clusters.

    .. group-tab:: EKS

        .. include:: requirements-eks.rst

        **Patch VPC CNI (aws-node DaemonSet)**

        Cilium will manage ENIs instead of VPC CNI, so the ``aws-node``
        DaemonSet has to be patched to prevent conflicting behavior.

        .. code-block:: shell-session

            kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}'

        **Install Cilium:**

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace kube-system \\
              --set eni.enabled=true \\
              --set ipam.mode=eni \\
              --set egressMasqueradeInterfaces=eth+ \\
              --set routingMode=native

        .. note::

            This helm command sets ``eni.enabled=true`` and
            ``routingMode=native``, meaning that Cilium will allocate a
            fully-routable AWS ENI IP address for each pod, similar to the
            behavior of the `Amazon VPC CNI plugin
            <https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html>`_.

            This mode depends on a set of :ref:`ec2privileges` from the EC2
            API.

            Cilium can alternatively run in EKS using an overlay mode that
            gives pods non-VPC-routable IPs. This allows running more pods per
            Kubernetes worker node than the ENI limit but includes the
            following caveats:

            1. Pod connectivity to resources outside the cluster (e.g., VMs in
               the VPC or AWS managed services) is masqueraded (i.e., SNAT) by
               Cilium to use the VPC IP address of the Kubernetes worker node.
            2. The EKS API Server is unable to route packets to the overlay
               network. This implies that any `webhook
               <https://kubernetes.io/docs/reference/access-authn-authz/webhook/>`_
               which needs to be accessed must be host networked or exposed
               through a service or ingress.

            To set up Cilium overlay mode, follow the steps below:

            1. Excluding the lines for ``eni.enabled=true``, ``ipam.mode=eni``
               and ``routingMode=native`` from the helm command will configure
               Cilium to use overlay routing mode (which is the helm default).
            2. Flush iptables rules added by VPC CNI:

               .. code-block:: shell-session

                   iptables -t nat -F AWS-SNAT-CHAIN-0 \\
                       && iptables -t nat -F AWS-SNAT-CHAIN-1 \\
                       && iptables -t nat -F AWS-CONNMARK-CHAIN-0 \\
                       && iptables -t nat -F AWS-CONNMARK-CHAIN-1

        Some Linux distributions use a different interface naming convention.
        If you use masquerading with the option
        ``egressMasqueradeInterfaces=eth+``, remember to replace ``eth+`` with
        the proper interface name. For reference, Amazon Linux 2 uses ``eth+``,
        whereas Amazon Linux 2023 uses ``ens+``. Mixed node clusters are not
        supported currently.

    .. group-tab:: OpenShift

        .. include:: requirements-openshift.rst

        **Install Cilium:**

        Cilium is a `Certified OpenShift CNI Plugin
        <https://access.redhat.com/articles/5436171>`_ and is best installed
        when an OpenShift cluster is created using the OpenShift installer.
        Please refer to :ref:`k8s_install_openshift_okd` for more information.

    .. group-tab:: RKE

        .. include:: requirements-rke.rst

    .. group-tab:: k3s

        .. include:: requirements-k3s.rst

        **Install Cilium:**

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace $CILIUM_NAMESPACE \\
              --set operator.replicas=1

    .. group-tab:: Rancher Desktop

        **Configure Rancher Desktop:**

        To install Cilium on `Rancher Desktop <https://rancherdesktop.io>`_,
        perform the following steps:

        .. include:: rancher-desktop-configure.rst

        **Install Cilium:**

        .. parsed-literal::

            helm install cilium |CHART_RELEASE| \\
              --namespace $CILIUM_NAMESPACE \\
              --set operator.replicas=1 \\
              --set cni.binPath=/usr/libexec/cni

    .. group-tab:: Talos Linux

        To install Cilium on `Talos Linux <https://www.talos.dev/>`_, perform
        the following steps.

        .. include:: k8s-install-talos-linux.rst

    .. group-tab:: Alibaba ACK

        .. include:: ../installation/alibabacloud-eni.rst

.. admonition:: Video
    :class: attention

    If you'd like to learn more about Cilium Helm values, check out `eCHO
    episode 117: A Tour of the Cilium Helm Values
    <https://www.youtube.com/watch?v=ni0Uw4WLHYo>`__.

.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
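Once the release is installed, Helm itself can be used to review or adjust the
configuration later. The sketch below assumes the ``cilium`` Helm repository
was added as part of the download step referenced above, that the release name
and namespace match the commands in this guide, and that the value shown is
only an example.

.. code-block:: shell-session

    $ # Show the values the cilium release is currently deployed with.
    $ helm -n kube-system get values cilium

    $ # Change a single setting without repeating all other values,
    $ # e.g. scaling the operator (example value only).
    $ helm -n kube-system upgrade cilium cilium/cilium --reuse-values \
          --set operator.replicas=2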
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _kops_guide:
.. _k8s_install_kops:

***********************
Installation using Kops
***********************

As of the kops 1.9 release, Cilium can be plugged into kops-deployed clusters
as the CNI plugin. This guide provides steps to create a Kubernetes cluster on
AWS using kops and Cilium as the CNI plugin. Note that the kops deployment will
automate several deployment features in AWS by default, including AutoScaling,
Volumes, VPCs, etc.

Kops offers several out-of-the-box configurations of Cilium including
:ref:`kubeproxy-free`, :ref:`ipam_eni`, and a dedicated etcd cluster for
Cilium. This guide will just go through a basic setup.

Prerequisites
=============

* `aws cli <https://aws.amazon.com/cli/>`_
* `kubectl <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`_
* aws account with permissions:

  * AmazonEC2FullAccess
  * AmazonRoute53FullAccess
  * AmazonS3FullAccess
  * IAMFullAccess
  * AmazonVPCFullAccess

Installing kops
===============

.. tabs::

    .. group-tab:: Linux

        .. code-block:: shell-session

            curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
            chmod +x kops-linux-amd64
            sudo mv kops-linux-amd64 /usr/local/bin/kops

    .. group-tab:: MacOS

        .. code-block:: shell-session

            brew update && brew install kops

Setting up IAM Group and User
=============================

Assuming you have all the prerequisites, run the following commands to create
the kops user and group:

.. code-block:: shell-session

    $ # Create IAM group named kops and grant access
    $ aws iam create-group --group-name kops
    $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
    $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
    $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
    $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
    $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
    $ aws iam create-user --user-name kops
    $ aws iam add-user-to-group --user-name kops --group-name kops
    $ aws iam create-access-key --user-name kops

kops requires the creation of a dedicated S3 bucket in order to store the state
and representation of the cluster. You will need to change the bucket name and
provide your unique bucket name (for example, a reversed FQDN with a short
description of the cluster appended). Also make sure to use the region where
you will be deploying the cluster.

.. code-block:: shell-session

    $ aws s3api create-bucket --bucket prefix-example-com-state-store --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
    $ export KOPS_STATE_STORE=s3://prefix-example-com-state-store

The above steps are sufficient for getting a working cluster installed. Please
consult the `kops aws documentation
<https://kops.sigs.k8s.io/getting_started/install/>`_ for more detailed setup
instructions.

Cilium Prerequisites
====================

* Ensure the :ref:`admin_system_reqs` are met, particularly the Linux kernel
  and key-value store versions. The default AMI satisfies the minimum kernel
  version required by Cilium, which is what we will use in this guide.
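If you want to double-check the kernel shipped with the AMI you plan to use, a
trivial sketch (assuming SSH access to an instance launched from that AMI, and
a distribution that ships its kernel config under ``/boot``):

.. code-block:: shell-session

    $ # The reported kernel version should satisfy the Cilium system requirements.
    $ uname -r

    $ # Optionally confirm eBPF-related options are enabled in the kernel config.
    $ grep BPF /boot/config-$(uname -r)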
Creating a Cluster
==================

* Note that you will need to specify the ``--master-zones`` and ``--zones`` for
  creating the master and worker nodes. The number of master zones should be
  odd (1, 3, ...) for HA. For simplicity, you can just use 1 region.
* To keep things simple when following this guide, we will use a gossip-based
  cluster. This means you do not have to create a hosted zone upfront. The
  cluster ``NAME`` variable must end with ``k8s.local`` to use the gossip
  protocol. If creating multiple clusters using the same kops user, then make
  the cluster name unique by adding a prefix such as ``com-company-emailid-``.

.. code-block:: shell-session

    $ export NAME=com-company-emailid-cilium.k8s.local
    $ kops create cluster --state=${KOPS_STATE_STORE} --node-count 3 --topology private --master-zones us-west-2a,us-west-2b,us-west-2c --zones us-west-2a,us-west-2b,us-west-2c --networking cilium --cloud-labels "Team=Dev,Owner=Admin" ${NAME} --yes

You may be prompted to create an SSH public-private key pair.

.. code-block:: shell-session

    $ ssh-keygen

(Please see :ref:`appendix_kops`)

.. include:: k8s-install-validate.rst

.. _appendix_kops:

Deleting a Cluster
==================

To undo the dependencies and other deployment features in AWS from the kops
cluster creation, use kops to destroy a cluster *immediately* with the
parameter ``--yes``:

.. code-block:: shell-session

    $ kops delete cluster ${NAME} --yes

Further reading on using Cilium with Kops
=========================================

* See the `kops networking documentation
  <https://kops.sigs.k8s.io/networking/cilium/>`_ for more information on the
  configuration options kops offers.
* See the `kops cluster spec documentation
  <https://pkg.go.dev/k8s.io/kops/pkg/apis/kops?tab=doc#CiliumNetworkingSpec>`_
  for a comprehensive list of all the options.

Appendix: Details of kops flags used in cluster creation
=========================================================

The following section explains all the flags used in the create cluster
command.

* ``--state=${KOPS_STATE_STORE}`` : kops uses an S3 bucket to store the state
  and representation of your cluster.
* ``--node-count 3`` : Number of worker nodes in the Kubernetes cluster.
* ``--topology private`` : The cluster will be created with private topology,
  meaning all masters/nodes will be launched in a private subnet in the VPC.
* ``--master-zones eu-west-1a,eu-west-1b,eu-west-1c`` : The 3 zones ensure the
  HA of master nodes, each belonging to a different Availability Zone.
* ``--zones eu-west-1a,eu-west-1b,eu-west-1c`` : Zones where the worker nodes
  will be deployed.
* ``--networking cilium`` : Networking CNI plugin to be used - cilium. You can
  also use ``cilium-etcd``, which will use a dedicated etcd cluster as the
  key/value store instead of CRDs.
* ``--cloud-labels "Team=Dev,Owner=Admin"`` : Labels for your cluster that will
  be applied to your instances.
* ``${NAME}`` : Name of the cluster. Make sure the name ends with ``k8s.local``
  for a gossip-based cluster.
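As a complement to the validation step referenced above, kops can also report
on cluster health directly. The following is a small sketch using standard
kops and kubectl commands with the variables defined earlier in this guide:

.. code-block:: shell-session

    $ # Wait until the kops-managed control plane and nodes report as healthy.
    $ kops validate cluster --state=${KOPS_STATE_STORE} --name ${NAME} --wait 10m

    $ # Confirm the Cilium DaemonSet rolled out on every node.
    $ kubectl -n kube-system get ds cilium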
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k8s_install_kubespray:

****************************
Installation using Kubespray
****************************

This guide walks through using Kubespray to create an AWS Kubernetes cluster
running Cilium as the CNI.

The guide uses:

- Kubespray v2.6.0
- Latest `Cilium released version`_ (instructions for using the version are
  mentioned below)

Please consult `Kubespray Prerequisites
<https://github.com/kubernetes-sigs/kubespray#requirements>`__ and Cilium
:ref:`admin_system_reqs`.

.. _Cilium released version: `latest released Cilium version`_

Installing Kubespray
====================

.. code-block:: shell-session

    $ git clone --branch v2.6.0 https://github.com/kubernetes-sigs/kubespray

Install dependencies from ``requirements.txt``:

.. code-block:: shell-session

    $ cd kubespray
    $ sudo pip install -r requirements.txt

Infrastructure Provisioning
===========================

We will use Terraform for provisioning AWS infrastructure.

Configure AWS credentials
-------------------------

Export the variables for your AWS credentials:

.. code-block:: shell-session

    export AWS_ACCESS_KEY_ID="www"
    export AWS_SECRET_ACCESS_KEY="xxx"
    export AWS_SSH_KEY_NAME="yyy"
    export AWS_DEFAULT_REGION="zzz"

Configure Terraform Variables
-----------------------------

We will start by specifying the infrastructure needed for the Kubernetes
cluster.

.. code-block:: shell-session

    $ cd contrib/terraform/aws
    $ cp contrib/terraform/aws/terraform.tfvars.example terraform.tfvars

Open the file and change any defaults, particularly the number of master, etcd,
and worker nodes. You can change the master and etcd number to 1 for
deployments that don't need high availability. By default, this tutorial will
create:

- VPC with 2 public and private subnets
- Bastion Hosts and NAT Gateways in the Public Subnet
- Three of each (masters, etcd, and worker nodes) in the Private Subnet
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the
  internet
- Terraform scripts using ``CoreOS`` as base image

Example ``terraform.tfvars`` file:

.. code-block:: bash

    #Global Vars
    aws_cluster_name = "kubespray"

    #VPC Vars
    aws_vpc_cidr_block = "XXX.XXX.192.0/18"
    aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
    aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]

    #Bastion Host
    aws_bastion_size = "t2.medium"

    #Kubernetes Cluster
    aws_kube_master_num = 3
    aws_kube_master_size = "t2.medium"
    aws_etcd_num = 3
    aws_etcd_size = "t2.medium"
    aws_kube_worker_num = 3
    aws_kube_worker_size = "t2.medium"

    #Settings AWS ELB
    aws_elb_api_port = 6443
    k8s_secure_api_port = 6443
    kube_insecure_apiserver_address = "0.0.0.0"

Apply the configuration
-----------------------

Run ``terraform init`` to initialize the following modules:

- ``module.aws-vpc``
- ``module.aws-elb``
- ``module.aws-iam``

.. code-block:: shell-session

    $ terraform init

Once initialized, execute:

.. code-block:: shell-session

    $ terraform plan -out=aws_kubespray_plan

This will generate a file, ``aws_kubespray_plan``, depicting an execution plan
of the infrastructure that will be created on AWS. To apply, execute:

.. code-block:: shell-session

    $ terraform init
    $ terraform apply "aws_kubespray_plan"

Terraform automatically creates an Ansible Inventory file at
``inventory/hosts``.
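Before moving on to the playbook, it can be worth confirming that the generated
inventory is usable. The following is a small sketch; it assumes the inventory
path shown above and the same SSH key placeholder used later in this guide, and
connectivity checks may additionally require SSH access through the bastion.

.. code-block:: shell-session

    $ # Inspect the inventory generated by Terraform.
    $ cat ./inventory/hosts

    $ # Optionally verify Ansible can reach the hosts before running the playbook.
    $ ansible -i ./inventory/hosts all -m ping \
          -e ansible_user=core -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>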
Installing Kubernetes cluster with Cilium as CNI
================================================

Kubespray uses Ansible as its substrate for provisioning and orchestration.
Once the infrastructure is created, you can run the Ansible playbook to install
Kubernetes and all the required dependencies. Execute the below command in the
kubespray clone repo, providing the correct path of the AWS EC2 ssh private key
in ``ansible_ssh_private_key_file=<path to EC2 SSH private key file>``.

We recommend using the `latest released Cilium version`_ by passing the
variable when running the ``ansible-playbook`` command. For example, you could
add the following flag to the command below: ``-e cilium_version=v1.11.0``.

.. code-block:: shell-session

    $ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>

.. _latest released Cilium version: https://github.com/cilium/cilium/releases

If you are interested in configuring your Kubernetes cluster setup, you should
consider copying the sample inventory. Then, you can edit the variables in the
relevant file in the ``group_vars`` directory.

.. code-block:: shell-session

    $ cp -r inventory/sample inventory/my-inventory
    $ cp ./inventory/hosts ./inventory/my-inventory/hosts
    $ echo 'cilium_version: "v1.11.0"' >> ./inventory/my-inventory/group_vars/k8s_cluster/k8s-net-cilium.yml
    $ ansible-playbook -i ./inventory/my-inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>

Validate Cluster
================

To check if the cluster was created successfully, ssh into the bastion host
with the user ``core``.

.. code-block:: shell-session

    $ # Get information about the bastion host
    $ cat ssh-bastion.conf
    $ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host

Execute the commands below from the bastion host. If ``kubectl`` isn't
installed on the bastion host, you can log in to the master node to test the
below commands. You may need to copy the private key to the bastion host to
access the master node.

.. include:: k8s-install-validate.rst

Delete Cluster
==============

.. code-block:: shell-session

    $ cd contrib/terraform/aws
    $ terraform destroy
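To confirm the teardown completed, the Terraform state can be inspected from
the same directory. This is just a convenience check using standard Terraform
commands:

.. code-block:: shell-session

    $ # An empty state after the destroy indicates all AWS resources were removed.
    $ terraform state list
    $ terraform show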
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _chaining_aws_cni: ****************** AWS VPC CNI plugin ****************** This guide explains how to set up Cilium in combination with the AWS VPC CNI plugin. In this hybrid mode, the AWS VPC CNI plugin is responsible for setting up the virtual network devices as well as for IP address management (IPAM) via ENIs. After the initial networking is setup for a given pod, the Cilium CNI plugin is called to attach eBPF programs to the network devices set up by the AWS VPC CNI plugin in order to enforce network policies, perform load-balancing and provide encryption. .. image:: aws-cilium-architecture.png .. include:: cni-chaining-limitations.rst .. admonition:: Video :class: attention If you require advanced features of Cilium, consider migrating fully to Cilium. To help you with the process, you can watch two Principal Engineers at Meltwater talk about `how they migrated Meltwater's production Kubernetes clusters - from the AWS VPC CNI plugin to Cilium <https://www.youtube.com/watch?v=w6S6baRHHu8&list=PLDg_GiBbAx-kDXqDYimwytMLh2kAHyMPd&t=182s>`__. .. important:: Please ensure that you are running version `1.11.2 <https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.11.2>`_ or newer of the AWS VPC CNI plugin to guarantee compatibility with Cilium. .. code-block:: shell-session $ kubectl -n kube-system get ds/aws-node -o json | jq -r '.spec.template.spec.containers[0].image' 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.11.2 If you are running an older version, as in the above example, you can upgrade it with: .. code-block:: shell-session $ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.11/config/master/aws-k8s-cni.yaml .. image:: aws-cni-architecture.png Setting up a cluster on AWS =========================== Follow the instructions in the :ref:`k8s_install_quick` guide to set up an EKS cluster, or use any other method of your preference to set up a Kubernetes cluster on AWS. Ensure that the `aws-vpc-cni-k8s <https://github.com/aws/amazon-vpc-cni-k8s>`_ plugin is installed — which will already be the case if you have created an EKS cluster. Also, ensure the version of the plugin is up-to-date as per the above. .. include:: k8s-install-download-release.rst Deploy Cilium via Helm: .. parsed-literal:: helm install cilium |CHART_RELEASE| \\ --namespace kube-system \\ --set cni.chainingMode=aws-cni \\ --set cni.exclusive=false \\ --set enableIPv4Masquerade=false \\ --set routingMode=native \\ --set endpointRoutes.enabled=true This will enable chaining with the AWS VPC CNI plugin. It will also disable tunneling, as it's not required since ENI IP addresses can be directly routed in the VPC. For the same reason, masquerading can be disabled as well. Restart existing pods ===================== The new CNI chaining configuration *will not* apply to any pod that is already running in the cluster. Existing pods will be reachable, and Cilium will load-balance *to* them, but not *from* them. Policy enforcement will also not be applied. For these reasons, you must restart these pods so that the chaining configuration can be applied to them. The following command can be used to check which pods need to be restarted: .. 
code-block:: bash for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do ceps=$(kubectl -n "${ns}" get cep \ -o jsonpath='{.items[*].metadata.name}') pods=$(kubectl -n "${ns}" get pod \ -o custom-columns=NAME:.metadata.name,NETWORK:.spec.hostNetwork \ | grep -E '\s(<none>|false)' | awk '{print $1}' | tr '\n' ' ') ncep=$(echo "${pods} ${ceps}" | tr ' ' '\n' | sort | uniq -u | paste -s -d ' ' -) for pod in $(echo $ncep); do echo "${ns}/${pod}"; done done .. include:: k8s-install-validate.rst Advanced ======== Enabling security groups for pods (EKS) --------------------------------------- Cilium can be used alongside the `security groups for pods <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_ feature of EKS in supported clusters when running in chaining mode. Follow the instructions below to enable this feature: .. important:: The following guide requires `jq <https://stedolan.github.io/jq/>`_ and the `AWS CLI <https://aws.amazon.com/cli/>`_ to be installed and configured. Make sure that the ``AmazonEKSVPCResourceController`` managed policy is attached to the IAM role associated with the EKS cluster: .. code-block:: shell-session export EKS_CLUSTER_NAME="my-eks-cluster" # Change accordingly export EKS_CLUSTER_ROLE_NAME=$(aws eks describe-cluster \ --name "${EKS_CLUSTER_NAME}" \ | jq -r '.cluster.roleArn' | awk -F/ '{print $NF}') aws iam attach-role-policy \ --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \ --role-name "${EKS_CLUSTER_ROLE_NAME}" Then, as mentioned above, make sure that the version of the AWS VPC CNI plugin running in the cluster is up-to-date: .. code-block:: shell-session kubectl -n kube-system get ds/aws-node \ -o jsonpath='{.spec.template.spec.containers[0].image}' 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.10 Next, patch the ``kube-system/aws-node`` DaemonSet in order to enable security groups for pods: .. code-block:: shell-session kubectl -n kube-system patch ds aws-node \ -p '{"spec":{"template":{"spec":{"initContainers":[{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}],"containers":[{"env":[{"name":"ENABLE_POD_ENI","value":"true"}],"name":"aws-node"}]}}}}' kubectl -n kube-system rollout status ds aws-node After the rollout is complete, all nodes in the cluster should have the ``vpc.amazonaws.com/has-trunk-attached`` label set to ``true``: .. code-block:: shell-session kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached NAME STATUS ROLES AGE VERSION HAS-TRUNK-ATTACHED ip-192-168-111-169.eu-west-2.compute.internal Ready <none> 22m v1.19.6-eks-49a6c0 true ip-192-168-129-175.eu-west-2.compute.internal Ready <none> 22m v1.19.6-eks-49a6c0 true From this moment on, everything should be in place. For details on how to actually associate security groups to pods, please refer to the `official documentation <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_. .. include:: next-steps.rst
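For illustration only, security groups are typically associated with pods through a ``SecurityGroupPolicy`` resource handled by the AWS VPC resource controller. The API group, security group ID, namespace, and pod selector below are placeholders based on the EKS feature documentation linked above, which remains the authoritative reference:

.. code-block:: yaml

    # Illustrative sketch: attach the placeholder security group
    # sg-0123456789abcdef0 to pods labeled role=backend.
    apiVersion: vpcresources.k8s.aws/v1beta1
    kind: SecurityGroupPolicy
    metadata:
      name: backend-sgp
      namespace: default
    spec:
      podSelector:
        matchLabels:
          role: backend
      securityGroups:
        groupIds:
          - sg-0123456789abcdef0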
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _cni_migration: ************************************* Migrating a cluster to Cilium ************************************* Cilium can be used to migrate from another cni. Running clusters can be migrated on a node-by-node basis, without disrupting existing traffic or requiring a complete cluster outage or rebuild depending on the complexity of the migration case. This document outlines how migrations with Cilium work. You will have a good understanding of the basic requirements, as well as see an example migration which you can practice using :ref:`Kind <gs_kind>`. Background ========== When the kubelet creates a Pod's Sandbox, the installed CNI, as configured in ``/etc/cni/net.d/``, is called. The cni will handle the networking for a pod - including allocating an ip address, creating & configuring a network interface, and (potentially) establishing an overlay network. The Pod's network configuration shares the same life cycle as the PodSandbox. In the case of migration, we typically reconfigure ``/etc/cni/net.d/`` to point to Cilium. However, any existing pods will still have been configured by the old network plugin and any new pods will be configured by the newer CNI. To complete the migration all Pods on the cluster that are configured by the old cni must be recycled in order to be a member of the new CNI. A naive approach to migrating a CNI would be to reconfigure all nodes with a new CNI and then gradually restart each node in the cluster, thus replacing the CNI when the node is brought back up and ensuring that all pods are part of the new CNI. This simple migration, while effective, comes at the cost of disrupting cluster connectivity during the rollout. Unmigrated and migrated nodes would be split in to two "islands" of connectivity, and pods would be randomly unable to reach one-another until the migration is complete. Migration via dual overlays --------------------------- Instead, Cilium supports a *hybrid* mode, where two separate overlays are established across the cluster. While pods on a given node can only be attached to one network, they have access to both Cilium and non-Cilium pods while the migration is taking place. As long as Cilium and the existing networking provider use a separate IP range, the Linux routing table takes care of separating traffic. In this document we will discuss a model for live migrating between two deployed CNI implementations. This will have the benefit of reducing downtime of nodes and workloads and ensuring that workloads on both configured CNIs can communicate during migration. For live migration to work, Cilium will be installed with a separate CIDR range and encapsulation port than that of the currently installed CNI. As long as Cilium and the existing CNI use a separate IP range, the Linux routing table takes care of separating traffic. Requirements ============ Live migration requires the following: - A new, distinct Cluster CIDR for Cilium to use - Use of the :ref:`Cluster Pool IPAM mode<ipam_crd_cluster_pool>` - A distinct overlay, either protocol or port - An existing network plugin that uses the Linux routing stack, such as Flannel, Calico, or AWS-CNI Limitations =========== Currently, Cilium migration has not been tested with: - BGP-based routing - Changing IP families (e.g. 
from IPv4 to IPv6) - Migrating from Cilium in chained mode - An existing NetworkPolicy provider During migration, Cilium's NetworkPolicy and CiliumNetworkPolicy enforcement will be disabled. Otherwise, traffic from non-Cilium pods may be incorrectly dropped. Once the migration process is complete, policy enforcement can be re-enabled. If there is an existing NetworkPolicy provider, you may wish to temporarily delete all NetworkPolicies before proceeding. It is strongly recommended to install Cilium using the :ref:`cluster-pool <ipam_crd_cluster_pool>` IPAM allocator. This provides the strongest assurance that there will be no IP collisions. .. warning:: Migration is highly dependent on the exact configuration of existing clusters. It is, thus, strongly recommended to perform a trial migration on a test or lab cluster. Overview ======== The migration process utilizes the :ref:`per-node configuration<per-node-configuration>` feature to selectively enable Cilium CNI. This allows for a controlled rollout of Cilium without disrupting existing workloads. Cilium will be installed, first, in a mode where it establishes an overlay but does not provide CNI networking for any pods. Then, individual nodes will be migrated. In summary, the process looks like: 1. Install cilium in "secondary" mode 2. Cordon, drain, migrate, and reboot each node 3. Remove the existing network provider 4. (Optional) Reboot each node again Migration procedure =================== Preparation ----------- - Optional: Create a :ref:`Kind <gs_kind>` cluster and install `Flannel <https://github.com/flannel-io/flannel>`_ on it. .. parsed-literal:: $ cat <<EOF > kind-config.yaml apiVersion: kind.x-k8s.io/v1alpha4 kind: Cluster nodes: - role: control-plane - role: worker - role: worker networking: disableDefaultCNI: true EOF $ kind create cluster --config=kind-config.yaml $ kubectl apply -n kube-system --server-side -f \ |SCM_WEB|\/examples/misc/migration/install-reference-cni-plugins.yaml $ kubectl apply --server-side -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml $ kubectl wait --for=condition=Ready nodes --all - Optional: Monitor connectivity. You may wish to install a tool such as `goldpinger <https://github.com/bloomberg/goldpinger>`_ to detect any possible connectivity issues. 1. Select a **new** CIDR for pods. It must be distinct from all other CIDRs in use. For Kind clusters, the default is ``10.244.0.0/16``. So, for this example, we will use ``10.245.0.0/16``. 2. Select a **distinct** encapsulation port. For example, if the existing cluster is using VXLAN, then you should either use GENEVE or configure Cilium to use VXLAN with a different port. For this example, we will use VXLAN with a non-default port of 8473. 3. Create a helm ``values-migration.yaml`` file based on the following example. Be sure to fill in the CIDR you selected in step 1. .. 
code-block:: yaml operator: unmanagedPodWatcher: restart: false # Migration: Don't restart unmigrated pods routingMode: tunnel # Migration: Optional: default is tunneling, configure as needed tunnelProtocol: vxlan # Migration: Optional: default is VXLAN, configure as needed tunnelPort: 8473 # Migration: Optional, change only if both networks use the same port by default cni: customConf: true # Migration: Don't install a CNI configuration file uninstall: false # Migration: Don't remove CNI configuration on shutdown ipam: mode: "cluster-pool" operator: clusterPoolIPv4PodCIDRList: ["10.245.0.0/16"] # Migration: Ensure this is distinct and unused policyEnforcementMode: "never" # Migration: Disable policy enforcement bpf: hostLegacyRouting: true # Migration: Allow for routing between Cilium and the existing overlay 4. Configure any additional Cilium Helm values. Cilium supports a number of :ref:`Helm configuration options<helm_reference>`. You may choose to auto-detect typical ones using the :ref:`cilium-cli <install_cilium_cli>`. This will consume the template and auto-detect any other relevant Helm values. Review these values for your particular installation. .. parsed-literal:: $ cilium install |CHART_VERSION| --values values-migration.yaml --dry-run-helm-values > values-initial.yaml $ cat values-initial.yaml 5. Install cilium using :ref:`helm <k8s_install_helm>`. .. code-block:: shell-session $ helm repo add cilium https://helm.cilium.io/ $ helm install cilium cilium/cilium --namespace kube-system --values values-initial.yaml At this point, you should have a cluster with Cilium installed and an overlay established, but no pods managed by Cilium itself. You can verify this with the ``cilium`` command. .. code-block:: shell-session $ cilium status --wait ... Cluster Pods: 0/3 managed by Cilium 6. Create a :ref:`per-node config<per-node-configuration>` that will instruct Cilium to "take over" CNI networking on the node. Initially, this will apply to no nodes; you will roll it out gradually via the migration process. .. code-block:: shell-session cat <<EOF | kubectl apply --server-side -f - apiVersion: cilium.io/v2 kind: CiliumNodeConfig metadata: namespace: kube-system name: cilium-default spec: nodeSelector: matchLabels: io.cilium.migration/cilium-default: "true" defaults: write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist custom-cni-conf: "false" cni-chaining-mode: "none" cni-exclusive: "true" EOF Migration --------- At this point, you are ready to begin the migration process. The basic flow is: Select a node to be migrated. It is not recommended to start with a control-plane node. .. code-block:: shell-session $ NODE="kind-worker" # for the Kind example 1. Cordon and, optionally, drain the node in question. .. code-block:: shell-session $ kubectl cordon $NODE $ kubectl drain --ignore-daemonsets $NODE Draining is not strictly required, but it is recommended. Otherwise pods will encounter a brief interruption while the node is rebooted. 2. Label the node. This causes the ``CiliumNodeConfig`` to apply to this node. .. code-block:: shell-session $ kubectl label node $NODE --overwrite "io.cilium.migration/cilium-default=true" 3. Restart Cilium. This will cause it to write its CNI configuration file. .. code-block:: shell-session $ kubectl -n kube-system delete pod --field-selector spec.nodeName=$NODE -l k8s-app=cilium $ kubectl -n kube-system rollout status ds/cilium -w 4. Reboot the node. If using kind, do so with docker: .. code-block:: shell-session docker restart $NODE 5. 
Validate that the node has been successfully migrated. .. code-block:: shell-session $ cilium status --wait $ kubectl get -o wide node $NODE $ kubectl -n kube-system run --attach --rm --restart=Never verify-network \ --overrides='{"spec": {"nodeName": "'$NODE'", "tolerations": [{"operator": "Exists"}]}}' \ --image ghcr.io/nicolaka/netshoot:v0.8 -- /bin/bash -c 'ip -br addr && curl -s -k https://$KUBERNETES_SERVICE_HOST/healthz && echo' Ensure the IP address of the pod is in the Cilium CIDR(s) supplied above and that the apiserver is reachable. 6. Uncordon the node. .. code-block:: shell-session $ kubectl uncordon $NODE Once you are satisfied everything has been migrated successfully, select another unmigrated node in the cluster and repeat these steps. Post-migration -------------- Perform these steps once the cluster is fully migrated. 1. Ensure Cilium is healthy and that all pods have been migrated: .. code-block:: shell-session $ cilium status 2. Update the Cilium configuration: - Cilium should be the primary CNI - NetworkPolicy should be enforced - The Operator can restart unmanaged pods - **Optional**: use :ref:`eBPF_Host_Routing`. Enabling this will cause a short connectivity interruption on each node as the daemon restarts, but improves networking performance. You can do this manually, or via the ``cilium`` tool (this will not apply changes to the cluster): .. parsed-literal:: $ cilium install |CHART_VERSION| --values values-initial.yaml --dry-run-helm-values \ --set operator.unmanagedPodWatcher.restart=true --set cni.customConf=false \ --set policyEnforcementMode=default \ --set bpf.hostLegacyRouting=false > values-final.yaml # optional, can cause brief interruptions $ diff values-initial.yaml values-final.yaml Then, apply the changes to the cluster: .. code-block:: shell-session $ helm upgrade --namespace kube-system cilium cilium/cilium --values values-final.yaml $ kubectl -n kube-system rollout restart daemonset cilium $ cilium status --wait 3. Delete the per-node configuration: .. code-block:: shell-session $ kubectl delete -n kube-system ciliumnodeconfig cilium-default 4. Delete the previous network plugin. At this point, all pods should be using Cilium for networking. You can easily verify this with ``cilium status``. It is now safe to delete the previous network plugin from the cluster. Most network plugins leave behind some resources, e.g. iptables rules and interfaces. These will be cleaned up when the node next reboots. If desired, you may perform a rolling reboot again.
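For the Kind-based example used throughout this guide, such a final rolling reboot can be sketched as follows. The node names are the Kind defaults for the example cluster; production clusters should substitute their own drain options and reboot mechanism:

.. code-block:: bash

    for NODE in kind-control-plane kind-worker kind-worker2; do
      kubectl cordon "$NODE"
      kubectl drain --ignore-daemonsets "$NODE"
      docker restart "$NODE"   # Kind only: restarts the node container
      kubectl wait --for=condition=Ready "node/$NODE" --timeout=5m
      kubectl uncordon "$NODE"
    done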
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _rancher_managed_rke_clusters: ************************** Installation using Rancher ************************** Introduction ============ If you're not using the Rancher Management Console/UI to install your clusters, head over to the :ref:`installation guides for standalone RKE clusters <rke_install>`. Rancher comes with `official support for Cilium <https://ranchermanager.docs.rancher.com/faq/container-network-interface-providers>`__. For most Rancher users, that's the recommended way to use Cilium on Rancher-managed clusters. However, as Rancher is using a custom ``rke2-cilium`` `Helm chart <https://github.com/rancher/rke2-charts/tree/main-source/packages/rke2-cilium>`__ with independent release cycles, Cilium power-users might want to use an out-of-band Cilium installation instead, based on the official `Cilium Helm chart <https://github.com/cilium/charts>`__, on top of their Rancher-managed RKE1/RKE2 downstream clusters. This guide explains how to achieve this. .. note:: This guide only shows a step-by-step guide for Rancher-managed (**non-standalone**) **RKE2** clusters. However, for a legacy RKE1 cluster, it's even easier. You also need to edit the cluster YAML and change ``network.cni`` to ``none`` as described in the :ref:`RKE 1 standalone guide<rke1_cni_none>`, but there's no need to copy over a Control Plane node local KubeConfig manually. Luckily, Rancher allows access to RKE1 clusters in ``Updating`` state, which are not ready yet. Hence, there's no chicken-egg issue to resolve. Prerequisites ============= * Fully functioning `Rancher Version 2.x <https://ranchermanager.docs.rancher.com/>`__ instance * At least one empty Linux VM, to be used as initial downstream "Custom Cluster" (Control Plane) node * DNS record pointing to the Kubernetes API of the downstream "Custom Cluster" Control Plane node(s) or L4 load-balancer Create a New Cluster ==================== In Rancher UI, navigate to the Cluster Management page. In the top right, click on the ``Create`` button to create a new cluster. .. image:: images/rancher_add_cluster.png On the Cluster creation page select to create a new ``Custom`` cluster: .. image:: images/rancher_existing_nodes.png When the ``Create Custom`` page opens, provide at least a name for the cluster. Go through the other configuration options and configure the ones that are relevant for your setup. Next to the ``Cluster Options`` section click the box to ``Edit as YAML``. The configuration for the cluster will open up in an editor in the window. .. image:: images/rancher_edit_as_yaml.png Within the ``Cluster`` CustomResource (``provisioning.cattle.io/v1``), the relevant parts to change are ``spec.rkeConfig.machineGlobalConfig.cni``, ``spec.rkeConfig.machineGlobalConfig.tls-san``, and optionally ``spec.rkeConfig.chartValues.rke2-calico`` and ``spec.rkeConfig.machineGlobalConfig.disable-kube-proxy``: .. image:: images/rancher_delete_network_plugin.png It's required to add a DNS record, pointing to the Control Plane node IP(s) or an L4 load-balancer in front of them, under ``spec.rkeConfig.machineGlobalConfig.tls-san``, as that's required to resolve a chicken-egg issue further down the line. Ensure that ``spec.rkeConfig.machineGlobalConfig.cni`` is set to ``none`` and ``spec.rkeConfig.machineGlobalConfig.tls-san`` lists the mentioned DNS record: .. 
image:: images/rancher_network_plugin_none.png Optionally, if ``spec.rkeConfig.chartValues.rke2-calico`` is not empty, remove the full object as you won't deploy Rancher's default CNI. At the same time, change ``spec.rkeConfig.machineGlobalConfig.disable-kube-proxy`` to ``true`` in case you want to run :ref:`Cilium without Kube-Proxy<kubeproxy-free>`. Make any additional changes to the configuration that are appropriate for your environment. When you are ready, click ``Create`` and Rancher will create the cluster. .. image:: images/rancher_cluster_state_provisioning.png The cluster will stay in ``Updating`` state until you add nodes. Click on the cluster. In the ``Registration`` tab you should see the generated ``Registation command`` you need to run on the downstream cluster nodes. Do not forget to select the correct node roles. Rancher comes with the default to deploy all three roles (``etcd``, ``Control Plane``, and ``Worker``), which is often not what you want for multi-node clusters. .. image:: images/rancher_add_nodes.png A few seconds after you added at least a single node, you should see the new node(s) in the ``Machines`` tab. The machine will be stuck in ``Reconciling`` state and won't become ``Active``: .. image:: images/rancher_node_not_ready.png That's expected as there's no CNI running on this cluster yet. Unfortunately, this also means critical pods like ``rke2-coredns-rke2-coredns-*`` and ``cattle-cluster-agent-*`` are stuck in ``PENDING`` state. Hence, the downstream cluster is not yet able to register itself on Rancher. As a next step, you need to resolve this chicken-egg issue by directly accessing the downstream cluster's Kubernetes API, without going via Rancher. Rancher will not allow access to this downstream cluster, as it's still in ``Updating`` state. That's why you can't use the downstream cluster's KubeConfig provided by the Rancher management console/UI. Copy ``/etc/rancher/rke2/rke2.yaml`` from the first downstream cluster Control Plane node to your jump/bastion host where you have ``helm`` installed and can access the Cilium Helm charts. .. code-block:: shell-session scp root@<cp-node-1-ip>:/etc/rancher/rke2/rke2.yaml . Search and replace ``127.0.0.1`` (``clusters[0].cluster.server``) with the already mentioned DNS record pointing to the Control Plane / L4 load-balancer IP(s). .. code-block:: yaml apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0...S0K server: https://127.0.0.1:6443 name: default contexts: {} Check if you can access the Kubernetes API: .. code-block:: shell-session export KUBECONFIG=$(pwd)/my-cluster-kubeconfig.yaml kubectl get nodes NAME STATUS ROLES AGE VERSION rancher-demo-node NotReady control-plane,etcd,master 44m v1.27.8+rke2r1 If successful, you can now install Cilium via Helm CLI: .. parsed-literal:: helm install cilium |CHART_RELEASE| \\ --namespace kube-system \\ -f my-cluster-cilium-values.yaml After a few minutes, you should see that the node changed to the ``Ready`` status: .. code-block:: shell-session kubectl get nodes NAME STATUS ROLES AGE VERSION rancher-demo-node Ready control-plane,etcd,master 48m v1.27.8+rke2r1 Back in the Rancher UI, you should see that the cluster changed to the healthy ``Active`` status: .. image:: images/rancher_my_cluster_active.png That's it. You can now normally work with this cluster as if you installed the CNI the default Rancher way. Additional nodes can now be added straightaway and the "local Control Plane RKE2 KubeConfig" workaround is not required anymore. 
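The installation step above references a ``my-cluster-cilium-values.yaml`` file without prescribing its contents. A minimal sketch is shown below, assuming you set ``disable-kube-proxy`` to ``true`` and created a DNS record (here the placeholder ``rke2-api.example.com``) for the Kubernetes API. Older Cilium releases use ``kubeProxyReplacement: strict`` instead of a boolean; adjust and extend the values for your environment:

.. code-block:: yaml

    # Illustrative values only - the hostname is a placeholder for the
    # DNS record pointing to the Control Plane / L4 load-balancer.
    kubeProxyReplacement: true
    k8sServiceHost: rke2-api.example.com
    k8sServicePort: 6443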
Optional: Add Cilium to Rancher Registries ========================================== One small, optional convenience item would be to add the Cilium Helm repository to Rancher so that, in the future, Cilium can easily be upgraded via Rancher UI. You have two options available: **Option 1**: Navigate to ``Cluster Management`` -> ``Advanced`` -> ``Repositories`` and click the ``Create`` button: .. image:: images/rancher_add_repository.png **Option 2**: Alternatively, you can also just add the Cilium Helm repository on a single cluster by navigating to ``<your-cluster>`` -> ``Apps`` -> ``Repositories``: .. image:: images/rancher_add_repository_cluster.png For either option, in the window that opens, add the official Cilium Helm chart repository (``https://helm.cilium.io``) to the Rancher repository list: .. image:: images/rancher_add_cilium_repository.png Once added, you should see the Cilium repository in the repositories list: .. image:: images/rancher_repositories_list_success.png If you now head to ``<your-cluster>`` -> ``Apps`` -> ``Installed Apps``, you should see the ``cilium`` app. Ensure ``All Namespaces`` or ``Project: System -> kube-system`` is selected at the top of the page. .. image:: images/rancher_cluster_cilium_app.png Since you added the Cilium repository, you will now see a small hint on this app entry when there's a new Cilium version released. You can then upgrade directly via Rancher UI. .. image:: images/rancher_cluster_cilium_app_upgrade.png .. image:: images/rancher_cluster_cilium_app_upgrade_version.pn
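If you prefer to keep managing this out-of-band installation with the Helm CLI instead of the Rancher UI, an upgrade can also be performed directly. The target version below is a placeholder:

.. code-block:: shell-session

    $ helm repo update
    $ helm upgrade cilium cilium/cilium \
        --namespace kube-system \
        --reuse-values \
        --version <target-version>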
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _hubble_internals: **************** Hubble internals **************** .. note:: This documentation section is targeted at developers who are interested in contributing to Hubble. For this purpose, it describes Hubble internals. .. note:: This documentation covers the Hubble server (sometimes referred as "Hubble embedded") and Hubble Relay components but does not cover the Hubble UI and CLI. Hubble builds on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner. One of the design goals of Hubble is to achieve all of this at large scale. Hubble's server component is embedded into the Cilium agent in order to achieve high performance with low-overhead. The gRPC services offered by Hubble server may be consumed locally via a Unix domain socket or, more typically, through Hubble Relay. Hubble Relay is a standalone component which is aware of all Hubble instances and offers full cluster visibility by connecting to their respective gRPC APIs. This capability is usually referred to as multi-node. Hubble Relay's main goal is to offer a rich API that can be safely exposed and consumed by the Hubble UI and CLI. Hubble Architecture =================== Hubble exposes gRPC services from the Cilium process that allows clients to receive flows and other type of data. Hubble server ------------- The Hubble server component implements two gRPC services. The **Observer service** which may optionally be exposed via a TCP socket in addition to a local Unix domain socket and the **Peer service**, which is served on both as well as being exposed as a Kubernetes Service when enabled via TCP. The Observer service ^^^^^^^^^^^^^^^^^^^^ The Observer service is the principal service. It provides four RPC endpoints: ``GetFlows``, ``GetNodes``, ``GetNamespaces`` and ``ServerStatus``. * ``GetNodes`` returns a list of metrics and other information related to each Hubble instance * ``ServerStatus`` returns a summary the information in ``GetNodes`` * ``GetNamespaces`` returns a list of namespaces that had network flows within the last one hour * ``GetFlows`` returns a stream of flow related events Using ``GetFlows``, callers get a stream of payloads. Request parameters allow callers to specify filters in the form of allow lists and deny lists to allow for fine-grained filtering of data. In order to answer ``GetFlows`` requests, Hubble stores monitoring events from Cilium's event monitor into a user-space ring buffer structure. Monitoring events are obtained by registering a new listener on Cilium monitor. The ring buffer is capable of storing a configurable amount of events in memory. Events are continuously consumed, overriding older ones once the ring buffer is full. Additionally, the Observer service also provides the ``GetAgentEvents`` and ``GetDebugEvents`` RPC endpoints to expose data about the Cilium agent events and Cilium datapath debug events, respectively. Both are similar to ``GetFlows`` except they do not implement filtering capabilities. .. image:: ./../images/hubble_getflows.png For efficiency, the internal buffer length is a bit mask of ones + 1. The most significant bit of this bit mask is the same position of the most significant bit position of 'n'. 
In other terms, the internal buffer size is always a power of 2 with 1 slot reserved for the writer. In effect, from a user perspective, the ring buffer capacity is one less than a power of 2. As the ring buffer is a hot code path, it has been designed to not employ any locking mechanisms and uses atomic operations instead. While this approach has performance benefits, it also has the downsides of being a complex component. Due to its complex nature, the ring buffer is typically accessed via a ring reader that abstracts the complexity of this data structure for reading. The ring reader allows reading one event at the time with 'previous' and 'next' methods but also implements a follow mode where events are continuously read as they are written to the ring buffer. The Peer service ^^^^^^^^^^^^^^^^ The Peer service sends information about Hubble peers in the cluster in a stream. When the ``Notify`` method is called, it reports information about all the peers in the cluster and subsequently sends information about peers that are updated, added, or removed from the cluster. Thus, it allows the caller to keep track of all Hubble instances and query their respective gRPC services. This service is exposed as a Kubernetes Service and is primarily used by Hubble Relay in order to have a cluster-wide view of all Hubble instances. The Peer service obtains peer change notifications by subscribing to Cilium's node manager. To this end, it internally defines a handler that implements Cilium's datapath node handler interface. .. _hubble_relay: Hubble Relay ------------ Hubble Relay is the Hubble component that brings multi-node support. It leverages the Peer service to obtain information about Hubble instances and consume their gRPC API in order to provide a more rich API that covers events from across the entire cluster (or even multiple clusters in a ClusterMesh scenario). Hubble Relay was first introduced as a technology preview with the release of Cilium v1.8 and was declared stable with the release of Cilium v1.9. Hubble Relay implements the Observer service for multi-node. To that end, it maintains a persistent connection with every Hubble peer in a cluster with a peer manager. This component provides callers with the list of peers. Callers may report when a peer is unreachable, in which case the peer manager will attempt to reconnect. As Hubble Relay connects to every node in a cluster, the Hubble server instances must make their API available (by default on port 4244). By default, Hubble server endpoints are secured using mutual TLS (mTLS) when exposed on a TCP port in order to limit access to Hubble Relay only.
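A quick way to see the Peer service from a cluster operator's perspective is to inspect the Kubernetes Service it is exposed through. The ``hubble-peer`` service name and ``kube-system`` namespace below assume a default Helm installation and may differ in customized deployments:

.. code-block:: shell-session

    $ kubectl -n kube-system get service hubble-peer
    $ kubectl -n kube-system get endpoints hubble-peer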
cilium
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _hubble_internals:

****************
Hubble internals
****************

.. note:: This documentation section is targeted at developers who are
   interested in contributing to Hubble. For this purpose, it describes Hubble
   internals.

.. note:: This documentation covers the Hubble server (sometimes referred to
   as "Hubble embedded") and Hubble Relay components, but does not cover the
   Hubble UI and CLI.

Hubble builds on top of Cilium and eBPF to enable deep visibility into the
communication and behavior of services as well as the networking
infrastructure in a completely transparent manner. One of the design goals of
Hubble is to achieve all of this at large scale.

Hubble's server component is embedded into the Cilium agent in order to
achieve high performance with low overhead. The gRPC services offered by the
Hubble server may be consumed locally via a Unix domain socket or, more
typically, through Hubble Relay. Hubble Relay is a standalone component which
is aware of all Hubble instances and offers full cluster visibility by
connecting to their respective gRPC APIs. This capability is usually referred
to as multi-node. Hubble Relay's main goal is to offer a rich API that can be
safely exposed and consumed by the Hubble UI and CLI.

Hubble Architecture
===================

Hubble exposes gRPC services from the Cilium process that allow clients to
receive flows and other types of data.

Hubble server
-------------

The Hubble server component implements two gRPC services: the Observer
service, which may optionally be exposed via a TCP socket in addition to a
local Unix domain socket, and the Peer service, which is served on both as
well as being exposed as a Kubernetes Service when enabled via TCP.

The Observer service
~~~~~~~~~~~~~~~~~~~~

The Observer service is the principal service. It provides four RPC
endpoints: ``GetFlows``, ``GetNodes``, ``GetNamespaces`` and ``ServerStatus``.
``GetNodes`` returns a list of metrics and other information related to each
Hubble instance, ``ServerStatus`` returns a summary of the information in
``GetNodes``, ``GetNamespaces`` returns a list of namespaces that had network
flows within the last one hour, and ``GetFlows`` returns a stream of
flow-related events.

Using ``GetFlows``, callers get a stream of payloads. Request parameters allow
callers to specify filters in the form of allow lists and deny lists to allow
for fine-grained filtering of data.

In order to answer ``GetFlows`` requests, Hubble stores monitoring events from
Cilium's event monitor into a user-space ring buffer structure. Monitoring
events are obtained by registering a new listener on Cilium monitor. The ring
buffer is capable of storing a configurable amount of events in memory. Events
are continuously consumed, overriding older ones once the ring buffer is full.

Additionally, the Observer service also provides the ``GetAgentEvents`` and
``GetDebugEvents`` RPC endpoints to expose data about Cilium agent events and
Cilium datapath debug events, respectively. Both are similar to ``GetFlows``
except that they do not implement filtering capabilities.

.. image:: images/hubble_getflows.png

For efficiency, the internal buffer length is a bit mask of ones + 1. The most
significant bit of this bit mask is in the same position as the most
significant bit of ``n``. In other terms, the internal buffer size is always a
power of 2, with 1 slot reserved for the writer. In effect, from a user
perspective, the ring buffer capacity is one less than a power of 2.

As the ring buffer is a hot code path, it has been designed to not employ any
locking mechanisms and uses atomic operations instead. While this approach has
performance benefits, it also has the downside of being a complex component.
Due to its complex nature, the ring buffer is typically accessed via a ring
reader that abstracts the complexity of this data structure for reading. The
ring reader allows reading one event at a time with previous and next methods,
but also implements a follow mode where events are continuously read as they
are written to the ring buffer.

The Peer service
~~~~~~~~~~~~~~~~

The Peer service sends information about Hubble peers in the cluster in a
stream. When the ``Notify`` method is called, it reports information about all
the peers in the cluster and subsequently sends information about peers that
are updated, added, or removed from the cluster. Thus, it allows the caller to
keep track of all Hubble instances and query their respective gRPC services.
This service is exposed as a Kubernetes Service and is primarily used by
Hubble Relay in order to have a cluster-wide view of all Hubble instances.

The Peer service obtains peer change notifications by subscribing to Cilium's
node manager. To this end, it internally defines a handler that implements
Cilium's datapath node handler interface.

.. _hubble_relay:

Hubble Relay
------------

Hubble Relay is the Hubble component that brings multi-node support. It
leverages the Peer service to obtain information about Hubble instances and
consumes their gRPC API in order to provide a richer API that covers events
from across the entire cluster, or even multiple clusters in a ClusterMesh
scenario. Hubble Relay was first introduced as a technology preview with the
release of Cilium v1.8 and was declared stable with the release of Cilium
v1.9.

Hubble Relay implements the Observer service for multi-node. To that end, it
maintains a persistent connection with every Hubble peer in a cluster with a
peer manager. This component provides callers with the list of peers. Callers
may report when a peer is unreachable, in which case the peer manager will
attempt to reconnect.

As Hubble Relay connects to every node in a cluster, the Hubble server
instances must make their API available, by default on port 4244. By default,
Hubble server endpoints are secured using mutual TLS (mTLS) when exposed on a
TCP port in order to limit access to Hubble Relay only.
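Although the Hubble CLI itself is out of scope for this section, it offers a
convenient way to exercise the Observer service exposed by Hubble Relay. The
following is a minimal sketch, assuming the default Helm installation where
the ``hubble-relay`` Service listens on port 80 and forwards to Relay's gRPC
port 4245:

.. code-block:: shell-session

    # Make Hubble Relay reachable locally (assumes the default Service ports).
    $ kubectl -n kube-system port-forward service/hubble-relay 4245:80 &
    # Stream the last few flows observed cluster-wide via the Observer service.
    $ hubble observe --server localhost:4245 --last 5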
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _cilium_operator_internals:

Cilium Operator
===============

This document provides a technical overview of the Cilium Operator and
describes the cluster-wide operations it is responsible for.

Highly Available Cilium Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Cilium Operator uses the Kubernetes leader election library in conjunction
with lease locks to provide HA functionality. The capability is supported on
Kubernetes versions 1.14 and above. It is Cilium's default behavior since the
1.9 release.

The number of replicas for the HA deployment can be configured using the Helm
option ``operator.replicas``.

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set operator.replicas=3

.. code-block:: shell-session

    $ kubectl get deployment cilium-operator -n kube-system
    NAME              READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-operator   3/3     3            3           46s

The operator is an integral part of Cilium installations in Kubernetes
environments and is tasked to perform the following operations:

CRD Registration
~~~~~~~~~~~~~~~~

The default behavior of the Cilium Operator is to register the CRDs used by
Cilium. The following custom resources are registered by the Cilium Operator:

.. include:: ../crdlist.rst

IPAM
~~~~

The Cilium Operator is responsible for IP address management when running in
the following modes:

- :ref:`ipam_azure`
- :ref:`ipam_eni`
- :ref:`ipam_crd_cluster_pool`

When running in IPAM mode :ref:`k8s_hostscope`, the allocation CIDRs used by
``cilium-agent`` are derived from the fields ``podCIDR`` and ``podCIDRs``
populated by Kubernetes in the Kubernetes ``Node`` resource.

For the :ref:`concepts_ipam_crd` IPAM allocation mode, it is the job of the
Cloud-specific operator to populate the required information about CIDRs in
the ``CiliumNode`` resource. Cilium currently has native support for the
following Cloud providers in CRD IPAM mode:

- Azure - ``cilium-operator-azure``
- AWS - ``cilium-operator-aws``

For more information on IPAM visit :ref:`address_management`.

Load Balancer IP Address Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When :ref:`lb_ipam` is used, the Cilium Operator manages IP addresses for
``type: LoadBalancer`` services.

KVStore operations
~~~~~~~~~~~~~~~~~~

These operations are performed only when the KVStore is enabled for the Cilium
Operator. In addition, KVStore operations are only required when
``cilium-operator`` is running with any of the below options:

- ``--synchronize-k8s-services``
- ``--synchronize-k8s-nodes``
- ``--identity-allocation-mode=kvstore``

K8s Services synchronization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Cilium Operator performs the job of synchronizing Kubernetes services to
the external KVStore configured for the Cilium Operator if running with the
``--synchronize-k8s-services`` flag.

The Cilium Operator performs this operation only for shared services (services
that have the ``service.cilium.io/shared`` annotation set to true). This is
meaningful when running Cilium to set up a ClusterMesh.

K8s Nodes synchronization
^^^^^^^^^^^^^^^^^^^^^^^^^

Similar to K8s services, the Cilium Operator also synchronizes Kubernetes
nodes information to the shared KVStore.

When a ``Node`` object is deleted, it is not possible to reliably clean up the
corresponding ``CiliumNode`` object from the Agent itself. The Cilium Operator
holds the responsibility to garbage collect orphaned ``CiliumNodes``.
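The ``CiliumNode`` objects referenced above can be inspected directly, which
is often useful when debugging IPAM or node synchronization issues. A minimal
sketch, assuming the Cilium CRDs are registered in the cluster:

.. code-block:: shell-session

    # List the per-node resources managed by the agent and the operator.
    $ kubectl get ciliumnodes
    # Show the IPAM information populated for a specific node (replace <node-name>).
    $ kubectl get ciliumnode <node-name> -o yaml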
Heartbeat update
^^^^^^^^^^^^^^^^

The Cilium Operator periodically updates Cilium's heartbeat path key with the
current time. The default key for this heartbeat is ``cilium/.heartbeat`` in
the KVStore. It is used by Cilium Agents to validate that KVStore updates can
be received.

Identity garbage collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each workload in Kubernetes is assigned a security identity that is used for
policy decision making. This identity is based on common workload markers like
labels. Cilium supports two identity allocation mechanisms:

- CRD Identity allocation
- KVStore Identity allocation

Both mechanisms of identity allocation require the Cilium Operator to perform
the garbage collection of stale identities. This garbage collection is
necessary because a 16-bit unsigned integer represents the security identity,
and thus we can only have a maximum of 65536 identities in the cluster.

CRD Identity garbage collection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

CRD identity allocation uses the Kubernetes custom resource ``CiliumIdentity``
to represent a security identity. This is the default behavior of Cilium and
works out of the box in any K8s environment without any external dependency.

The Cilium Operator maintains a local cache for CiliumIdentities with the last
time they were seen active. A controller runs periodically in the background
which scans this local cache and deletes identities that have not had their
heartbeat life sign updated since ``identity-heartbeat-timeout``. One thing to
note here is that an identity is always assumed to be live if it has an
endpoint associated with it.

KVStore Identity garbage collection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While the CRD allocation mode for identities is more common, it is limited in
terms of scale. When running in a very large environment, a saner choice is to
use the KVStore allocation mode. This mode stores the identities in an
external store like etcd. For more information on Cilium's scalability visit
:ref:`scalability_guide`.

The garbage collection mechanism involves scanning the KVStore for all the
identities. For each identity, the Cilium Operator searches the KVStore for
any active users of that identity. The entry is deleted from the KVStore if
there are no active users.

CiliumEndpoint garbage collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A ``CiliumEndpoint`` object is created by the ``cilium-agent`` for each
``Pod`` in the cluster. The Cilium Operator manages a controller to handle the
garbage collection of orphaned ``CiliumEndpoint`` objects. An orphaned
``CiliumEndpoint`` object means that the owner of the endpoint object is not
active anymore in the cluster. CiliumEndpoints are also considered orphaned if
the owner is an existing Pod in ``PodFailed`` or ``PodSucceeded`` state. This
controller is run periodically if the ``endpoint-gc-interval`` option is
specified, and only once during startup if the option is unspecified.

Derivative network policy creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When using Cloud-provider-specific constructs like ``toGroups`` in the network
policy spec, the Cilium Operator performs the job of converting these
constructs to derivative CNP/CCNP objects without these fields.

For more information, see how Cilium network policies incorporate the use of
``toGroups`` to :ref:`lock down external access using AWS security
groups<aws_metadata_with_policy>`.
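The resources handled by the identity and endpoint garbage collectors
described above can be inspected directly with ``kubectl``. A minimal sketch,
assuming the default CRD-based identity allocation mode:

.. code-block:: shell-session

    # Security identities currently allocated as CiliumIdentity CRDs.
    $ kubectl get ciliumidentities
    # CiliumEndpoint objects, one per managed Pod; orphaned ones are garbage
    # collected by the operator.
    $ kubectl get ciliumendpoints --all-namespaces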
Ingress and Gateway API Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Ingress or Gateway API support is enabled, the Cilium Operator performs
the task of parsing Ingress or Gateway API objects and converting them into
``CiliumEnvoyConfig`` objects used for configuring the per-node Envoy proxy.

Additionally, Secrets used by Ingress or Gateway API objects will be synced to
a Cilium-managed namespace that the Cilium Agent is then granted access to.
This reduces the permissions required of the Cilium Agent.

Mutual Authentication Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Cilium's Mutual Authentication Support is enabled, the Cilium Operator is
responsible for ensuring that each Cilium Identity has an associated identity
in the certificate management system. It will create and delete identity
registrations in the configured certificate management system as required. The
Cilium Operator does not, however, have any access to the key material in the
identities. That information is only shared with the Cilium Agent via other
channels.
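When Ingress or Gateway API support is in use, the objects generated by this
translation can be listed to verify the operator's work. A minimal sketch,
assuming at least one Cilium-managed Ingress or Gateway exists:

.. code-block:: shell-session

    # Envoy configuration objects generated from Ingress / Gateway API resources.
    $ kubectl get ciliumenvoyconfigs --all-namespaces
    $ kubectl get ciliumclusterwideenvoyconfigs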
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _metrics:

********************
Monitoring & Metrics
********************

Cilium and Hubble can both be configured to serve `Prometheus
<https://prometheus.io>`_ metrics. Prometheus is a pluggable metrics
collection and storage system and can act as a data source for `Grafana
<https://grafana.com/>`_, a metrics visualization frontend. Unlike some
metrics collectors like statsd, Prometheus requires the collectors to pull
metrics from each source.

Cilium and Hubble metrics can be enabled independently of each other.

Cilium Metrics
==============

Cilium metrics provide insights into the state of Cilium itself, namely of the
``cilium-agent``, ``cilium-envoy``, and ``cilium-operator`` processes. To run
Cilium with Prometheus metrics enabled, deploy it with the
``prometheus.enabled=true`` Helm value set.

Cilium metrics are exported under the ``cilium_`` Prometheus namespace. Envoy
metrics are exported under the ``envoy_`` Prometheus namespace, of which the
Cilium-defined metrics are exported under the ``envoy_cilium_`` namespace.
When running and collecting in Kubernetes they will be tagged with a pod name
and namespace.

Installation
------------

You can enable metrics for ``cilium-agent`` (including Envoy) with the Helm
value ``prometheus.enabled=true``. ``cilium-operator`` metrics are enabled by
default; if you want to disable them, set the Helm value
``operator.prometheus.enabled=false``.

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set prometheus.enabled=true \\
      --set operator.prometheus.enabled=true

The ports can be configured via ``prometheus.port``,
``envoy.prometheus.port``, or ``operator.prometheus.port`` respectively.

When metrics are enabled, all Cilium components will have the following
annotations. They can be used to signal Prometheus whether to scrape metrics:

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9962

To collect Envoy metrics the Cilium chart will create a Kubernetes headless
service named ``cilium-agent`` with the ``prometheus.io/scrape:'true'``
annotation set:

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9964

This additional headless service is needed, in addition to the other Cilium
components, because each component can only have one Prometheus scrape and
port annotation.

Prometheus will pick up the Cilium and Envoy metrics automatically if the
following option is set in the ``scrape_configs`` section:

.. code-block:: yaml

    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: ${1}:${2}
            target_label: __address__

.. _hubble_metrics:

Hubble Metrics
==============

While Cilium metrics allow you to monitor the state of Cilium itself, Hubble
metrics on the other hand allow you to monitor the network behavior of your
Cilium-managed Kubernetes pods with respect to connectivity and security.

Installation
------------

To deploy Cilium with Hubble metrics enabled, you need to enable Hubble with
``hubble.enabled=true`` and provide a set of Hubble metrics you want to enable
via ``hubble.metrics.enabled``.
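For instance, a minimal Helm values snippet enabling a basic set of Hubble
metrics could look as follows. This is a sketch only; it mirrors the ``--set``
flags shown below and assumes the standard chart layout:

.. code-block:: yaml

    hubble:
      enabled: true
      metrics:
        # Each entry enables one Hubble metric; per-metric options can be appended.
        enabled:
          - dns
          - drop
          - tcp
          - flow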
Some of the metrics can also be configured with additional options. See the
:ref:`Hubble exported metrics<hubble_exported_metrics>` section for the full
list of available metrics and their options.

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set prometheus.enabled=true \\
      --set operator.prometheus.enabled=true \\
      --set hubble.enabled=true \\
      --set hubble.metrics.enableOpenMetrics=true \\
      --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\\,source_namespace\\,source_workload\\,destination_ip\\,destination_namespace\\,destination_workload\\,traffic_direction}"

The port of the Hubble metrics can be configured with the
``hubble.metrics.port`` Helm value. For details on enabling Hubble metrics
with TLS see the :ref:`hubble_configure_metrics_tls` section of the
documentation.

.. note::

    L7 metrics, such as HTTP, are only emitted for pods that enable
    :ref:`Layer 7 Protocol Visibility <proxy_visibility>`.

When deployed with a non-empty ``hubble.metrics.enabled`` Helm value, the
Cilium chart will create a Kubernetes headless service named
``hubble-metrics`` with the ``prometheus.io/scrape:'true'`` annotation set:

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9965

Set the following options in the ``scrape_configs`` section of Prometheus to
have it scrape all Hubble metrics from the endpoints automatically:

.. code-block:: yaml

    scrape_configs:
      - job_name: 'kubernetes-endpoints'
        scrape_interval: 30s
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: (.+)(?::\d+);(\d+)
            replacement: $1:$2

.. _hubble_open_metrics:

OpenMetrics
-----------

Additionally, you can opt in to `OpenMetrics <https://openmetrics.io>`_ by
setting ``hubble.metrics.enableOpenMetrics=true``. Enabling OpenMetrics
configures the Hubble metrics endpoint to support exporting metrics in
OpenMetrics format when explicitly requested by clients.

Using OpenMetrics supports additional functionality such as Exemplars, which
enables associating metrics with traces by embedding trace IDs into the
exported metrics.

Prometheus needs to be configured to take advantage of OpenMetrics and will
only scrape exemplars when the `exemplars storage feature is enabled
<https://prometheus.io/docs/prometheus/latest/feature_flags/#exemplars-storage>`_.

OpenMetrics imposes a few additional requirements on metrics names and labels,
so this functionality is currently opt-in, though we believe all of the Hubble
metrics conform to the OpenMetrics requirements.

.. _clustermesh_apiserver_metrics:

Cluster Mesh API Server Metrics
===============================

Cluster Mesh API Server metrics provide insights into the state of the
``clustermesh-apiserver`` process, the ``kvstoremesh`` process (if enabled),
and the sidecar etcd instance. Cluster Mesh API Server metrics are exported
under the ``cilium_clustermesh_apiserver_`` Prometheus namespace. KVStoreMesh
metrics are exported under the ``cilium_kvstoremesh_`` Prometheus namespace.
Etcd metrics are exported under the ``etcd_`` Prometheus namespace.
Installation
------------

You can enable the metrics for different Cluster Mesh API Server components by
setting the following values:

* clustermesh-apiserver: ``clustermesh.apiserver.metrics.enabled=true``
* kvstoremesh: ``clustermesh.apiserver.metrics.kvstoremesh.enabled=true``
* sidecar etcd instance: ``clustermesh.apiserver.metrics.etcd.enabled=true``

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set clustermesh.useAPIServer=true \\
      --set clustermesh.apiserver.metrics.enabled=true \\
      --set clustermesh.apiserver.metrics.kvstoremesh.enabled=true \\
      --set clustermesh.apiserver.metrics.etcd.enabled=true

You can configure the ports via ``clustermesh.apiserver.metrics.port``,
``clustermesh.apiserver.metrics.kvstoremesh.port`` and
``clustermesh.apiserver.metrics.etcd.port`` respectively.

You can automatically create a `Prometheus Operator
<https://github.com/prometheus-operator/prometheus-operator>`_
``ServiceMonitor`` by setting
``clustermesh.apiserver.metrics.serviceMonitor.enabled=true``.

Example Prometheus & Grafana Deployment
=======================================

If you don't have an existing Prometheus and Grafana stack running, you can
deploy a stack with:

.. parsed-literal::

    kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/addons/prometheus/monitoring-example.yaml

It will run Prometheus and Grafana in the ``cilium-monitoring`` namespace. If
you have enabled either Cilium or Hubble metrics, they will automatically be
scraped by Prometheus. You can then expose Grafana to access it via your
browser.

.. code-block:: shell-session

    kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000

Open your browser and access http://localhost:3000/

Metrics Reference
=================

cilium-agent
------------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``cilium-agent`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair, but
passing an empty IP (e.g. ``:9962``) will bind the server to all available
interfaces (there is usually only one in a container).

To customize ``cilium-agent`` metrics, configure the ``--metrics`` option with
``"+metric_a -metric_b -metric_c"``, where ``+/-`` means to enable/disable the
metric. For example, for really large clusters, users may consider disabling
the following two metrics as they generate too much data:

- ``cilium_node_connectivity_status``
- ``cilium_node_connectivity_latency_seconds``

You can then configure the agent with
``--metrics="-cilium_node_connectivity_status -cilium_node_connectivity_latency_seconds"``.
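To confirm which metrics the agent is actually exposing after such changes,
you can query the metrics endpoint directly. This is a minimal sketch assuming
the default metrics port ``9962`` and the standard ``k8s-app=cilium`` pod
label:

.. code-block:: shell-session

    # Pick one Cilium agent pod and forward its metrics port locally.
    $ CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
    $ kubectl -n kube-system port-forward "$CILIUM_POD" 9962:9962 &
    # List a few of the exported Cilium metric names.
    $ curl -s http://localhost:9962/metrics | grep -o '^cilium_[a-z_]*' | sort -u | head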
Exported Metrics ^^^^^^^^^^^^^^^^ Endpoint ~~~~~~~~ ============================================ ================================================== ========== ======================================================== Name Labels Default Description ============================================ ================================================== ========== ======================================================== ``endpoint`` Enabled Number of endpoints managed by this agent ``endpoint_max_ifindex`` Disabled Maximum interface index observed for existing endpoints ``endpoint_regenerations_total`` ``outcome`` Enabled Count of all endpoint regenerations that have completed ``endpoint_regeneration_time_stats_seconds`` ``scope`` Enabled Endpoint regeneration time stats ``endpoint_state`` ``state`` Enabled Count of all endpoints ============================================ ================================================== ========== ======================================================== The default enabled status of ``endpoint_max_ifindex`` is dynamic. On earlier kernels (typically with version lower than 5.10), Cilium must store the interface index for each endpoint in the conntrack map, which reserves 16 bits for this field. If Cilium is running on such a kernel, this metric will be enabled by default. It can be used to implement an alert if the ifindex is approaching the limit of 65535. This may be the case in instances of significant Endpoint churn. Services ~~~~~~~~ ========================================== ================================================== ========== ======================================================== Name Labels Default Description ========================================== ================================================== ========== ======================================================== ``services_events_total`` Enabled Number of services events labeled by action type ``service_implementation_delay`` ``action`` Enabled Duration in seconds to propagate the data plane programming of a service, its network and endpoints from the time the service or the service pod was changed excluding the event queue latency ========================================== ================================================== ========== ======================================================== Cluster health ~~~~~~~~~~~~~~ ========================================== ================================================== ========== ======================================================== Name Labels Default Description ========================================== ================================================== ========== ======================================================== ``unreachable_nodes`` Enabled Number of nodes that cannot be reached ``unreachable_health_endpoints`` Enabled Number of health endpoints that cannot be reached ========================================== ================================================== ========== ======================================================== Node Connectivity ~~~~~~~~~~~~~~~~~ ============================================= ====================================================================================================================================================================== ========== ================================================================================================================================================================================================================== Name Labels Default Description 
============================================= ====================================================================================================================================================================== ========== ================================================================================================================================================================================================================== ``node_connectivity_status`` ``source_cluster``, ``source_node_name``, ``target_cluster``, ``target_node_name``, ``target_node_type``, ``type`` Enabled Deprecated, will be removed in Cilium 1.18 - use ``node_health_connectivity_status`` instead. The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes ``node_connectivity_latency_seconds`` ``address_type``, ``protocol``, ``source_cluster``, ``source_node_name``, ``target_cluster``, ``target_node_ip``, ``target_node_name``, ``target_node_type``, ``type`` Enabled Deprecated, will be removed in Cilium 1.18 - use ``node_health_connectivity_latency_seconds`` instead. The last observed latency between the current Cilium agent and other Cilium nodes in seconds ``node_health_connectivity_status`` ``source_cluster``, ``source_node_name``, ``type``, ``status`` Enabled Number of endpoints with last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes ``node_health_connectivity_latency_seconds`` ``source_cluster``, ``source_node_name``, ``type``, ``address_type``, ``protocol`` Enabled Histogram of the last observed latency between the current Cilium agent and other Cilium nodes in seconds ============================================= ====================================================================================================================================================================== ========== ================================================================================================================================================================================================================== Clustermesh ~~~~~~~~~~~ =============================================== ============================================================ ========== ================================================================= Name Labels Default Description =============================================== ============================================================ ========== ================================================================= ``clustermesh_global_services`` ``source_cluster``, ``source_node_name`` Enabled The total number of global services in the cluster mesh ``clustermesh_remote_clusters`` ``source_cluster``, ``source_node_name`` Enabled The total number of remote clusters meshed with the local cluster ``clustermesh_remote_cluster_failures`` ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled The total number of failures related to the remote cluster ``clustermesh_remote_cluster_nodes`` ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled The total number of nodes in the remote cluster ``clustermesh_remote_cluster_last_failure_ts`` ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled The timestamp of the last failure of the remote cluster ``clustermesh_remote_cluster_readiness_status`` ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled The readiness status of the remote cluster =============================================== 
============================================================ ========== ================================================================= Datapath ~~~~~~~~ ============================================= ================================================== ========== ======================================================== Name Labels Default Description ============================================= ================================================== ========== ======================================================== ``datapath_conntrack_dump_resets_total`` ``area``, ``name``, ``family`` Enabled Number of conntrack dump resets. Happens when a BPF entry gets removed while dumping the map is in progress. ``datapath_conntrack_gc_runs_total`` ``status`` Enabled Number of times that the conntrack garbage collector process was run ``datapath_conntrack_gc_key_fallbacks_total`` Enabled The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family ``datapath_conntrack_gc_entries`` ``family`` Enabled The number of alive and deleted conntrack entries at the end of a garbage collector run ``datapath_conntrack_gc_duration_seconds`` ``status`` Enabled Duration in seconds of the garbage collector process ============================================= ================================================== ========== ======================================================== IPsec ~~~~~ ============================================= ================================================== ========== =========================================================== Name Labels Default Description ============================================= ================================================== ========== =========================================================== ``ipsec_xfrm_error`` ``error``, ``type`` Enabled Total number of xfrm errors ``ipsec_keys`` Enabled Number of keys in use ``ipsec_xfrm_states`` ``direction`` Enabled Number of XFRM states ``ipsec_xfrm_policies`` ``direction`` Enabled Number of XFRM policies ============================================= ================================================== ========== =========================================================== eBPF ~~~~ ========================================== ===================================================================== ========== ======================================================== Name Labels Default Description ========================================== ===================================================================== ========== ======================================================== ``bpf_syscall_duration_seconds`` ``operation``, ``outcome`` Disabled Duration of eBPF system call performed ``bpf_map_ops_total`` ``mapName`` (deprecated), ``map_name``, ``operation``, ``outcome`` Enabled Number of eBPF map operations performed. ``mapName`` is deprecated and will be removed in 1.10. Use ``map_name`` instead. ``bpf_map_pressure`` ``map_name`` Enabled Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the map's utilization, while values >= 1.0 indicate that the map is full. Policy map metrics are only reported when the ratio is over 0.1, ie 10% full. ``bpf_map_capacity`` ``map_group`` Enabled Maximum size of eBPF maps by group of maps (type of map that have the same max capacity size). Map types with size of 65536 are not emitted, missing map types can be assumed to be 65536. 
``bpf_maps_virtual_memory_max_bytes`` Enabled Max memory used by eBPF maps installed in the system ``bpf_progs_virtual_memory_max_bytes`` Enabled Max memory used by eBPF programs installed in the system ``bpf_ratelimit_dropped_total`` ``usage`` Enabled Total drops resulting from BPF ratelimiter, tagged by source of drop ========================================== ===================================================================== ========== ======================================================== Both ``bpf_maps_virtual_memory_max_bytes`` and ``bpf_progs_virtual_memory_max_bytes`` are currently reporting the system-wide memory usage of eBPF that is directly and not directly managed by Cilium. This might change in the future and only report the eBPF memory usage directly managed by Cilium. Drops/Forwards (L3/L4) ~~~~~~~~~~~~~~~~~~~~~~ ========================================== ================================================== ========== ======================================================== Name Labels Default Description ========================================== ================================================== ========== ======================================================== ``drop_count_total`` ``reason``, ``direction`` Enabled Total dropped packets ``drop_bytes_total`` ``reason``, ``direction`` Enabled Total dropped bytes ``forward_count_total`` ``direction`` Enabled Total forwarded packets ``forward_bytes_total`` ``direction`` Enabled Total forwarded bytes ========================================== ================================================== ========== ======================================================== Policy ~~~~~~ ========================================== ================================================== ========== ======================================================== Name Labels Default Description ========================================== ================================================== ========== ======================================================== ``policy`` Enabled Number of policies currently loaded ``policy_regeneration_total`` Enabled Deprecated, will be removed in Cilium 1.17 - use ``endpoint_regenerations_total`` instead. Total number of policies regenerated successfully ``policy_regeneration_time_stats_seconds`` ``scope`` Enabled Deprecated, will be removed in Cilium 1.17 - use ``endpoint_regeneration_time_stats_seconds`` instead. 
Policy regeneration time stats labeled by the scope ``policy_max_revision`` Enabled Highest policy revision number in the agent ``policy_change_total`` Enabled Number of policy changes by outcome ``policy_endpoint_enforcement_status`` Enabled Number of endpoints labeled by policy enforcement status ``policy_implementation_delay`` ``source`` Enabled Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policy's source ``policy_selector_match_count_max`` ``class`` Enabled The maximum number of identities selected by a network policy selector ========================================== ================================================== ========== ======================================================== Policy L7 (HTTP/Kafka/FQDN) ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``proxy_redirects`` ``protocol`` Enabled Number of redirects installed for endpoints ``proxy_upstream_reply_seconds`` ``error``, ``protocol_l7``, ``scope`` Enabled Seconds waited for upstream server to reply to a request ``proxy_datapath_update_timeout_total`` Disabled Number of total datapath update timeouts due to FQDN IP updates ``policy_l7_total`` ``rule``, ``proxy_type`` Enabled Number of total L7 requests/responses ======================================== ================================================== ========== ======================================================== Identity ~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``identity`` ``type`` Enabled Number of identities currently allocated ``identity_label_sources`` ``source`` Enabled Number of identities which contain at least one label from the given label source ``identity_gc_entries`` ``identity_type`` Enabled Number of alive and deleted identities at the end of a garbage collector run ``identity_gc_runs`` ``outcome``, ``identity_type`` Enabled Number of times identity garbage collector has run ``identity_gc_latency`` ``outcome``, ``identity_type`` Enabled Duration of the last successful identity GC run ``ipcache_errors_total`` ``type``, ``error`` Enabled Number of errors interacting with the ipcache ``ipcache_events_total`` ``type`` Enabled Number of events interacting with the ipcache ``identity_cache_timer_duration`` ``name`` Enabled Seconds required to execute periodic policy processes. ``name="id-alloc-update-policy-maps"`` is the time taken to apply incremental updates to the BPF policy maps. ``identity_cache_timer_trigger_latency`` ``name`` Enabled Seconds spent waiting for a previous process to finish before starting the next round. ``name="id-alloc-update-policy-maps"`` is the time waiting before applying incremental updates to the BPF policy maps. ``identity_cache_timer_trigger_folds`` ``name`` Enabled Number of timer triggers that were coalesced in to one execution. ``name="id-alloc-update-policy-maps"`` applies the incremental updates to the BPF policy maps. 
======================================== ================================================== ========== ======================================================== Events external to Cilium ~~~~~~~~~~~~~~~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``event_ts`` ``source`` Enabled Last timestamp when Cilium received an event from a control plane source, per resource and per action ``k8s_event_lag_seconds`` ``source`` Disabled Lag for Kubernetes events - computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server ======================================== ================================================== ========== ======================================================== Controllers ~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``controllers_runs_total`` ``status`` Enabled Number of times that a controller process was run ``controllers_runs_duration_seconds`` ``status`` Enabled Duration in seconds of the controller process ``controllers_group_runs_total`` ``status``, ``group_name`` Enabled Number of times that a controller process was run, labeled by controller group name ``controllers_failing`` Enabled Number of failing controllers ======================================== ================================================== ========== ======================================================== The ``controllers_group_runs_total`` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the ``controller-group-metrics`` configuration flag, or the ``prometheus.controllerGroupMetrics`` helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names "all" and "none" are supported. 
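As an illustration of how these controller metrics are typically consumed, the
failure rate can be queried through the Prometheus HTTP API. This is a sketch
only; it assumes Prometheus has been port-forwarded to ``localhost:9090`` and
that the ``status`` label uses the value ``failure`` for failed runs:

.. code-block:: shell-session

    # Rate of failed controller runs over the last 5 minutes, per controller group.
    $ curl -s 'http://localhost:9090/api/v1/query' \
        --data-urlencode 'query=sum by (group_name) (rate(cilium_controllers_group_runs_total{status="failure"}[5m]))' | jq .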
SubProcess ~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``subprocess_start_total`` ``subsystem`` Enabled Number of times that Cilium has started a subprocess ======================================== ================================================== ========== ======================================================== Kubernetes ~~~~~~~~~~ =========================================== ================================================== ========== ======================================================== Name Labels Default Description =========================================== ================================================== ========== ======================================================== ``kubernetes_events_received_total`` ``scope``, ``action``, ``validity``, ``equal`` Enabled Number of Kubernetes events received ``kubernetes_events_total`` ``scope``, ``action``, ``outcome`` Enabled Number of Kubernetes events processed ``k8s_cnp_status_completion_seconds`` ``attempts``, ``outcome`` Enabled Duration in seconds in how long it took to complete a CNP status update ``k8s_terminating_endpoints_events_total`` Enabled Number of terminating endpoint events received from Kubernetes =========================================== ================================================== ========== ======================================================== Kubernetes Rest Client ~~~~~~~~~~~~~~~~~~~~~~ ============================================= ============================================= ========== =========================================================== Name Labels Default Description ============================================= ============================================= ========== =========================================================== ``k8s_client_api_latency_time_seconds`` ``path``, ``method`` Enabled Duration of processed API calls labeled by path and method ``k8s_client_rate_limiter_duration_seconds`` ``path``, ``method`` Enabled Kubernetes client rate limiter latency in seconds. 
Broken down by path and method ``k8s_client_api_calls_total`` ``host``, ``method``, ``return_code`` Enabled Number of API calls made to kube-apiserver labeled by host, method and return code ============================================= ============================================= ========== =========================================================== Kubernetes workqueue ~~~~~~~~~~~~~~~~~~~~ ==================================================== ============================================= ========== =========================================================== Name Labels Default Description ==================================================== ============================================= ========== =========================================================== ``k8s_workqueue_depth`` ``name`` Enabled Current depth of workqueue ``k8s_workqueue_adds_total`` ``name`` Enabled Total number of adds handled by workqueue ``k8s_workqueue_queue_duration_seconds`` ``name`` Enabled Duration in seconds an item stays in workqueue prior to request ``k8s_workqueue_work_duration_seconds`` ``name`` Enabled Duration in seconds to process an item from workqueue ``k8s_workqueue_unfinished_work_seconds`` ``name`` Enabled Duration in seconds of work in progress that hasn't been observed by work_duration. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases. ``k8s_workqueue_longest_running_processor_seconds`` ``name`` Enabled Duration in seconds of the longest running processor for workqueue ``k8s_workqueue_retries_total`` ``name`` Enabled Total number of retries handled by workqueue ==================================================== ============================================= ========== =========================================================== IPAM ~~~~ ======================================== ============================================ ========== ======================================================== Name Labels Default Description ======================================== ============================================ ========== ======================================================== ``ipam_capacity`` ``family`` Enabled Total number of IPs in the IPAM pool labeled by family ``ipam_events_total`` Enabled Number of IPAM events received labeled by action and datapath family type ``ip_addresses`` ``family`` Enabled Number of allocated IP addresses ======================================== ============================================ ========== ======================================================== KVstore ~~~~~~~ ======================================== ============================================ ========== ======================================================== Name Labels Default Description ======================================== ============================================ ========== ======================================================== ``kvstore_operations_duration_seconds`` ``action``, ``kind``, ``outcome``, ``scope`` Enabled Duration of kvstore operation ``kvstore_events_queue_seconds`` ``action``, ``scope`` Enabled Seconds waited before a received event was queued ``kvstore_quorum_errors_total`` ``error`` Enabled Number of quorum errors ``kvstore_sync_errors_total`` ``scope``, ``source_cluster`` Enabled Number of times synchronization to the kvstore failed ``kvstore_sync_queue_size`` ``scope``, ``source_cluster`` Enabled Number of elements queued for synchronization in the kvstore ``kvstore_initial_sync_completed`` 
``scope``, ``source_cluster``, ``action`` Enabled Whether the initial synchronization from/to the kvstore has completed ======================================== ============================================ ========== ======================================================== Agent ~~~~~ ================================ ================================ ========== ======================================================== Name Labels Default Description ================================ ================================ ========== ======================================================== ``agent_bootstrap_seconds`` ``scope``, ``outcome`` Enabled Duration of various bootstrap phases ``api_process_time_seconds`` Enabled Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code. ================================ ================================ ========== ======================================================== FQDN ~~~~ ================================== ================================ ============ ======================================================== Name Labels Default Description ================================== ================================ ============ ======================================================== ``fqdn_gc_deletions_total`` Enabled Number of FQDNs that have been cleaned on FQDN garbage collector job ``fqdn_active_names`` ``endpoint`` Disabled Number of domains inside the DNS cache that have not expired (by TTL), per endpoint ``fqdn_active_ips`` ``endpoint`` Disabled Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint ``fqdn_alive_zombie_connections`` ``endpoint`` Disabled Number of IPs associated with domains that have expired (by TTL) yet still associated with an active connection (aka zombie), per endpoint ``fqdn_selectors`` Enabled Number of registered ToFQDN selectors ================================== ================================ ============ ======================================================== Jobs ~~~~ ================================== ================================ ============ ======================================================== Name Labels Default Description ================================== ================================ ============ ======================================================== ``jobs_errors_total`` ``job`` Enabled Number of jobs runs that returned an error ``jobs_one_shot_run_seconds`` ``job`` Enabled Histogram of one shot job run duration ``jobs_timer_run_seconds`` ``job`` Enabled Histogram of timer job run duration ``jobs_observer_run_seconds`` ``job`` Enabled Histogram of observer job run duration ================================== ================================ ============ ======================================================== CIDRGroups ~~~~~~~~~~ =================================================== ===================== ============================= Name Labels Default Description =================================================== ===================== ============================= ``cidrgroups_referenced`` Enabled Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. CNPs with empty or non-existing CIDRGroupRefs are not considered ``cidrgroup_translation_time_stats_seconds`` Disabled CIDRGroup translation time stats =================================================== ===================== ============================= .. 
_metrics_api_rate_limiting: API Rate Limiting ~~~~~~~~~~~~~~~~~ ============================================== ========================================== ========== ======================================================== Name Labels Default Description ============================================== ========================================== ========== ======================================================== ``api_limiter_adjustment_factor`` ``api_call`` Enabled Most recent adjustment factor for automatic adjustment ``api_limiter_processed_requests_total`` ``api_call``, ``outcome``, ``return_code`` Enabled Total number of API requests processed ``api_limiter_processing_duration_seconds`` ``api_call``, ``value`` Enabled Mean and estimated processing duration in seconds ``api_limiter_rate_limit`` ``api_call``, ``value`` Enabled Current rate limiting configuration (limit and burst) ``api_limiter_requests_in_flight`` ``api_call`` ``value`` Enabled Current and maximum allowed number of requests in flight ``api_limiter_wait_duration_seconds`` ``api_call``, ``value`` Enabled Mean, min, and max wait duration ``api_limiter_wait_history_duration_seconds`` ``api_call`` Disabled Histogram of wait duration per API call processed ============================================== ========================================== ========== ======================================================== .. _metrics_bgp_control_plane: BGP Control Plane ~~~~~~~~~~~~~~~~~ ====================== =============================================================== ======== =================================================================== Name Labels Default Description ====================== =============================================================== ======== =================================================================== ``session_state`` ``vrouter``, ``neighbor``, ``neighbor_asn`` Enabled Current state of the BGP session with the peer, Up = 1 or Down = 0 ``advertised_routes`` ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi`` Enabled Number of routes advertised to the peer ``received_routes`` ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi`` Enabled Number of routes received from the peer ====================== =============================================================== ======== =================================================================== All metrics are enabled only when the BGP Control Plane is enabled. cilium-operator --------------- Configuration ^^^^^^^^^^^^^ ``cilium-operator`` can be configured to serve metrics by running with the option ``--enable-metrics``. By default, the operator will expose metrics on port 9963, the port can be changed with the option ``--operator-prometheus-serve-addr``. Exported Metrics ^^^^^^^^^^^^^^^^ All metrics are exported under the ``cilium_operator_`` Prometheus namespace. .. _ipam_metrics: IPAM ~~~~ .. Note:: IPAM metrics are all ``Enabled`` only if using the AWS, Alibabacloud or Azure IPAM plugins. ======================================== ================================================================= ========== ======================================================== Name Labels Default Description ======================================== ================================================================= ========== ======================================================== ``ipam_ips`` ``type`` Enabled Number of IPs allocated ``ipam_ip_allocation_ops`` ``subnet_id`` Enabled Number of IP allocation operations. 
``ipam_ip_release_ops`` ``subnet_id`` Enabled Number of IP release operations. ``ipam_interface_creation_ops`` ``subnet_id`` Enabled Number of interfaces creation operations. ``ipam_release_duration_seconds`` ``type``, ``status``, ``subnet_id`` Enabled Release ip or interface latency in seconds ``ipam_allocation_duration_seconds`` ``type``, ``status``, ``subnet_id`` Enabled Allocation ip or interface latency in seconds ``ipam_available_interfaces`` Enabled Number of interfaces with addresses available ``ipam_nodes`` ``category`` Enabled Number of nodes by category { total | in-deficit | at-capacity } ``ipam_resync_total`` Enabled Number of synchronization operations with external IPAM API ``ipam_api_duration_seconds`` ``operation``, ``response_code`` Enabled Duration of interactions with external IPAM API. ``ipam_api_rate_limit_duration_seconds`` ``operation`` Enabled Duration of rate limiting while accessing external IPAM API ``ipam_available_ips`` ``target_node`` Enabled Number of available IPs on a node (taking into account plugin specific NIC/Address limits). ``ipam_used_ips`` ``target_node`` Enabled Number of currently used IPs on a node. ``ipam_needed_ips`` ``target_node`` Enabled Number of IPs needed to satisfy allocation on a node. ======================================== ================================================================= ========== ======================================================== LB-IPAM ~~~~~~~ ======================================== ================================================================= ========== ======================================================== Name Labels Default Description ======================================== ================================================================= ========== ======================================================== ``lbipam_conflicting_pools_total`` Enabled Number of conflicting pools ``lbipam_ips_available_total`` ``pool`` Enabled Number of available IPs per pool ``lbipam_ips_used_total`` ``pool`` Enabled Number of used IPs per pool ``lbipam_services_matching_total`` Enabled Number of matching services ``lbipam_services_unsatisfied_total`` Enabled Number of services which did not get requested IPs ======================================== ================================================================= ========== ======================================================== Controllers ~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``controllers_group_runs_total`` ``status``, ``group_name`` Enabled Number of times that a controller process was run, labeled by controller group name ======================================== ================================================== ========== ======================================================== The ``controllers_group_runs_total`` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the ``controller-group-metrics`` configuration flag, or the ``prometheus.controllerGroupMetrics`` helm value. 
The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names "all" and "none" are supported. .. _ces_metrics: CiliumEndpointSlices (CES) ~~~~~~~~~~~~~~~~~~~~~~~~~~ ============================================== ================================ ======================================================== Name Labels Description ============================================== ================================ ======================================================== ``number_of_ceps_per_ces`` The number of CEPs batched in a CES ``number_of_cep_changes_per_ces`` ``opcode`` The number of changed CEPs in each CES update ``ces_sync_total`` ``outcome`` The number of completed CES syncs by outcome ``ces_queueing_delay_seconds`` CiliumEndpointSlice queueing delay in seconds ============================================== ================================ ======================================================== Unmanaged Pods ~~~~~~~~~~~~~~ ============================================ ======= ========== ==================================================================== Name Labels Default Description ============================================ ======= ========== ==================================================================== ``unmanaged_pods`` Enabled The total number of pods observed to be unmanaged by Cilium operator ============================================ ======= ========== ==================================================================== "Double Write" Identity Allocation Mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When the ":ref:`Double Write <double_write_migration>`" identity allocation mode is enabled, the following metrics are available: ============================================ ======= ========== ============================================================ Name Labels Default Description ============================================ ======= ========== ============================================================ ``doublewrite_identity_crd_total_count`` Enabled The total number of CRD identities ``doublewrite_identity_kvstore_total_count`` Enabled The total number of identities in the KVStore ``doublewrite_identity_crd_only_count`` Enabled The number of CRD identities not present in the KVStore ``doublewrite_identity_kvstore_only_count`` Enabled The number of identities in the KVStore not present as a CRD ============================================ ======= ========== ============================================================ Hubble ------ Configuration ^^^^^^^^^^^^^ Hubble metrics are served by a Hubble instance running inside ``cilium-agent``. The command-line options to configure them are ``--enable-hubble``, ``--hubble-metrics-server``, and ``--hubble-metrics``. ``--hubble-metrics-server`` takes an ``IP:Port`` pair, but passing an empty IP (e.g. ``:9965``) will bind the server to all available interfaces. ``--hubble-metrics`` takes a comma-separated list of metrics. It's also possible to configure Hubble metrics to listen with TLS and optionally use mTLS for authentication. For details see :ref:`hubble_configure_metrics_tls`. Some metrics can take additional semicolon-separated options per metric, e.g. ``--hubble-metrics="dns:query;ignoreAAAA,http:destinationContext=workload-name"`` will enable the ``dns`` metric with the ``query`` and ``ignoreAAAA`` options, and the ``http`` metric with the ``destinationContext=workload-name`` option. .. 
_hubble_context_options: Context Options ^^^^^^^^^^^^^^^ Hubble metrics support configuration via context options. Supported context options for all metrics: - ``sourceContext`` - Configures the ``source`` label on metrics for both egress and ingress traffic. - ``sourceEgressContext`` - Configures the ``source`` label on metrics for egress traffic (takes precedence over ``sourceContext``). - ``sourceIngressContext`` - Configures the ``source`` label on metrics for ingress traffic (takes precedence over ``sourceContext``). - ``destinationContext`` - Configures the ``destination`` label on metrics for both egress and ingress traffic. - ``destinationEgressContext`` - Configures the ``destination`` label on metrics for egress traffic (takes precedence over ``destinationContext``). - ``destinationIngressContext`` - Configures the ``destination`` label on metrics for ingress traffic (takes precedence over ``destinationContext``). - ``labelsContext`` - Configures a list of labels to be enabled on metrics. There are also some context options that are specific to certain metrics. See the documentation for the individual metrics to see what options are available for each. See below for details on each of the different context options. Most Hubble metrics can be configured to add the source and/or destination context as a label using the ``sourceContext`` and ``destinationContext`` options. The possible values are: ===================== =================================================================================== Option Value Description ===================== =================================================================================== ``identity`` All Cilium security identity labels ``namespace`` Kubernetes namespace name ``pod`` Kubernetes pod name and namespace name in the form of ``namespace/pod``. ``pod-name`` Kubernetes pod name. ``dns`` All known DNS names of the source or destination (comma-separated) ``ip`` The IPv4 or IPv6 address ``reserved-identity`` Reserved identity label. ``workload`` Kubernetes pod's workload name and namespace in the form of ``namespace/workload-name``. ``workload-name`` Kubernetes pod's workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc). ``app`` Kubernetes pod's app name, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``). ===================== =================================================================================== When specifying the source and/or destination context, multiple contexts can be specified by separating them via the ``|`` symbol. When multiple are specified, then the first non-empty value is added to the metric as a label. For example, a metric configuration of ``flow:destinationContext=dns|ip`` will first try to use the DNS name of the target for the label. If no DNS name is known for the target, it will fall back and use the IP address of the target instead. .. note:: There are 3 cases in which the identity label list contains multiple reserved labels: 1. ``reserved:kube-apiserver`` and ``reserved:host`` 2. ``reserved:kube-apiserver`` and ``reserved:remote-node`` 3. ``reserved:kube-apiserver`` and ``reserved:world`` In all of these 3 cases, ``reserved-identity`` context returns ``reserved:kube-apiserver``. Hubble metrics can also be configured with a ``labelsContext`` which allows providing a list of labels that should be added to the metric. 
Unlike ``sourceContext`` and ``destinationContext``, instead of different values being put into the same metric label, the ``labelsContext`` puts them into different label values. ============================== =============================================================================== Option Value Description ============================== =============================================================================== ``source_ip`` The source IP of the flow. ``source_namespace`` The namespace of the pod if the flow source is from a Kubernetes pod. ``source_pod`` The pod name if the flow source is from a Kubernetes pod. ``source_workload`` The name of the source pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). ``source_workload_kind`` The kind of the source pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). ``source_app`` The app name of the source pod, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``). ``destination_ip`` The destination IP of the flow. ``destination_namespace`` The namespace of the pod if the flow destination is from a Kubernetes pod. ``destination_pod`` The pod name if the flow destination is from a Kubernetes pod. ``destination_workload`` The name of the destination pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)). ``destination_workload_kind`` The kind of the destination pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift). ``destination_app`` The app name of the destination pod, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``). ``traffic_direction`` Identifies the traffic direction of the flow. Possible values are ``ingress``, ``egress`` and ``unknown``. ============================== =============================================================================== When specifying the flow context, multiple values can be specified by separating them via the ``,`` symbol. All labels listed are included in the metric, even if empty. For example, a metric configuration of ``http:labelsContext=source_namespace,source_pod`` will add the ``source_namespace`` and ``source_pod`` labels to all Hubble HTTP metrics. .. note:: To limit metrics cardinality, Hubble removes data series bound to a specific pod one minute after the pod is deleted. A metric series is considered to be bound to a specific pod when at least one of the following conditions is met: * ``sourceContext`` is set to ``pod`` and the metric series has a ``source`` label matching ``<pod_namespace>/<pod_name>`` * ``destinationContext`` is set to ``pod`` and the metric series has a ``destination`` label matching ``<pod_namespace>/<pod_name>`` * ``labelsContext`` contains both ``source_namespace`` and ``source_pod`` and the metric series labels match the namespace and name of the deleted pod * ``labelsContext`` contains both ``destination_namespace`` and ``destination_pod`` and the metric series labels match the namespace and name of the deleted pod .. _hubble_exported_metrics: Exported Metrics ^^^^^^^^^^^^^^^^ Hubble metrics are exported under the ``hubble_`` Prometheus namespace. lost events ~~~~~~~~~~~ Unlike the other metrics, this metric is not directly tied to network flows. It is enabled if any of the other metrics is enabled.
================================ ======================================== ========== ================================================== Name Labels Default Description ================================ ======================================== ========== ================================================== ``lost_events_total`` ``source`` Enabled Number of lost events ================================ ======================================== ========== ================================================== Labels """""" - ``source`` identifies the source of lost events, one of: - ``perf_event_ring_buffer`` - ``observer_events_queue`` - ``hubble_ring_buffer`` ``dns`` ~~~~~~~ ================================ ======================================== ========== =================================== Name Labels Default Description ================================ ======================================== ========== =================================== ``dns_queries_total`` ``rcode``, ``qtypes``, ``ips_returned`` Disabled Number of DNS queries observed ``dns_responses_total`` ``rcode``, ``qtypes``, ``ips_returned`` Disabled Number of DNS responses observed ``dns_response_types_total`` ``type``, ``qtypes`` Disabled Number of DNS response types ================================ ======================================== ========== =================================== Options """"""" ============== ============= ==================================================================================== Option Key Option Value Description ============== ============= ==================================================================================== ``query`` N/A Include the query as label "query" ``ignoreAAAA`` N/A Ignore any AAAA requests/responses ============== ============= ==================================================================================== This metric supports :ref:`Context Options<hubble_context_options>`. ``drop`` ~~~~~~~~ ================================ ======================================== ========== =================================== Name Labels Default Description ================================ ======================================== ========== =================================== ``drop_total`` ``reason``, ``protocol`` Disabled Number of drops ================================ ======================================== ========== =================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. ``flow`` ~~~~~~~~ ================================ ======================================== ========== =================================== Name Labels Default Description ================================ ======================================== ========== =================================== ``flows_processed_total`` ``type``, ``subtype``, ``verdict`` Disabled Total number of flows processed ================================ ======================================== ========== =================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. ``flows-to-world`` ~~~~~~~~~~~~~~~~~~ This metric counts all non-reply flows containing the ``reserved:world`` label in their destination identity. By default, dropped flows are counted if and only if the drop reason is ``Policy denied``. Set ``any-drop`` option to count all dropped flows. 
================================ ======================================== ========== ============================================ Name Labels Default Description ================================ ======================================== ========== ============================================ ``flows_to_world_total`` ``protocol``, ``verdict`` Disabled Total number of flows to ``reserved:world``. ================================ ======================================== ========== ============================================ Options """"""" ============== ============= ====================================================== Option Key Option Value Description ============== ============= ====================================================== ``any-drop`` N/A Count any dropped flows regardless of the drop reason. ``port`` N/A Include the destination port as label ``port``. ``syn-only`` N/A Only count non-reply SYNs for TCP flows. ============== ============= ====================================================== This metric supports :ref:`Context Options<hubble_context_options>`. ``http`` ~~~~~~~~ Deprecated, use ``httpV2`` instead. These metrics can not be enabled at the same time as ``httpV2``. ================================= ======================================= ========== ============================================== Name Labels Default Description ================================= ======================================= ========== ============================================== ``http_requests_total`` ``method``, ``protocol``, ``reporter`` Disabled Count of HTTP requests ``http_responses_total`` ``method``, ``status``, ``reporter`` Disabled Count of HTTP responses ``http_request_duration_seconds`` ``method``, ``reporter`` Disabled Histogram of HTTP request duration in seconds ================================= ======================================= ========== ============================================== Labels """""" - ``method`` is the HTTP method of the request/response. - ``protocol`` is the HTTP protocol of the request, (For example: ``HTTP/1.1``, ``HTTP/2``). - ``status`` is the HTTP status code of the response. - ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is unknown. Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. ``httpV2`` ~~~~~~~~~~ ``httpV2`` is an updated version of the existing ``http`` metrics. These metrics can not be enabled at the same time as ``http``. The main difference is that ``http_requests_total`` and ``http_responses_total`` have been consolidated, and use the response flow data. Additionally, the ``http_request_duration_seconds`` metric source/destination related labels now are from the perspective of the request. In the ``http`` metrics, the source/destination were swapped, because the metric uses the response flow data, where the source/destination are swapped, but in ``httpV2`` we correctly account for this. 
================================= =================================================== ========== ============================================== Name Labels Default Description ================================= =================================================== ========== ============================================== ``http_requests_total`` ``method``, ``protocol``, ``status``, ``reporter`` Disabled Count of HTTP requests ``http_request_duration_seconds`` ``method``, ``reporter`` Disabled Histogram of HTTP request duration in seconds ================================= =================================================== ========== ============================================== Labels """""" - ``method`` is the HTTP method of the request/response. - ``protocol`` is the HTTP protocol of the request, (For example: ``HTTP/1.1``, ``HTTP/2``). - ``status`` is the HTTP status code of the response. - ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is unknown. Options """"""" ============== ============== ============================================================================================================= Option Key Option Value Description ============== ============== ============================================================================================================= ``exemplars`` ``true`` Include extracted trace IDs in HTTP metrics. Requires :ref:`OpenMetrics to be enabled<hubble_open_metrics>`. ============== ============== ============================================================================================================= This metric supports :ref:`Context Options<hubble_context_options>`. ``icmp`` ~~~~~~~~ ================================ ======================================== ========== =================================== Name Labels Default Description ================================ ======================================== ========== =================================== ``icmp_total`` ``family``, ``type`` Disabled Number of ICMP messages ================================ ======================================== ========== =================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. ``kafka`` ~~~~~~~~~ =================================== ===================================================== ========== ============================================== Name Labels Default Description =================================== ===================================================== ========== ============================================== ``kafka_requests_total`` ``topic``, ``api_key``, ``error_code``, ``reporter`` Disabled Count of Kafka requests by topic ``kafka_request_duration_seconds`` ``topic``, ``api_key``, ``reporter`` Disabled Histogram of Kafka request duration by topic =================================== ===================================================== ========== ============================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. 
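As a practical illustration of the metric and option syntax described in the sections above, the following Helm values are a minimal sketch (not an exhaustive reference) that enables the ``httpV2`` and ``kafka`` metrics with a few context options. The value layout assumes the upstream Cilium Helm chart, where ``hubble.metrics.enabled`` takes a list of metric specifications with options separated by ``;``; verify the exact keys against the chart version you deploy.

.. code-block:: yaml

   # Helm values sketch: enable selected Hubble L7 metrics with context options.
   hubble:
     enabled: true
     metrics:
       enableOpenMetrics: true   # needed for the httpV2 "exemplars" option
       enabled:
         # httpV2 with exemplars and workload-based source/destination labels
         - "httpV2:exemplars=true;sourceContext=workload-name;destinationContext=workload-name"
         # kafka metrics with namespace/pod labels via labelsContext
         - "kafka:labelsContext=source_namespace,source_pod,destination_namespace,destination_pod"

Since ``http`` and ``httpV2`` cannot be enabled at the same time, only ``httpV2`` is listed in this sketch.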
``port-distribution`` ~~~~~~~~~~~~~~~~~~~~~ ================================ ======================================== ========== ================================================== Name Labels Default Description ================================ ======================================== ========== ================================================== ``port_distribution_total`` ``protocol``, ``port`` Disabled Number of packets distributed by destination port ================================ ======================================== ========== ================================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. ``tcp`` ~~~~~~~ ================================ ======================================== ========== ================================================== Name Labels Default Description ================================ ======================================== ========== ================================================== ``tcp_flags_total`` ``flag``, ``family`` Disabled TCP flag occurrences ================================ ======================================== ========== ================================================== Options """"""" This metric supports :ref:`Context Options<hubble_context_options>`. dynamic_exporter_exporters_total ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This metric is emitted by the dynamic Hubble exporter. ==================================== ======================================== ========== ================================================== Name Labels Default Description ==================================== ======================================== ========== ================================================== ``dynamic_exporter_exporters_total`` ``status`` Enabled Number of configured Hubble exporters ==================================== ======================================== ========== ================================================== Labels """""" - ``status`` identifies the status of the exporters and can be one of: - ``active`` - ``inactive`` dynamic_exporter_up ~~~~~~~~~~~~~~~~~~~ This metric is emitted by the dynamic Hubble exporter. ==================================== ======================================== ========== ================================================== Name Labels Default Description ==================================== ======================================== ========== ================================================== ``dynamic_exporter_up`` ``name`` Enabled Status of the exporter (1 - active, 0 - inactive) ==================================== ======================================== ========== ================================================== Labels """""" - ``name`` identifies the exporter name dynamic_exporter_reconfigurations_total ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This metric is emitted by the dynamic Hubble exporter.
=========================================== ======================================== ========== ================================================== Name Labels Default Description =========================================== ======================================== ========== ================================================== ``dynamic_exporter_reconfigurations_total`` ``op`` Enabled Number of dynamic exporter reconfigurations =========================================== ======================================== ========== ================================================== Labels """""" - ``op`` identifies the reconfiguration operation type and can be one of: - ``add`` - ``update`` - ``remove`` dynamic_exporter_config_hash ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This metric is emitted by the dynamic Hubble exporter. ==================================== ======================================== ========== ================================================== Name Labels Default Description ==================================== ======================================== ========== ================================================== ``dynamic_exporter_config_hash`` Enabled Hash of last applied config ==================================== ======================================== ========== ================================================== dynamic_exporter_config_last_applied ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This metric is emitted by the dynamic Hubble exporter. ======================================== ======================================== ========== ================================================== Name Labels Default Description ======================================== ======================================== ========== ================================================== ``dynamic_exporter_config_last_applied`` Enabled Timestamp of last applied config ======================================== ======================================== ========== ================================================== .. _clustermesh_apiserver_metrics_reference: clustermesh-apiserver --------------------- Configuration ^^^^^^^^^^^^^ To expose any metrics, invoke ``clustermesh-apiserver`` with the ``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair but passing an empty IP (e.g. ``:9962``) will bind the server to all available interfaces (there is usually only one in a container). Exported Metrics ^^^^^^^^^^^^^^^^ All metrics are exported under the ``cilium_clustermesh_apiserver_`` Prometheus namespace.
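For convenience, the following is a minimal Helm values sketch for enabling these metrics when deploying Cilium with Cluster Mesh. The value names (``clustermesh.apiserver.metrics.*``) assume the upstream Cilium Helm chart, and the ``serviceMonitor`` option additionally assumes the Prometheus Operator CRDs are installed; check both against the chart version in use.

.. code-block:: yaml

   # Helm values sketch: expose clustermesh-apiserver metrics and, optionally,
   # create a Prometheus Operator ServiceMonitor that scrapes them.
   clustermesh:
     useAPIServer: true
     apiserver:
       metrics:
         enabled: true          # clustermesh-apiserver metrics (e.g. on port 9962)
         kvstoremesh:
           enabled: true        # kvstoremesh metrics (e.g. on port 9964)
         etcd:
           enabled: true        # metrics of the sidecar etcd instance
         serviceMonitor:
           enabled: true        # requires the Prometheus Operator CRDs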
Bootstrap ~~~~~~~~~ ======================================== ============================================ ======================================================== Name Labels Description ======================================== ============================================ ======================================================== ``bootstrap_seconds`` ``source_cluster`` Duration in seconds to complete bootstrap ======================================== ============================================ ======================================================== KVstore ~~~~~~~ ======================================== ============================================ ======================================================== Name Labels Description ======================================== ============================================ ======================================================== ``kvstore_operations_duration_seconds`` ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation ``kvstore_events_queue_seconds`` ``action``, ``scope`` Seconds waited before a received event was queued ``kvstore_quorum_errors_total`` ``error`` Number of quorum errors ``kvstore_sync_errors_total`` ``scope``, ``source_cluster`` Number of times synchronization to the kvstore failed ``kvstore_sync_queue_size`` ``scope``, ``source_cluster`` Number of elements queued for synchronization in the kvstore ``kvstore_initial_sync_completed`` ``scope``, ``source_cluster``, ``action`` Whether the initial synchronization from/to the kvstore has completed ======================================== ============================================ ======================================================== API Rate Limiting ~~~~~~~~~~~~~~~~~ ============================================== ========================================== ======================================================== Name Labels Description ============================================== ========================================== ======================================================== ``api_limiter_processed_requests_total`` ``api_call``, ``outcome``, ``return_code`` Total number of API requests processed ``api_limiter_processing_duration_seconds`` ``api_call``, ``value`` Mean and estimated processing duration in seconds ``api_limiter_rate_limit`` ``api_call``, ``value`` Current rate limiting configuration (limit and burst) ``api_limiter_requests_in_flight`` ``api_call`` ``value`` Current and maximum allowed number of requests in flight ``api_limiter_wait_duration_seconds`` ``api_call``, ``value`` Mean, min, and max wait duration ============================================== ========================================== ======================================================== Controllers ~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``controllers_group_runs_total`` ``status``, ``group_name`` Enabled Number of times that a controller process was run, labeled by controller group name ======================================== ================================================== ========== ======================================================== The ``controllers_group_runs_total`` metric reports the success and failure count of each controller within the 
system, labeled by controller group name and completion status. Enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the ``controller-group-metrics`` configuration flag. The current default set for ``clustermesh-apiserver`` found in the Cilium Helm chart is the special name "all", which enables the metric for all controller groups. The special name "none" is also supported. .. _kvstoremesh_metrics_reference: kvstoremesh ----------- Configuration ^^^^^^^^^^^^^ To expose any metrics, invoke ``kvstoremesh`` with the ``--prometheus-serve-addr`` option. This option takes a ``IP:Port`` pair but passing an empty IP (e.g. ``:9964``) binds the server to all available interfaces (there is usually only one interface in a container). Exported Metrics ^^^^^^^^^^^^^^^^ All metrics are exported under the ``cilium_kvstoremesh_`` Prometheus namespace. Bootstrap ~~~~~~~~~ ======================================== ============================================ ======================================================== Name Labels Description ======================================== ============================================ ======================================================== ``bootstrap_seconds`` ``source_cluster`` Duration in seconds to complete bootstrap ======================================== ============================================ ======================================================== Remote clusters ~~~~~~~~~~~~~~~ ==================================== ======================================= ================================================================= Name Labels Description ==================================== ======================================= ================================================================= ``remote_clusters`` ``source_cluster`` The total number of remote clusters meshed with the local cluster ``remote_cluster_failures`` ``source_cluster``, ``target_cluster`` The total number of failures related to the remote cluster ``remote_cluster_last_failure_ts`` ``source_cluster``, ``target_cluster`` The timestamp of the last failure of the remote cluster ``remote_cluster_readiness_status`` ``source_cluster``, ``target_cluster`` The readiness status of the remote cluster ==================================== ======================================= ================================================================= KVstore ~~~~~~~ ======================================== ============================================ ======================================================== Name Labels Description ======================================== ============================================ ======================================================== ``kvstore_operations_duration_seconds`` ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation ``kvstore_events_queue_seconds`` ``action``, ``scope`` Seconds waited before a received event was queued ``kvstore_quorum_errors_total`` ``error`` Number of quorum errors ``kvstore_sync_errors_total`` ``scope``, ``source_cluster`` Number of times synchronization to the kvstore failed ``kvstore_sync_queue_size`` ``scope``, ``source_cluster`` Number of elements queued for synchronization in the kvstore ``kvstore_initial_sync_completed`` ``scope``, ``source_cluster``, ``action`` Whether the initial synchronization from/to the kvstore has completed ======================================== ============================================ 
======================================================== API Rate Limiting ~~~~~~~~~~~~~~~~~ ============================================== ========================================== ======================================================== Name Labels Description ============================================== ========================================== ======================================================== ``api_limiter_processed_requests_total`` ``api_call``, ``outcome``, ``return_code`` Total number of API requests processed ``api_limiter_processing_duration_seconds`` ``api_call``, ``value`` Mean and estimated processing duration in seconds ``api_limiter_rate_limit`` ``api_call``, ``value`` Current rate limiting configuration (limit and burst) ``api_limiter_requests_in_flight`` ``api_call``, ``value`` Current and maximum allowed number of requests in flight ``api_limiter_wait_duration_seconds`` ``api_call``, ``value`` Mean, min, and max wait duration ============================================== ========================================== ======================================================== Controllers ~~~~~~~~~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``controllers_group_runs_total`` ``status``, ``group_name`` Enabled Number of times that a controller process was run, labeled by controller group name ======================================== ================================================== ========== ======================================================== The ``controllers_group_runs_total`` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the ``controller-group-metrics`` configuration flag. The current default set for ``kvstoremesh`` found in the Cilium Helm chart is the special name "all", which enables the metric for all controller groups. The special name "none" is also supported. .. _nat_metrics: NAT ~~~ ======================================== ================================================== ========== ======================================================== Name Labels Default Description ======================================== ================================================== ========== ======================================================== ``nat_endpoint_max_connection`` ``family`` Enabled Saturation of the most saturated distinct NAT mapped connection, in terms of egress-IP and remote endpoint address. ======================================== ================================================== ========== ======================================================== These metrics are for monitoring Cilium's NAT mapping functionality. NAT is used by features such as Egress Gateway and BPF masquerading. The NAT map holds mappings for masqueraded connections. Connections held in the NAT table that are masqueraded with the same egress IP and destined to the same remote endpoint IP and port all require a unique source port for the mapping. This means that the number of connections a Node can masquerade towards a distinct external endpoint is limited by the range of available ephemeral source ports.
For a Node forwarding one or more such egress-IP and remote-endpoint tuples, the ``nat_endpoint_max_connection`` metric reports the saturation of the most saturated tuple, as a percentage of the possible source ports available. This metric is especially useful with the egress gateway feature, where it is possible to overload a Node if many connections all target the same endpoint. This metric should normally be fairly low; a high value may indicate that a Node is reaching its limit for connections to one or more external endpoints.
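To turn this guidance into something actionable, the following Prometheus alerting rule is a sketch of how such saturation could be surfaced. It assumes the agent exports this metric with the ``cilium_`` prefix (``cilium_nat_endpoint_max_connection``), that the value is a percentage of used source ports, and that the scrape configuration attaches a ``pod`` label identifying the agent; adjust the threshold and labels to your environment.

.. code-block:: yaml

   # Prometheus alerting rule sketch: warn when the most saturated NAT tuple
   # on any node uses more than 85% of its possible source ports.
   groups:
     - name: cilium-nat
       rules:
         - alert: CiliumNatPortSaturationHigh
           expr: max by (pod, family) (cilium_nat_endpoint_max_connection) > 85
           for: 10m
           labels:
             severity: warning
           annotations:
             summary: "High NAT source-port saturation on {{ $labels.pod }}"
             description: "An egress-IP / remote-endpoint tuple on this node is close to exhausting its ephemeral source ports."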
a received event was queued kvstore quorum errors total error Enabled Number of quorum errors kvstore sync errors total scope source cluster Enabled Number of times synchronization to the kvstore failed kvstore sync queue size scope source cluster Enabled Number of elements queued for synchronization in the kvstore kvstore initial sync completed scope source cluster action Enabled Whether the initial synchronization from to the kvstore has completed Agent Name Labels Default Description agent bootstrap seconds scope outcome Enabled Duration of various bootstrap phases api process time seconds Enabled Processing time of all the API calls made to the cilium agent labeled by API method API path and returned HTTP code FQDN Name Labels Default Description fqdn gc deletions total Enabled Number of FQDNs that have been cleaned on FQDN garbage collector job fqdn active names endpoint Disabled Number of domains inside the DNS cache that have not expired by TTL per endpoint fqdn active ips endpoint Disabled Number of IPs inside the DNS cache associated with a domain that has not expired by TTL per endpoint fqdn alive zombie connections endpoint Disabled Number of IPs associated with domains that have expired by TTL yet still associated with an active connection aka zombie per endpoint fqdn selectors Enabled Number of registered ToFQDN selectors Jobs Name Labels Default Description jobs errors total job Enabled Number of jobs runs that returned an error jobs one shot run seconds job Enabled Histogram of one shot job run duration jobs timer run seconds job Enabled Histogram of timer job run duration jobs observer run seconds job Enabled Histogram of observer job run duration CIDRGroups Name Labels Default Description cidrgroups referenced Enabled Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup CNPs with empty or non existing CIDRGroupRefs are not considered cidrgroup translation time stats seconds Disabled CIDRGroup translation time stats metrics api rate limiting API Rate Limiting Name Labels Default Description api limiter adjustment factor api call Enabled Most recent adjustment factor for automatic adjustment api limiter processed requests total api call outcome return code Enabled Total number of API requests processed api limiter processing duration seconds api call value Enabled Mean and estimated processing duration in seconds api limiter rate limit api call value Enabled Current rate limiting configuration limit and burst api limiter requests in flight api call value Enabled Current and maximum allowed number of requests in flight api limiter wait duration seconds api call value Enabled Mean min and max wait duration api limiter wait history duration seconds api call Disabled Histogram of wait duration per API call processed metrics bgp control plane BGP Control Plane Name Labels Default Description session state vrouter neighbor neighbor asn Enabled Current state of the BGP session with the peer Up 1 or Down 0 advertised routes vrouter neighbor neighbor asn afi safi Enabled Number of routes advertised to the peer received routes vrouter neighbor neighbor asn afi safi Enabled Number of routes received from the peer All metrics are enabled only when the BGP Control Plane is enabled cilium operator Configuration cilium operator can be configured to serve metrics by running with the option enable metrics By default the operator will expose metrics on port 9963 the port can be changed with the option operator prometheus serve addr Exported Metrics All metrics are exported under 
the cilium operator Prometheus namespace ipam metrics IPAM Note IPAM metrics are all Enabled only if using the AWS Alibabacloud or Azure IPAM plugins Name Labels Default Description ipam ips type Enabled Number of IPs allocated ipam ip allocation ops subnet id Enabled Number of IP allocation operations ipam ip release ops subnet id Enabled Number of IP release operations ipam interface creation ops subnet id Enabled Number of interfaces creation operations ipam release duration seconds type status subnet id Enabled Release ip or interface latency in seconds ipam allocation duration seconds type status subnet id Enabled Allocation ip or interface latency in seconds ipam available interfaces Enabled Number of interfaces with addresses available ipam nodes category Enabled Number of nodes by category total in deficit at capacity ipam resync total Enabled Number of synchronization operations with external IPAM API ipam api duration seconds operation response code Enabled Duration of interactions with external IPAM API ipam api rate limit duration seconds operation Enabled Duration of rate limiting while accessing external IPAM API ipam available ips target node Enabled Number of available IPs on a node taking into account plugin specific NIC Address limits ipam used ips target node Enabled Number of currently used IPs on a node ipam needed ips target node Enabled Number of IPs needed to satisfy allocation on a node LB IPAM Name Labels Default Description lbipam conflicting pools total Enabled Number of conflicting pools lbipam ips available total pool Enabled Number of available IPs per pool lbipam ips used total pool Enabled Number of used IPs per pool lbipam services matching total Enabled Number of matching services lbipam services unsatisfied total Enabled Number of services which did not get requested IPs Controllers Name Labels Default Description controllers group runs total status group name Enabled Number of times that a controller process was run labeled by controller group name The controllers group runs total metric reports the success and failure count of each controller within the system labeled by controller group name and completion status Due to the large number of controllers enabling this metric is on a per controller basis This is configured using an allow list which is passed as the controller group metrics configuration flag or the prometheus controllerGroupMetrics helm value The current recommended default set of group names can be found in the values file of the Cilium Helm chart The special names all and none are supported ces metrics CiliumEndpointSlices CES Name Labels Description number of ceps per ces The number of CEPs batched in a CES number of cep changes per ces opcode The number of changed CEPs in each CES update ces sync total outcome The number of completed CES syncs by outcome ces queueing delay seconds CiliumEndpointSlice queueing delay in seconds Unmanaged Pods Name Labels Default Description unmanaged pods Enabled The total number of pods observed to be unmanaged by Cilium operator Double Write Identity Allocation Mode When the ref Double Write double write migration identity allocation mode is enabled the following metrics are available Name Labels Default Description doublewrite identity crd total count Enabled The total number of CRD identities doublewrite identity kvstore total count Enabled The total number of identities in the KVStore doublewrite identity crd only count Enabled The number of CRD identities not present in the KVStore doublewrite 
identity kvstore only count Enabled The number of identities in the KVStore not present as a CRD Hubble Configuration Hubble metrics are served by a Hubble instance running inside cilium agent The command line options to configure them are enable hubble hubble metrics server and hubble metrics hubble metrics server takes an IP Port pair but passing an empty IP e g 9965 will bind the server to all available interfaces hubble metrics takes a comma separated list of metrics It s also possible to configure Hubble metrics to listen with TLS and optionally use mTLS for authentication For details see ref hubble configure metrics tls Some metrics can take additional semicolon separated options per metric e g hubble metrics dns query ignoreAAAA http destinationContext workload name will enable the dns metric with the query and ignoreAAAA options and the http metric with the destinationContext workload name option hubble context options Context Options Hubble metrics support configuration via context options Supported context options for all metrics sourceContext Configures the source label on metrics for both egress and ingress traffic sourceEgressContext Configures the source label on metrics for egress traffic takes precedence over sourceContext sourceIngressContext Configures the source label on metrics for ingress traffic takes precedence over sourceContext destinationContext Configures the destination label on metrics for both egress and ingress traffic destinationEgressContext Configures the destination label on metrics for egress traffic takes precedence over destinationContext destinationIngressContext Configures the destination label on metrics for ingress traffic takes precedence over destinationContext labelsContext Configures a list of labels to be enabled on metrics There are also some context options that are specific to certain metrics See the documentation for the individual metrics to see what options are available for each See below for details on each of the different context options Most Hubble metrics can be configured to add the source and or destination context as a label using the sourceContext and destinationContext options The possible values are Option Value Description identity All Cilium security identity labels namespace Kubernetes namespace name pod Kubernetes pod name and namespace name in the form of namespace pod pod name Kubernetes pod name dns All known DNS names of the source or destination comma separated ip The IPv4 or IPv6 address reserved identity Reserved identity label workload Kubernetes pod s workload name and namespace in the form of namespace workload name workload name Kubernetes pod s workload name workloads are Deployment Statefulset Daemonset ReplicationController CronJob Job DeploymentConfig OpenShift etc app Kubernetes pod s app name derived from pod labels app kubernetes io name k8s app or app When specifying the source and or destination context multiple contexts can be specified by separating them via the symbol When multiple are specified then the first non empty value is added to the metric as a label For example a metric configuration of flow destinationContext dns ip will first try to use the DNS name of the target for the label If no DNS name is known for the target it will fall back and use the IP address of the target instead note There are 3 cases in which the identity label list contains multiple reserved labels 1 reserved kube apiserver and reserved host 2 reserved kube apiserver and reserved remote node 3 reserved kube apiserver and 
reserved world In all of these 3 cases reserved identity context returns reserved kube apiserver Hubble metrics can also be configured with a labelsContext which allows providing a list of labels that should be added to the metric Unlike sourceContext and destinationContext instead of different values being put into the same metric label the labelsContext puts them into different label values Option Value Description source ip The source IP of the flow source namespace The namespace of the pod if the flow source is from a Kubernetes pod source pod The pod name if the flow source is from a Kubernetes pod source workload The name of the source pod s workload Deployment Statefulset Daemonset ReplicationController CronJob Job DeploymentConfig OpenShift source workload kind The kind of the source pod s workload for example Deployment Statefulset Daemonset ReplicationController CronJob Job DeploymentConfig OpenShift source app The app name of the source pod derived from pod labels app kubernetes io name k8s app or app destination ip The destination IP of the flow destination namespace The namespace of the pod if the flow destination is from a Kubernetes pod destination pod The pod name if the flow destination is from a Kubernetes pod destination workload The name of the destination pod s workload Deployment Statefulset Daemonset ReplicationController CronJob Job DeploymentConfig OpenShift destination workload kind The kind of the destination pod s workload for example Deployment Statefulset Daemonset ReplicationController CronJob Job DeploymentConfig OpenShift destination app The app name of the source pod derived from pod labels app kubernetes io name k8s app or app traffic direction Identifies the traffic direction of the flow Possible values are ingress egress and unknown When specifying the flow context multiple values can be specified by separating them via the symbol All labels listed are included in the metric even if empty For example a metric configuration of http labelsContext source namespace source pod will add the source namespace and source pod labels to all Hubble HTTP metrics note To limit metrics cardinality hubble will remove data series bound to specific pod after one minute from pod deletion Metric is considered to be bound to a specific pod when at least one of the following conditions is met sourceContext is set to pod and metric series has source label matching pod namespace pod name destinationContext is set to pod and metric series has destination label matching pod namespace pod name labelsContext contains both source namespace and source pod and metric series labels match namespace and name of deleted pod labelsContext contains both destination namespace and destination pod and metric series labels match namespace and name of deleted pod hubble exported metrics Exported Metrics Hubble metrics are exported under the hubble Prometheus namespace lost events This metric unlike other ones is not directly tied to network flows It s enabled if any of the other metrics is enabled Name Labels Default Description lost events total source Enabled Number of lost events Labels source identifies the source of lost events one of perf event ring buffer observer events queue hubble ring buffer dns Name Labels Default Description dns queries total rcode qtypes ips returned Disabled Number of DNS queries observed dns responses total rcode qtypes ips returned Disabled Number of DNS responses observed dns response types total type qtypes Disabled Number of DNS response types Options Option 
Key Option Value Description query N A Include the query as label query ignoreAAAA N A Ignore any AAAA requests responses This metric supports ref Context Options hubble context options drop Name Labels Default Description drop total reason protocol Disabled Number of drops Options This metric supports ref Context Options hubble context options flow Name Labels Default Description flows processed total type subtype verdict Disabled Total number of flows processed Options This metric supports ref Context Options hubble context options flows to world This metric counts all non reply flows containing the reserved world label in their destination identity By default dropped flows are counted if and only if the drop reason is Policy denied Set any drop option to count all dropped flows Name Labels Default Description flows to world total protocol verdict Disabled Total number of flows to reserved world Options Option Key Option Value Description any drop N A Count any dropped flows regardless of the drop reason port N A Include the destination port as label port syn only N A Only count non reply SYNs for TCP flows This metric supports ref Context Options hubble context options http Deprecated use httpV2 instead These metrics can not be enabled at the same time as httpV2 Name Labels Default Description http requests total method protocol reporter Disabled Count of HTTP requests http responses total method status reporter Disabled Count of HTTP responses http request duration seconds method reporter Disabled Histogram of HTTP request duration in seconds Labels method is the HTTP method of the request response protocol is the HTTP protocol of the request For example HTTP 1 1 HTTP 2 status is the HTTP status code of the response reporter identifies the origin of the request response It is set to client if it originated from the client server if it originated from the server or unknown if its origin is unknown Options This metric supports ref Context Options hubble context options httpV2 httpV2 is an updated version of the existing http metrics These metrics can not be enabled at the same time as http The main difference is that http requests total and http responses total have been consolidated and use the response flow data Additionally the http request duration seconds metric source destination related labels now are from the perspective of the request In the http metrics the source destination were swapped because the metric uses the response flow data where the source destination are swapped but in httpV2 we correctly account for this Name Labels Default Description http requests total method protocol status reporter Disabled Count of HTTP requests http request duration seconds method reporter Disabled Histogram of HTTP request duration in seconds Labels method is the HTTP method of the request response protocol is the HTTP protocol of the request For example HTTP 1 1 HTTP 2 status is the HTTP status code of the response reporter identifies the origin of the request response It is set to client if it originated from the client server if it originated from the server or unknown if its origin is unknown Options Option Key Option Value Description exemplars true Include extracted trace IDs in HTTP metrics Requires ref OpenMetrics to be enabled hubble open metrics This metric supports ref Context Options hubble context options icmp Name Labels Default Description icmp total family type Disabled Number of ICMP messages Options This metric supports ref Context Options hubble context options kafka 
Name Labels Default Description kafka requests total topic api key error code reporter Disabled Count of Kafka requests by topic kafka request duration seconds topic api key reporter Disabled Histogram of Kafka request duration by topic Options This metric supports ref Context Options hubble context options port distribution Name Labels Default Description port distribution total protocol port Disabled Numbers of packets distributed by destination port Options This metric supports ref Context Options hubble context options tcp Name Labels Default Description tcp flags total flag family Disabled TCP flag occurrences Options This metric supports ref Context Options hubble context options dynamic exporter exporters total This is dynamic hubble exporter metric Name Labels Default Description dynamic exporter exporters total source Enabled Number of configured hubble exporters Labels status identifies status of exporters can be one of active inactive dynamic exporter up This is dynamic hubble exporter metric Name Labels Default Description dynamic exporter up source Enabled Status of exporter 1 active 0 inactive Labels name identifies exporter name dynamic exporter reconfigurations total This is dynamic hubble exporter metric Name Labels Default Description dynamic exporter reconfigurations total op Enabled Number of dynamic exporters reconfigurations Labels op identifies reconfiguration operation type can be one of add update remove dynamic exporter config hash This is dynamic hubble exporter metric Name Labels Default Description dynamic exporter config hash Enabled Hash of last applied config dynamic exporter config last applied This is dynamic hubble exporter metric Name Labels Default Description dynamic exporter config last applied Enabled Timestamp of last applied config clustermesh apiserver metrics reference clustermesh apiserver Configuration To expose any metrics invoke clustermesh apiserver with the prometheus serve addr option This option takes a IP Port pair but passing an empty IP e g 9962 will bind the server to all available interfaces there is usually only one in a container Exported Metrics All metrics are exported under the cilium clustermesh apiserver Prometheus namespace Bootstrap Name Labels Description bootstrap seconds source cluster Duration in seconds to complete bootstrap KVstore Name Labels Description kvstore operations duration seconds action kind outcome scope Duration of kvstore operation kvstore events queue seconds action scope Seconds waited before a received event was queued kvstore quorum errors total error Number of quorum errors kvstore sync errors total scope source cluster Number of times synchronization to the kvstore failed kvstore sync queue size scope source cluster Number of elements queued for synchronization in the kvstore kvstore initial sync completed scope source cluster action Whether the initial synchronization from to the kvstore has completed API Rate Limiting Name Labels Description api limiter processed requests total api call outcome return code Total number of API requests processed api limiter processing duration seconds api call value Mean and estimated processing duration in seconds api limiter rate limit api call value Current rate limiting configuration limit and burst api limiter requests in flight api call value Current and maximum allowed number of requests in flight api limiter wait duration seconds api call value Mean min and max wait duration Controllers Name Labels Default Description controllers group runs total status 
group name Enabled Number of times that a controller process was run labeled by controller group name The controllers group runs total metric reports the success and failure count of each controller within the system labeled by controller group name and completion status Enabling this metric is on a per controller basis This is configured using an allow list which is passed as the controller group metrics configuration flag The current default set for clustermesh apiserver found in the Cilium Helm chart is the special name all which enables the metric for all controller groups The special name none is also supported kvstoremesh metrics reference kvstoremesh Configuration To expose any metrics invoke kvstoremesh with the prometheus serve addr option This option takes a IP Port pair but passing an empty IP e g 9964 binds the server to all available interfaces there is usually only one interface in a container Exported Metrics All metrics are exported under the cilium kvstoremesh Prometheus namespace Bootstrap Name Labels Description bootstrap seconds source cluster Duration in seconds to complete bootstrap Remote clusters Name Labels Description remote clusters source cluster The total number of remote clusters meshed with the local cluster remote cluster failures source cluster target cluster The total number of failures related to the remote cluster remote cluster last failure ts source cluster target cluster The timestamp of the last failure of the remote cluster remote cluster readiness status source cluster target cluster The readiness status of the remote cluster KVstore Name Labels Description kvstore operations duration seconds action kind outcome scope Duration of kvstore operation kvstore events queue seconds action scope Seconds waited before a received event was queued kvstore quorum errors total error Number of quorum errors kvstore sync errors total scope source cluster Number of times synchronization to the kvstore failed kvstore sync queue size scope source cluster Number of elements queued for synchronization in the kvstore kvstore initial sync completed scope source cluster action Whether the initial synchronization from to the kvstore has completed API Rate Limiting Name Labels Description api limiter processed requests total api call outcome return code Total number of API requests processed api limiter processing duration seconds api call value Mean and estimated processing duration in seconds api limiter rate limit api call value Current rate limiting configuration limit and burst api limiter requests in flight api call value Current and maximum allowed number of requests in flight api limiter wait duration seconds api call value Mean min and max wait duration Controllers Name Labels Default Description controllers group runs total status group name Enabled Number of times that a controller process was run labeled by controller group name The controllers group runs total metric reports the success and failure count of each controller within the system labeled by controller group name and completion status Enabling this metric is on a per controller basis This is configured using an allow list which is passed as the controller group metrics configuration flag The current default set for kvstoremesh found in the Cilium Helm chart is the special name all which enables the metric for all controller groups The special name none is also supported NAT nat metrics Name Labels Default Description nat endpoint max connection family Enabled Saturation of the most saturated 
distinct NAT mapped connection in terms of egress IP and remote endpoint address These metrics are for monitoring Cilium s NAT mapping functionality NAT is used by features such as Egress Gateway and BPF masquerading The NAT map holds mappings for masqueraded connections Connection held in the NAT table that are masqueraded with the same egress IP and are going to the same remote endpoints IP and port all require a unique source port for the mapping This means that any Node masquerading connections to a distinct external endpoint is limited by the possible ephemeral source ports Given a Node forwarding one or more such egress IP and remote endpoint tuples the nat endpoint max connection metric is the most saturated such connection in terms of a percent of possible source ports available This metric is especially useful when using the egress gateway feature where it s possible to overload a Node if many connections are all going to the same endpoint In general this metric should normally be fairly low A high number here may indicate that a Node is reaching its limit for connections to one or more external endpoints
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _install_metrics: **************************** Running Prometheus & Grafana **************************** Install Prometheus & Grafana ============================ This is an example deployment that includes Prometheus and Grafana in a single deployment. .. admonition:: Video :class: attention You can also see Cilium and Grafana in action on `eCHO episode 68: Cilium & Grafana <https://www.youtube.com/watch?v=DdWksYq5Pv4>`__. The default installation contains: - **Grafana**: A visualization dashboard with Cilium Dashboard pre-loaded. - **Prometheus**: a time series database and monitoring system. .. parsed-literal:: $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/addons/prometheus/monitoring-example.yaml namespace/cilium-monitoring created serviceaccount/prometheus-k8s created configmap/grafana-config created configmap/grafana-cilium-dashboard created configmap/grafana-cilium-operator-dashboard created configmap/grafana-hubble-dashboard created configmap/prometheus created clusterrole.rbac.authorization.k8s.io/prometheus unchanged clusterrolebinding.rbac.authorization.k8s.io/prometheus unchanged service/grafana created service/prometheus created deployment.apps/grafana created deployment.apps/prometheus created This example deployment of Prometheus and Grafana will automatically scrape the Cilium and Hubble metrics. See the :ref:`metrics` configuration guide on how to configure a custom Prometheus instance. Deploy Cilium and Hubble with metrics enabled ============================================= *Cilium*, *Hubble*, and *Cilium Operator* do not expose metrics by default. Enabling metrics for these services will open ports ``9962``, ``9965``, and ``9963`` respectively on all nodes of your cluster where these components are running. The metrics for Cilium, Hubble, and Cilium Operator can all be enabled independently of each other with the following Helm values: - ``prometheus.enabled=true``: Enables metrics for ``cilium-agent``. - ``operator.prometheus.enabled=true``: Enables metrics for ``cilium-operator``. - ``hubble.metrics.enabled``: Enables the provided list of Hubble metrics. For Hubble metrics to work, Hubble itself needs to be enabled with ``hubble.enabled=true``. See :ref:`Hubble exported metrics<hubble_exported_metrics>` for the list of available Hubble metrics. Refer to :ref:`metrics` for more details about the individual metrics. .. include:: ../installation/k8s-install-download-release.rst Deploy Cilium via Helm as follows to enable all metrics: .. parsed-literal:: helm install cilium |CHART_RELEASE| \\ --namespace kube-system \\ --set prometheus.enabled=true \\ --set operator.prometheus.enabled=true \\ --set hubble.enabled=true \\ --set hubble.metrics.enableOpenMetrics=true \\ --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\\,source_namespace\\,source_workload\\,destination_ip\\,destination_namespace\\,destination_workload\\,traffic_direction}" .. note:: You can combine the above Helm options with any of the other installation guides. How to access Grafana ===================== Expose the port on your local machine .. 
code-block:: shell-session kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000 Access it via your browser: http://localhost:3000 How to access Prometheus ======================== Expose the port on your local machine .. code-block:: shell-session kubectl -n cilium-monitoring port-forward service/prometheus --address 0.0.0.0 --address :: 9090:9090 Access it via your browser: http://localhost:9090 Examples ======== Generic ------- .. image:: images/grafana_generic.png Network ------- .. image:: images/grafana_network.png Policy ------- .. image:: images/grafana_policy.png .. image:: images/grafana_policy2.png Endpoints --------- .. image:: images/grafana_endpoints.png Controllers ----------- .. image:: images/grafana_controllers.png Kubernetes ---------- .. image:: images/grafana_k8s.png Hubble General Processing ------------------------- .. image:: images/grafana_hubble_general_processing.png Hubble Networking ----------------- .. note:: The ``port-distribution`` metric is disabled by default. Refer to :ref:`metrics` for more details about the individual metrics. .. image:: images/grafana_hubble_network.png .. image:: images/grafana_hubble_tcp.png .. image:: images/grafana_hubble_icmp.png Hubble DNS ---------- .. image:: images/grafana_hubble_dns.png Hubble HTTP ----------- .. image:: images/grafana_hubble_http.png Hubble Network Policy --------------------- .. image:: images/grafana_hubble_network_policy.png
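Verify that metrics are exposed
=================================

If the dashboards above stay empty, it can help to first confirm that the metric endpoints respond at all before debugging the Prometheus scrape configuration. A minimal check, assuming the default ports from the Helm values above (``9962`` for ``cilium-agent``, ``9965`` for Hubble) and that ``curl`` is available on your machine:

.. code-block:: shell-session

    # Forward the cilium-agent metrics port from one of the agent pods
    $ kubectl -n kube-system port-forward ds/cilium 9962:9962 &
    # A healthy endpoint returns Prometheus text-format series prefixed with cilium_
    $ curl -s http://localhost:9962/metrics | grep -c '^cilium_'

    # Repeat for the Hubble metrics port (only relevant if hubble.metrics.enabled was set)
    $ kubectl -n kube-system port-forward ds/cilium 9965:9965 &
    $ curl -s http://localhost:9965/metrics | grep '^hubble_' | head

If no ``cilium_*`` or ``hubble_*`` series are returned, revisit the Helm values from the previous section before troubleshooting Prometheus or Grafana themselves.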
Service Mesh Troubleshooting
============================

Install the Cilium CLI
----------------------

.. include:: /installation/cli-download.rst

Generic
-------

#. Validate that the ``ds/cilium`` and ``deployment/cilium-operator`` pods are healthy and ready.

   .. code-block:: shell-session

      $ cilium status

Manual Verification of Setup
----------------------------

#. Validate that ``nodePort.enabled`` is true.

   .. code-block:: shell-session

      $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status --verbose
      ...
      KubeProxyReplacement Details:
      ...
        Services:
        - ClusterIP:      Enabled
        - NodePort:       Enabled (Range: 30000-32767)
      ...

#. Validate that the runtime values of ``enable-envoy-config`` and ``enable-ingress-controller`` are true. The ingress controller flag is optional if you only use the ``CiliumEnvoyConfig`` or ``CiliumClusterwideEnvoyConfig`` CRDs.

   .. code-block:: shell-session

      $ kubectl -n kube-system get cm cilium-config -o json | egrep "enable-ingress-controller|enable-envoy-config"
              "enable-envoy-config": "true",
              "enable-ingress-controller": "true",

Ingress Troubleshooting
-----------------------

Internally, the Cilium Ingress controller creates one Load Balancer service, one ``CiliumEnvoyConfig`` and one dummy Endpoint resource for each Ingress resource.

.. code-block:: shell-session

    $ kubectl get ingress
    NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE
    basic-ingress   cilium   *       10.97.60.117   80      16m

    # For dedicated Load Balancer mode
    $ kubectl get service cilium-ingress-basic-ingress
    NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

    # For dedicated Load Balancer mode
    $ kubectl get cec cilium-ingress-default-basic-ingress
    NAME                                   AGE
    cilium-ingress-default-basic-ingress   18m

    # For shared Load Balancer mode
    $ kubectl get services -n kube-system cilium-ingress
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    cilium-ingress   LoadBalancer   10.111.109.99   10.111.109.99   80:32690/TCP,443:31566/TCP   38m

    # For shared Load Balancer mode
    $ kubectl get cec -n kube-system cilium-ingress
    NAME             AGE
    cilium-ingress   15m

#. Validate that the Load Balancer service has either an external IP or an FQDN assigned. If it's not available after a long time, please check the Load Balancer related documentation from your respective cloud provider.

#. Check if there is any warning or error message while Cilium is trying to provision the ``CiliumEnvoyConfig`` resource. This is unlikely to happen for CEC resources originating from the Cilium Ingress controller.

.. include:: /network/servicemesh/warning.rst

Connectivity Troubleshooting
----------------------------

This section covers troubleshooting of connectivity issues, mainly for Ingress resources, but the same steps can be applied to manually configured ``CiliumEnvoyConfig`` resources as well.

It's best to have ``debug`` and ``debug-verbose`` enabled with the values below. Note that any change to Cilium flags requires a restart of the Cilium agent and operator.

.. code-block:: shell-session

    $ kubectl get -n kube-system cm cilium-config -o json | grep "debug"
        "debug": "true",
        "debug-verbose": "flow",

.. note::

    The originating source IP is used for enforcing ingress traffic.

The request normally traverses from the LoadBalancer service to a pre-assigned port on your node, then gets forwarded to the Cilium Envoy proxy, and finally gets proxied to the actual backend service.

#. The first step, from the cloud Load Balancer to the node port, is out of Cilium's scope.
   Please check the related documentation from your respective cloud provider to make sure your clusters are configured properly.

#. The second step can be checked by connecting to your underlying host with SSH and sending a similar request to localhost on the relevant port:

   .. code-block:: shell-session

      $ kubectl get service cilium-ingress-basic-ingress
      NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
      cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

      # After ssh to any of the k8s nodes
      $ curl -v http://localhost:31911/
      *   Trying 127.0.0.1:31911...
      * TCP_NODELAY set
      * Connected to localhost (127.0.0.1) port 31911 (#0)
      > GET / HTTP/1.1
      > Host: localhost:31911
      > User-Agent: curl/7.68.0
      > Accept: */*
      >
      * Mark bundle as not supporting multiuse
      < HTTP/1.1 503 Service Unavailable
      < content-length: 19
      < content-type: text/plain
      < date: Thu, 07 Jul 2022 12:25:56 GMT
      < server: envoy
      <
      * Connection #0 to host localhost left intact

      # Flows for world identity
      $ kubectl -n kube-system exec ds/cilium -- hubble observe -f --identity 2
      Jul  7 12:28:27.970: 127.0.0.1:54704 <- 127.0.0.1:13681 http-response FORWARDED (HTTP/1.1 503 0ms (GET http://localhost:31911/))

   Alternatively, you can also send a request directly to the Envoy proxy port. For Ingress, the proxy port is randomly assigned by the Cilium Ingress controller. For manually configured ``CiliumEnvoyConfig`` resources, the proxy port is retrieved directly from the spec.

   .. code-block:: shell-session

      $ kubectl logs -f -n kube-system ds/cilium --timestamps | egrep "envoy|proxy"
      ...
      2022-07-08T08:05:13.986649816Z level=info msg="Adding new proxy port rules for cilium-ingress-default-basic-ingress:19672" proxy port name=cilium-ingress-default-basic-ingress subsys=proxy

      # After ssh to any of the k8s nodes, send a request to the Envoy proxy port directly
      $ curl -v http://localhost:19672
      *   Trying 127.0.0.1:19672...
      * TCP_NODELAY set
      * Connected to localhost (127.0.0.1) port 19672 (#0)
      > GET / HTTP/1.1
      > Host: localhost:19672
      > User-Agent: curl/7.68.0
      > Accept: */*
      >
      * Mark bundle as not supporting multiuse
      < HTTP/1.1 503 Service Unavailable
      < content-length: 19
      < content-type: text/plain
      < date: Fri, 08 Jul 2022 08:12:35 GMT
      < server: envoy

   If you see a response similar to the above, the request is being redirected to the proxy successfully; the HTTP response includes the ``server: envoy`` header. The same can be observed with the ``hubble observe`` command (see :ref:`hubble_troubleshooting`). The most common root cause is either that the Cilium Envoy proxy is not running on the node, or that there is some other issue with CEC resource provisioning.

   .. code-block:: shell-session

      $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status
      ...
      Controller Status:       49/49 healthy
      Proxy Status:            OK, ip 10.0.0.25, 6 redirects active on ports 10000-20000
      Global Identity Range:   min 256, max 65535

#. Assuming that the above steps completed successfully, you can proceed to send a request via the external IP or FQDN. Double-check whether your backend service is up and healthy. The Envoy clusters of type EDS (Endpoint Discovery Service) are named following the convention ``<namespace>/<service-name>:<port>``.

   .. code-block:: shell-session

      $ LB_IP=$(kubectl get ingress basic-ingress -o json | jq '.status.loadBalancer.ingress[0].ip' | jq -r .)
$ curl -s http://$LB_IP/details/1 no healthy upstream $ kubectl get cec cilium-ingress-default-basic-ingress -o json | jq '.spec.resources[] | select(.type=="EDS")' { "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", "connectTimeout": "5s", "name": "default/details:9080", "outlierDetection": { "consecutiveLocalOriginFailure": 2, "splitExternalLocalOriginErrors": true }, "type": "EDS", "typedExtensionProtocolOptions": { "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": { "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions", "useDownstreamProtocolConfig": { "http2ProtocolOptions": {} } } } } { "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", "connectTimeout": "5s", "name": "default/productpage:9080", "outlierDetection": { "consecutiveLocalOriginFailure": 2, "splitExternalLocalOriginErrors": true }, "type": "EDS", "typedExtensionProtocolOptions": { "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": { "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions", "useDownstreamProtocolConfig": { "http2ProtocolOptions": {} } } } } If everything is configured correctly, you will be able to see the flows from ``world`` (identity 2), ``ingress`` (identity 8) and your backend pod as per below. .. code-block:: shell-session # Flows for world identity $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 2 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1) Jul 7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.97.60.117/details/1)) # Flows for Ingress identity (e.g. 
envoy proxy) $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 8 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN) Jul 7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH) # Flows for backend pod, the identity can be retrieved via cilium identity list command $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 48847 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN) Jul 7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1) Jul 7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.97.60.117/details/1)) Jul 7 13:08:16.757: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, FIN) Jul 7 13:08:16.757: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) # Sample output of cilium-dbg monitor $ ksysex ds/cilium -- cilium-dbg monitor level=info msg="Initializing dissection cache..." 
subsys=monitor -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state new ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp SYN -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp SYN, ACK -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK -> Request http from 0 ([reserved:world]) to 212 ([k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details]), identity 2->61131, verdict Forwarded GET http://10.99.74.157/details/1 => 0 -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp ACK -> Response http to 0 ([reserved:world]) from 212 ([k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details]), identity 61131->2, verdict Forwarded GET http://10.99.74.157/details/1 => 200
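When the proxy responds but the backend cluster still reports ``no healthy upstream`` (as in the ``details`` example above), it is worth confirming that the Kubernetes Service actually has ready endpoints and that the Cilium datapath knows about them. A minimal sketch, assuming the bookinfo ``details`` Service in the ``default`` namespace used throughout this page:

.. code-block:: shell-session

    # The Service should list at least one ready endpoint address
    $ kubectl get endpoints details

    # The same backends should appear in Cilium's service table on the node
    $ kubectl exec -n kube-system ds/cilium -- cilium-dbg service list | grep 9080

If the endpoints are missing, fix the backend Deployment or the Service selector first; the Envoy configuration shown above can only be as healthy as the endpoints behind it.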
****************************
Service Mesh Troubleshooting
****************************

Install the Cilium CLI
======================

.. include:: ../installation/cli-download.rst

Generic
=======

#. Validate that the ``ds/cilium`` as well as the ``deployment/cilium-operator`` pods are
   healthy and ready.

   .. code-block:: shell-session

       $ cilium status

Manual Verification of Setup
============================

#. Validate that ``nodePort.enabled`` is true.

   .. code-block:: shell-session

       $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status --verbose
       ...
       KubeProxyReplacement Details:
         ...
         Services:
         - ClusterIP:      Enabled
         - NodePort:       Enabled (Range: 30000-32767)
         ...

#. Validate that the runtime values of ``enable-envoy-config`` and
   ``enable-ingress-controller`` are true. The ingress controller flag is optional if you
   only use ``CiliumEnvoyConfig`` or ``CiliumClusterwideEnvoyConfig`` CRDs.

   .. code-block:: shell-session

       $ kubectl -n kube-system get cm cilium-config -o json | egrep "enable-ingress-controller|enable-envoy-config"
               "enable-envoy-config": "true",
               "enable-ingress-controller": "true",

Ingress Troubleshooting
=======================

Internally, the Cilium Ingress controller will create one Load Balancer service, one
``CiliumEnvoyConfig`` and one dummy Endpoint resource for each Ingress resource.

.. code-block:: shell-session

    $ kubectl get ingress
    NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE
    basic-ingress   cilium   *       10.97.60.117   80      16m

    # For dedicated Load Balancer mode
    $ kubectl get service cilium-ingress-basic-ingress
    NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

    # For dedicated Load Balancer mode
    $ kubectl get cec cilium-ingress-default-basic-ingress
    NAME                                   AGE
    cilium-ingress-default-basic-ingress   18m

    # For shared Load Balancer mode
    $ kubectl get services -n kube-system cilium-ingress
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    cilium-ingress   LoadBalancer   10.111.109.99   10.111.109.99   80:32690/TCP,443:31566/TCP   38m

    # For shared Load Balancer mode
    $ kubectl get cec -n kube-system cilium-ingress
    NAME             AGE
    cilium-ingress   15m

#. Validate that the Load Balancer service has either an external IP or FQDN assigned. If
   it's not available after a long time, please check the Load Balancer related
   documentation from your respective cloud provider.

#. Check if there is any warning or error message while Cilium is trying to provision the
   ``CiliumEnvoyConfig`` resource. This is unlikely to happen for CEC resources
   originating from the Cilium Ingress controller.

.. include:: ../network/servicemesh/warning.rst

Connectivity Troubleshooting
============================

This section is for troubleshooting connectivity issues mainly for Ingress resources, but
the same steps can be applied to manually configured ``CiliumEnvoyConfig`` resources as
well.

It's best to have ``debug`` and ``debug-verbose`` enabled with the below values. Kindly
note that any change of Cilium flags requires a restart of the Cilium agent and operator.

.. code-block:: shell-session

    $ kubectl get -n kube-system cm cilium-config -o json | grep "debug"
            "debug": "true",
            "debug-verbose": "flow",

.. note::

    The originating source IP is used for enforcing ingress traffic.

The request normally traverses from the LoadBalancer service to a pre-assigned port of
your node, then gets forwarded to the Cilium Envoy proxy, and finally gets proxied to the
actual backend service.

The first step, from the cloud Load Balancer to the node port, is out of Cilium's scope.
Please check related documentation from your respective cloud provider to make sure your
clusters are configured properly.

The second step can be checked by connecting with SSH to your underlying host and sending
a similar request to localhost on the relevant port:

.. code-block:: shell-session

    $ kubectl get service cilium-ingress-basic-ingress
    NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

    # After ssh to any of the k8s nodes
    $ curl -v http://localhost:31911/
    *   Trying 127.0.0.1:31911...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 31911 (#0)
    > GET / HTTP/1.1
    > Host: localhost:31911
    > User-Agent: curl/7.68.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 503 Service Unavailable
    < content-length: 19
    < content-type: text/plain
    < date: Thu, 07 Jul 2022 12:25:56 GMT
    < server: envoy
    <
    * Connection #0 to host localhost left intact

    # Flows for world identity
    $ kubectl -n kube-system exec ds/cilium -- hubble observe -f --identity 2
    Jul  7 12:28:27.970: 127.0.0.1:54704 <- 127.0.0.1:13681 http-response FORWARDED (HTTP/1.1 503 0ms (GET http://localhost:31911/))

Alternatively, you can also send a request directly to the Envoy proxy port. For Ingress,
the proxy port is randomly assigned by the Cilium Ingress controller. For manually
configured ``CiliumEnvoyConfig`` resources, the proxy port is retrieved directly from the
spec.

.. code-block:: shell-session

    $ kubectl logs -f -n kube-system ds/cilium --timestamps | egrep "envoy|proxy"
    2022-07-08T08:05:13.986649816Z level=info msg="Adding new proxy port rules for cilium-ingress-default-basic-ingress:19672" proxy port name=cilium-ingress-default-basic-ingress subsys=proxy

    # After ssh to any of the k8s nodes, send a request to the Envoy proxy port directly
    $ curl -v http://localhost:19672/
    *   Trying 127.0.0.1:19672...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 19672 (#0)
    > GET / HTTP/1.1
    > Host: localhost:19672
    > User-Agent: curl/7.68.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 503 Service Unavailable
    < content-length: 19
    < content-type: text/plain
    < date: Fri, 08 Jul 2022 08:12:35 GMT
    < server: envoy

If you see a response similar to the above, it means that the request is being redirected
to the proxy successfully. The http response will have one special header
``server: envoy`` accordingly. The same can be observed from the ``hubble observe``
command :ref:`hubble_troubleshooting`.

The most common root cause is either that the Cilium Envoy proxy is not running on the
node, or there is some other issue with CEC resource provisioning.

.. code-block:: shell-session

    $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status
    ...
    Controller Status:       49/49 healthy
    Proxy Status:            OK, ip 10.0.0.25, 6 redirects active on ports 10000-20000
    Global Identity Range:   min 256, max 65535
    ...

Assuming that the above steps are done successfully, you can proceed to send a request
via an external IP or via FQDN next.

Double-check whether your backend service is up and healthy. The Envoy Discovery Service
(EDS) has a name that follows the convention ``<namespace>/<service-name>:<port>``.

.. code-block:: shell-session

    $ LB_IP=$(kubectl get ingress basic-ingress -o json | jq '.status.loadBalancer.ingress[0].ip' | jq -r .)
    $ curl -s http://"$LB_IP"/details/1
    no healthy upstream

    $ kubectl get cec cilium-ingress-default-basic-ingress -o json | jq '.spec.resources[] | select(.type=="EDS")'
    {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "connectTimeout": "5s",
      "name": "default/details:9080",
      "outlierDetection": {
        "consecutiveLocalOriginFailure": 2,
        "splitExternalLocalOriginErrors": true
      },
      "type": "EDS",
      "typedExtensionProtocolOptions": {
        "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
          "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
          "useDownstreamProtocolConfig": {
            "http2ProtocolOptions": {}
          }
        }
      }
    }
    {
      "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
      "connectTimeout": "5s",
      "name": "default/productpage:9080",
      "outlierDetection": {
        "consecutiveLocalOriginFailure": 2,
        "splitExternalLocalOriginErrors": true
      },
      "type": "EDS",
      "typedExtensionProtocolOptions": {
        "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
          "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
          "useDownstreamProtocolConfig": {
            "http2ProtocolOptions": {}
          }
        }
      }
    }

If everything is configured correctly, you will be able to see the flows from world
identity (2), ingress identity (8) and your backend pod as per below.

.. code-block:: shell-session

    # Flows for world identity
    $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 2 -f
    Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
    Jul  7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1)
    Jul  7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.97.60.117/details/1))

    # Flows for Ingress identity (e.g. envoy proxy)
    $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 8 -f
    Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN)
    Jul  7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
    Jul  7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH)

    # Flows for backend pod, the identity can be retrieved via cilium identity list command
    $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 48847 -f
    Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN)
    Jul  7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK)
    Jul  7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
    Jul  7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1)
    Jul  7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH)
    Jul  7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.97.60.117/details/1))
    Jul  7 13:08:16.757: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, FIN)
    Jul  7 13:08:16.757: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)

    # Sample output of cilium-dbg monitor
    $ ksysex ds/cilium -- cilium-dbg monitor
    level=info msg="Initializing dissection cache..." subsys=monitor
    -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state new ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp SYN
    -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp SYN, ACK
    -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK
    -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK
    -> Request http from 0 ([reserved:world]) to 212 ([k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details]), identity 2->61131, verdict Forwarded GET http://10.99.74.157/details/1 => 0
    -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp ACK
    -> Response http to 0 ([reserved:world]) from 212 ([k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details]), identity 61131->2, verdict Forwarded GET http://10.99.74.157/details/1 => 200
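If you find yourself repeating these checks, the individual commands above can be
combined into a small script. The following is a minimal, illustrative sketch built only
from the ``kubectl``, ``jq``-free and ``curl`` invocations already shown in this section;
``basic-ingress`` and ``details`` are the example Ingress and backend Service names from
the Bookinfo setup used above, so substitute your own resource names and request path.

.. code-block:: bash

    #!/usr/bin/env bash
    # Illustrative Ingress connectivity sanity check (not part of Cilium itself).
    set -euo pipefail

    INGRESS="${1:-basic-ingress}"   # Ingress resource to inspect
    BACKEND="${2:-details}"         # backend Service referenced by the Ingress

    # 1. Does the Ingress have an address assigned?
    LB_IP=$(kubectl get ingress "${INGRESS}" -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "Ingress ${INGRESS} address: ${LB_IP:-<none>}"

    # 2. Does the backend Service have ready endpoints?
    kubectl get endpoints "${BACKEND}"

    # 3. Does a request through the load balancer reach a healthy backend?
    curl -s -o /dev/null -w "HTTP status via load balancer: %{http_code}\n" "http://${LB_IP}/details/1"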
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _admin_upgrade:

*************
Upgrade Guide
*************

.. _upgrade_general:

This upgrade guide is intended for Cilium running on Kubernetes. If you have questions,
feel free to ping us on `Cilium Slack`_.

.. include:: upgrade-warning.rst

.. _pre_flight:

Running pre-flight check (Required)
===================================

When rolling out an upgrade with Kubernetes, Kubernetes will first terminate the pod
followed by pulling the new image version and then finally spin up the new image. In
order to reduce the downtime of the agent and to prevent ``ErrImagePull`` errors during
upgrade, the pre-flight check pre-pulls the new image version. If you are running in
:ref:`kubeproxy-free` mode, you must also pass on the Kubernetes API Server IP and/or the
Kubernetes API Server Port when generating the ``cilium-preflight.yaml`` file.

.. tabs::

    .. group-tab:: kubectl

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              > cilium-preflight.yaml
            kubectl create -f cilium-preflight.yaml

    .. group-tab:: Helm

        .. parsed-literal::

            helm install cilium-preflight |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false

    .. group-tab:: kubectl (kubeproxy-free)

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              --set k8sServiceHost=API_SERVER_IP \\
              --set k8sServicePort=API_SERVER_PORT \\
              > cilium-preflight.yaml
            kubectl create -f cilium-preflight.yaml

    .. group-tab:: Helm (kubeproxy-free)

        .. parsed-literal::

            helm install cilium-preflight |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              --set k8sServiceHost=API_SERVER_IP \\
              --set k8sServicePort=API_SERVER_PORT

After applying the ``cilium-preflight.yaml``, ensure that the number of READY pods is the
same as the number of Cilium pods running.

.. code-block:: shell-session

    $ kubectl get daemonset -n kube-system | sed -n '1p;/cilium/p'
    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    cilium                    2         2         2       2            2           <none>          1h20m
    cilium-pre-flight-check   2         2         2       2            2           <none>          7m15s

Once the numbers of READY pods are equal, make sure the Cilium pre-flight deployment is
also marked as READY 1/1. If it shows READY 0/1, consult the :ref:`cnp_validation`
section and resolve issues with the deployment before continuing with the upgrade.

.. code-block:: shell-session

    $ kubectl get deployment -n kube-system cilium-pre-flight-check -w
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   1/1     1            0           12s

.. _cleanup_preflight_check:

Clean up pre-flight check
-------------------------

Once the number of READY for the preflight :term:`DaemonSet` is the same as the number of
cilium pods running and the preflight ``Deployment`` is marked as READY ``1/1``, you can
delete the cilium-preflight and proceed with the upgrade.

.. tabs::

    .. group-tab:: kubectl

        .. code-block:: shell-session

            kubectl delete -f cilium-preflight.yaml

    .. group-tab:: Helm

        .. code-block:: shell-session

            helm delete cilium-preflight --namespace=kube-system

.. _upgrade_minor:

Upgrading Cilium
================

During normal cluster operations, all Cilium components should run the same version.
Upgrading just one of them (e.g., upgrading the agent without upgrading the operator)
could result in unexpected cluster behavior. The following steps will describe how to
upgrade all of the components from one stable release to a later stable release.

.. include:: upgrade-warning.rst

Step 1: Upgrade to latest patch version
---------------------------------------

When upgrading from one minor release to another minor release, for example 1.x to 1.y,
it is recommended to upgrade to the `latest patch release
<https://github.com/cilium/cilium#stable-releases>`__ for a Cilium release series first.
Upgrading to the latest patch release ensures the most seamless experience if a rollback
is required following the minor release upgrade. The upgrade guides for previous versions
can be found for each minor version at the bottom left corner.

Step 2: Use Helm to Upgrade your Cilium deployment
--------------------------------------------------

:term:`Helm` can be used to either upgrade Cilium directly or to generate a new set of
YAML files that can be used to upgrade an existing deployment via ``kubectl``. By
default, Helm will generate the new templates using the default values files packaged
with each new release. You still need to ensure that you are specifying the equivalent
options as used for the initial deployment, either by specifying them at the command line
or by committing the values to a YAML file.

.. include:: ../installation/k8s-install-download-release.rst

To minimize datapath disruption during the upgrade, the ``upgradeCompatibility`` option
should be set to the initial Cilium version which was installed in this cluster.

.. tabs::

    .. group-tab:: kubectl

        Generate the required YAML file and deploy it:

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --set upgradeCompatibility=1.X \\
              --namespace kube-system \\
              > cilium.yaml
            kubectl apply -f cilium.yaml

    .. group-tab:: Helm

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm upgrade cilium |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set upgradeCompatibility=1.X

.. note::

   Instead of using ``--set``, you can also save the values relative to your deployment
   in a YAML file and use it to regenerate the YAML for the latest Cilium version.

Running any of the previous commands will overwrite the existing cluster's
:term:`ConfigMap` so it is critical to preserve any existing options, either by setting
them at the command line or storing them in a YAML file, similar to:

.. code-block:: yaml

   agent: true
   upgradeCompatibility: "1.8"
   ipam:
     mode: "kubernetes"
   k8sServiceHost: "API_SERVER_IP"
   k8sServicePort: "API_SERVER_PORT"
   kubeProxyReplacement: "true"

You can then upgrade using this values file by running:

.. parsed-literal::

   helm upgrade cilium |CHART_RELEASE| \\
     --namespace=kube-system \\
     -f my-values.yaml

When upgrading from one minor release to another minor release using ``helm upgrade``, do
*not* use Helm's ``--reuse-values`` flag. The ``--reuse-values`` flag ignores any newly
introduced values present in the new release and thus may cause the Helm template to
render incorrectly. Instead, if you want to reuse the values from your existing
installation, save the old values in a values file, check the file for any renamed or
deprecated values, and then pass it to the ``helm upgrade`` command as described above.
You can retrieve and save the values from an existing installation with the following command: .. code-block:: shell-session helm get values cilium --namespace=kube-system -o yaml > old-values.yaml The ``--reuse-values`` flag may only be safely used if the Cilium chart version remains unchanged, for example when ``helm upgrade`` is used to apply configuration changes without upgrading Cilium. Step 3: Rolling Back -------------------- Occasionally, it may be necessary to undo the rollout because a step was missed or something went wrong during upgrade. To undo the rollout run: .. tabs:: .. group-tab:: kubectl .. code-block:: shell-session kubectl rollout undo daemonset/cilium -n kube-system .. group-tab:: Helm .. code-block:: shell-session helm history cilium --namespace=kube-system helm rollback cilium [REVISION] --namespace=kube-system This will revert the latest changes to the Cilium ``DaemonSet`` and return Cilium to the state it was in prior to the upgrade. .. note:: When rolling back after new features of the new minor version have already been consumed, consult the :ref:`version_notes` to check and prepare for incompatible feature use before downgrading/rolling back. This step is only required after new functionality introduced in the new minor version has already been explicitly used by creating new resources or by opting into new features via the :term:`ConfigMap`. .. _version_notes: .. _upgrade_version_specifics: Version Specific Notes ====================== This section details the upgrade notes specific to |CURRENT_RELEASE|. Read them carefully and take the suggested actions before upgrading Cilium to |CURRENT_RELEASE|. For upgrades to earlier releases, see the :prev-docs:`upgrade notes to the previous version <operations/upgrade/#upgrade-notes>`. The only tested upgrade and rollback path is between consecutive minor releases. Always perform upgrades and rollbacks between one minor release at a time. Additionally, always update to the latest patch release of your current version before attempting an upgrade. Tested upgrades are expected to have minimal to no impact on new and existing connections matched by either no Network Policies, or L3/L4 Network Policies only. Any traffic flowing via user space proxies (for example, because an L7 policy is in place, or using Ingress/Gateway API) will be disrupted during upgrade. Endpoints communicating via the proxy must reconnect to re-establish connections. .. _current_release_required_changes: .. _1.17_upgrade_notes: 1.17 Upgrade Notes ------------------ * Operating Cilium in ``--datapath-mode=lb-only`` for plain Docker mode now requires to add an additional ``--enable-k8s=false`` to the command line, otherwise it is assumed that Kubernetes is present. * The Kubernetes clients used by Cilium Agent and Cilium Operator now have separately configurable rate limits. The default rate limit for Cilium Operator K8s clients has been increased to 100 QPS/200 Burst. To configure the rate limit for Cilium Operator, use the ``--operator-k8s-client-qps`` and ``--operator-k8s-client-burst`` flags or the corresponding Helm values. * Support for Consul, deprecated since v1.12, has been removed. * Cilium now supports services protocol differentiation, which allows the agent to distinguish two services on the same port with different protocols (e.g. TCP and UDP). This feature, enabled by default, can be controlled with the ``--bpf-lb-proto-diff`` flag. 
After the upgrade, existing services without a protocol set will be preserved as such, to avoid any connection disruptions, and will need to be deleted and recreated in order for their protocol to be taken into account by the agent. In case of downgrades to a version that doesn't support services protocol differentiation, existing services with the protocol set will be deleted and recreated, without the protocol, by the agent, causing connection disruptions for such services. * MTU auto-detection is now continuous during agent lifetime, changing device MTU no longer requires restarting the agent to pick up the new MTU. * MTU auto-detection will now use the lowest MTU of all external interfaces. Before, only the primary interface was considered. One exception to this is in ENI mode where the secondary interfaces are not considered for MTU auto-detection. MTU can still be configured manually via the ``MTU`` helm option, ``--mtu`` agent flag or ``mtu`` option in CNI configuration. * Support for L7 protocol visibility using Pod annotations (``policy.cilium.io/proxy-visibility``), deprecated since v1.15, has been removed. * The Cilium cluster name validation cannot be bypassed anymore, both for the local and remote clusters. The cluster name is strictly enforced to consist of at most 32 lower case alphanumeric characters and '-', start and end with an alphanumeric character. * Cilium could previously be run in a configuration where the Etcd instances that distribute Cilium state between nodes would be managed in pod network by Cilium itself. This support, which had been previously deprecated as complicated and error prone, has now been removed. Refer to :ref:`k8s_install_etcd` for alternatives for running Cilium with Etcd. * For IPsec, support for a single key has been removed. Per-tunnel keys will now be used regardless of the presence of the ``+`` sign in the secret. * The option to run a synchronous probe using ``cilium-health status --probe`` is no longer supported, and is now a hidden option that returns the results of the most recent cached probe. It will be removed in a future release. * The Cilium status API now reports the KVStore subsystem with ``Disabled`` state when disabled, instead of ``OK`` state and ``Disabled`` message. * Support for ``metallb-bgp``, deprecated since 1.14, has been removed. Removed Options ~~~~~~~~~~~~~~~ * The previously deprecated ``clustermesh-ip-identities-sync-timeout`` flag has been removed in favor of ``clustermesh-sync-timeout``. * The previously deprecated built-in WireGuard userspace-mode fallback (Helm ``wireguard.userspaceFallback``) has been removed. Users of WireGuard transparent encryption are required to use a Linux kernel with WireGuard support. * The previously deprecated ``metallb-bgp`` flags ``bgp-config-path``, ``bgp-announce-lb-ip`` and ``bgp-announce-pod-cidr`` have been removed. Users are now required to use Cilium BGP control plane for BGP advertisements. Deprecated Options ~~~~~~~~~~~~~~~~~~ * The high-scale mode for ipcache has been deprecated and will be removed in v1.18. * The hubble-relay flag ``--dial-timeout`` has been deprecated (now a no-op) and will be removed in Cilium 1.18. 
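Before upgrading, you can quickly check whether your existing configuration still
references any of the removed or deprecated options listed above. The following is a
minimal sketch, assuming Cilium is configured through the ``cilium-config`` ConfigMap;
the grep pattern only covers the flags named in this section, so extend it to match your
own setup (options set purely via Helm values may appear under different names).

.. code-block:: shell-session

    $ kubectl get configmap -n kube-system cilium-config -o yaml | \
        grep -E 'clustermesh-ip-identities-sync-timeout|bgp-config-path|bgp-announce-lb-ip|bgp-announce-pod-cidr' \
        || echo "no removed options referenced"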
Helm Options
~~~~~~~~~~~~

* The Helm options ``hubble.tls.server.cert``, ``hubble.tls.server.key``,
  ``hubble.relay.tls.client.cert``, ``hubble.relay.tls.client.key``,
  ``hubble.relay.tls.server.cert``, ``hubble.relay.tls.server.key``,
  ``hubble.ui.tls.client.cert``, and ``hubble.ui.tls.client.key`` have been deprecated in
  favor of the associated ``existingSecret`` options and will be removed in a future
  release.

* The default value of ``hubble.tls.auto.certValidityDuration`` has been lowered from
  1095 days to 365 days because recent versions of MacOS will fail to validate
  certificates with expirations longer than 825 days.

* The Helm option ``hubble.relay.dialTimeout`` has been deprecated (now a no-op) and will
  be removed in Cilium 1.18.

* The ``metallb-bgp`` integration Helm options ``bgp.enabled``, ``bgp.announce.podCIDR``,
  and ``bgp.announce.loadbalancerIP`` have been removed. Users are now required to use
  Cilium BGP control plane options available under ``bgpControlPlane`` for BGP
  announcements.

* The default value of ``dnsProxy.endpointMaxIpPerHostname`` and its corresponding agent
  option has been increased from 50 to 1000 to reflect improved scaling of toFQDNs
  policies and to better handle domains which return a large number of IPs with short
  TTLs.

Agent Options
~~~~~~~~~~~~~

* The ``CONNTRACK_LOCAL`` option has been deprecated and will be removed in a future
  release.

Bugtool Options
~~~~~~~~~~~~~~~

* The flag ``k8s-mode`` (and related flags ``cilium-agent-container-name``,
  ``k8s-namespace`` and ``k8s-label``) has been deprecated and will be removed in
  Cilium 1.18. Cilium CLI should be used to gather a sysdump from a K8s cluster.

Added Metrics
~~~~~~~~~~~~~

* ``cilium_node_health_connectivity_status``
* ``cilium_node_health_connectivity_latency_seconds``
* ``cilium_operator_unmanaged_pods``
* ``cilium_policy_selector_match_count_max``
* ``cilium_identity_cache_timer_duration``
* ``cilium_identity_cache_timer_trigger_latency``
* ``cilium_identity_cache_timer_trigger_folds``

Removed Metrics
~~~~~~~~~~~~~~~

* ``cilium_cidrgroup_translation_time_stats_seconds`` has been removed, as the measured
  code path no longer exists.
* ``cilium_triggers_policy_update_total`` has been removed.
* ``cilium_triggers_policy_update_folds`` has been removed.
* ``cilium_triggers_policy_update_call_duration`` has been removed.

Changed Metrics
~~~~~~~~~~~~~~~

Deprecated Metrics
~~~~~~~~~~~~~~~~~~

* ``cilium_node_connectivity_status`` is now deprecated. Please use
  ``cilium_node_health_connectivity_status`` instead.
* ``cilium_node_connectivity_latency_seconds`` is now deprecated. Please use
  ``cilium_node_health_connectivity_latency_seconds`` instead.

Hubble CLI
~~~~~~~~~~

* The ``--cluster`` behavior changed to show flows emitted from nodes outside of the
  provided cluster name (either coming from or going to the target cluster). This change
  brings consistency between the ``--cluster`` and ``--namespace`` flags and removes the
  incompatibility between the ``--cluster`` and ``--node-name`` flags. The previous
  behavior of ``--cluster foo`` can be reproduced with ``--node-name foo/`` (shows all
  flows emitted from a node in cluster ``foo``).

Advanced
========

Upgrade Impact
--------------

Upgrades are designed to have minimal impact on your running deployment. Networking
connectivity, policy enforcement and load balancing will remain functional in general.
The following is a list of operations that will not be available during the upgrade:

* API-aware policy rules are enforced in user space proxies and are running as part of
  the Cilium pod. Upgrading Cilium causes the proxy to restart, which results in a
  connectivity outage and causes the connection to reset.

* Existing policy will remain effective but implementation of new policy rules will be
  postponed to after the upgrade has been completed on a particular node.

* Monitoring components such as ``cilium-dbg monitor`` will experience a brief outage
  while the Cilium pod is restarting. Events are queued up and read after the upgrade.
  If the number of events exceeds the event buffer size, events will be lost.

.. _upgrade_configmap:

Rebasing a ConfigMap
--------------------

This section describes the procedure to rebase an existing :term:`ConfigMap` to the
template of another version.

Export the current ConfigMap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

        $ kubectl get configmap -n kube-system cilium-config -o yaml --export > cilium-cm-old.yaml
        $ cat ./cilium-cm-old.yaml
        apiVersion: v1
        data:
          clean-cilium-state: "false"
          debug: "true"
          disable-ipv4: "false"
          etcd-config: |-
            ---
            endpoints:
            - https://192.168.60.11:2379
            #
            # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
            # and create a kubernetes secret by following the tutorial in
            # https://cilium.link/etcd-config
            trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
            #
            # In case you want client to server authentication, uncomment the following
            # lines and add the certificate and key in cilium-etcd-secrets below
            key-file: '/var/lib/etcd-secrets/etcd-client.key'
            cert-file: '/var/lib/etcd-secrets/etcd-client.crt'

        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: cilium-config
          selfLink: /api/v1/namespaces/kube-system/configmaps/cilium-config

In the :term:`ConfigMap` above, we can verify that Cilium is using ``debug`` with
``true``, it has an etcd endpoint running with `TLS
<https://etcd.io/docs/latest/op-guide/security/>`_, and the etcd is set up to have
`client to server authentication
<https://etcd.io/docs/latest/op-guide/security/#example-2-client-to-server-authentication-with-https-client-certificates>`_.

Generate the latest ConfigMap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: shell-session

    helm template cilium \
      --namespace=kube-system \
      --set agent=false \
      --set config.enabled=true \
      --set operator.enabled=false \
      > cilium-configmap.yaml

Add new options
~~~~~~~~~~~~~~~

Add the new options manually to your old :term:`ConfigMap`, and make the necessary
changes.

In this example, the ``debug`` option is meant to be kept with ``true``, the
``etcd-config`` is kept unchanged, and ``monitor-aggregation`` is a new option, but after
reading the :ref:`version_notes` the value was kept unchanged from the default value.
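To spot which keys are new or renamed before editing by hand, you can diff the key names
of the exported ConfigMap against the freshly generated template. This is a minimal
sketch, assuming ``yq`` v4 is available; it reuses the ``cilium-cm-old.yaml`` and
``cilium-configmap.yaml`` files produced in the previous steps.

.. code-block:: shell-session

    $ yq '.data | keys' cilium-cm-old.yaml > old-keys.txt
    $ yq 'select(.kind == "ConfigMap") | .data | keys' cilium-configmap.yaml > new-keys.txt
    $ diff old-keys.txt new-keys.txt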
After making the necessary changes, the old :term:`ConfigMap` was migrated with the new options while keeping the configuration that we wanted: :: $ cat ./cilium-cm-old.yaml apiVersion: v1 data: debug: "true" disable-ipv4: "false" # If you want to clean cilium state; change this value to true clean-cilium-state: "false" monitor-aggregation: "medium" etcd-config: |- --- endpoints: - https://192.168.60.11:2379 # # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line # and create a kubernetes secret by following the tutorial in # https://cilium.link/etcd-config trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt' # # In case you want client to server authentication, uncomment the following # lines and add the certificate and key in cilium-etcd-secrets below key-file: '/var/lib/etcd-secrets/etcd-client.key' cert-file: '/var/lib/etcd-secrets/etcd-client.crt' kind: ConfigMap metadata: creationTimestamp: null name: cilium-config selfLink: /api/v1/namespaces/kube-system/configmaps/cilium-config Apply new ConfigMap ~~~~~~~~~~~~~~~~~~~ After adding the options, manually save the file with your changes and install the :term:`ConfigMap` in the ``kube-system`` namespace of your cluster. .. code-block:: shell-session $ kubectl apply -n kube-system -f ./cilium-cm-old.yaml As the :term:`ConfigMap` is successfully upgraded we can start upgrading Cilium ``DaemonSet`` and ``RBAC`` which will pick up the latest configuration from the :term:`ConfigMap`. Migrating from kvstore-backed identities to Kubernetes CRD-backed identities ---------------------------------------------------------------------------- Beginning with Cilium 1.6, Kubernetes CRD-backed security identities can be used for smaller clusters. Along with other changes in 1.6, this allows kvstore-free operation if desired. It is possible to migrate identities from an existing kvstore deployment to CRD-backed identities. This minimizes disruptions to traffic as the update rolls out through the cluster. Migration ~~~~~~~~~ When identities change, existing connections can be disrupted while Cilium initializes and synchronizes with the shared identity store. The disruption occurs when new numeric identities are used for existing pods on some instances and others are used on others. When converting to CRD-backed identities, it is possible to pre-allocate CRD identities so that the numeric identities match those in the kvstore. This allows new and old Cilium instances in the rollout to agree. There are two ways to achieve this: you can either run a one-off ``cilium preflight migrate-identity`` script which will perform a point-in-time copy of all identities from the kvstore to CRDs (added in Cilium 1.6), or use the "Double Write" identity allocation mode which will have Cilium manage identities in both the kvstore and CRD at the same time for a seamless migration (added in Cilium 1.17). Migration with the ``cilium preflight migrate-identity`` script ############################################################### The ``cilium preflight migrate-identity`` script is a one-off tool that can be used to copy identities from the kvstore into CRDs. It has a couple of limitations: * If an identity is created in the kvstore after the one-off migration has been completed, it will not be copied into a CRD. This means that you need to perform the migration on a cluster with no identity churn. 
* There is no easy way to revert back to ``--identity-allocation-mode=kvstore`` if something goes wrong after Cilium has been migrated to ``--identity-allocation-mode=crd`` If these limitations are not acceptable, it is recommended to use the ":ref:`Double Write <double_write_migration>`" identity allocation mode instead. The following steps show an example of performing the migration using the ``cilium preflight migrate-identity`` script. It is safe to re-run the command if desired. It will identify already allocated identities or ones that cannot be migrated. Note that identity ``34815`` is migrated, ``17003`` is already migrated, and ``11730`` has a conflict and a new ID allocated for those labels. The steps below assume a stable cluster with no new identities created during the rollout. Once Cilium using CRD-backed identities is running, it may begin allocating identities in a way that conflicts with older ones in the kvstore. The cilium preflight manifest requires etcd support and can be built with: .. code-block:: shell-session helm template cilium \ --namespace=kube-system \ --set preflight.enabled=true \ --set agent=false \ --set config.enabled=false \ --set operator.enabled=false \ --set etcd.enabled=true \ --set etcd.ssl=true \ > cilium-preflight.yaml kubectl create -f cilium-preflight.yaml Example migration ~~~~~~~~~~~~~~~~~ .. code-block:: shell-session $ kubectl exec -n kube-system cilium-pre-flight-check-1234 -- cilium-dbg preflight migrate-identity INFO[0000] Setting up kvstore client INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore INFO[0000] Setting up kubernetes client INFO[0000] Establishing connection to apiserver host="https://192.168.60.11:6443" subsys=k8s INFO[0000] Connected to apiserver subsys=k8s INFO[0000] Got lease ID 29c66c67db8870c8 subsys=kvstore INFO[0000] Got lock lease ID 29c66c67db8870ca subsys=kvstore INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13 INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date name=CiliumNetworkPolicy/v2 subsys=k8s INFO[0000] Updating CRD (CustomResourceDefinition)... name=v2.CiliumEndpoint subsys=k8s INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumEndpoint subsys=k8s INFO[0001] Updating CRD (CustomResourceDefinition)... name=v2.CiliumNode subsys=k8s INFO[0002] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumNode subsys=k8s INFO[0002] Updating CRD (CustomResourceDefinition)... name=v2.CiliumIdentity subsys=k8s INFO[0003] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumIdentity subsys=k8s INFO[0003] Listing identities in kvstore INFO[0003] Migrating identities to CRD INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination labels="map[]" subsys=crd-allocator INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination labels="map[]" subsys=crd-allocator INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. 
    All labels will still be used in identity determination labels="map[]" subsys=crd-allocator
    INFO[0003] Migrated identity identity=34815 identityLabels="k8s:class=tiefighter;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;"
    WARN[0003] ID is allocated to a different key in CRD. A new ID will be allocated for the this key identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
    INFO[0003] Reusing existing global key key="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" subsys=allocator
    INFO[0003] New ID allocated for key in CRD identity=17281 identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
    INFO[0003] ID was already allocated to this key. It is already migrated identity=17003 identityLabels="k8s:class=xwing;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=alliance;"

.. note::

    It is also possible to use the ``--k8s-kubeconfig-path`` and ``--kvstore-opt``
    ``cilium`` CLI options with the preflight command. The default is to derive the
    configuration as cilium-agent does.

    .. code-block:: shell-session

        cilium preflight migrate-identity --k8s-kubeconfig-path /var/lib/cilium/cilium.kubeconfig --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd-config.yml

Once the migration is complete, confirm the endpoint identities match by listing the
endpoints stored in CRDs and in etcd:

.. code-block:: shell-session

    $ kubectl get ciliumendpoints -A # new CRD-backed endpoints
    $ kubectl exec -n kube-system cilium-1234 -- cilium-dbg endpoint list # existing etcd-backed endpoints

Clearing CRD identities
~~~~~~~~~~~~~~~~~~~~~~~

If a migration has gone wrong, it is possible to start with a clean slate. Ensure that no
Cilium instances are running with ``--identity-allocation-mode=crd`` and execute:

.. code-block:: shell-session

    $ kubectl delete ciliumid --all

.. _double_write_migration:

Migration with the "Double Write" identity allocation mode
##########################################################

.. include:: ../beta.rst

The "Double Write" Identity Allocation Mode allows Cilium to allocate identities as
KVStore values *and* as CRDs at the same time. This mode also has two versions: one where
the source of truth comes from the kvstore (``--identity-allocation-mode=doublewrite-readkvstore``),
and one where the source of truth comes from CRDs (``--identity-allocation-mode=doublewrite-readcrd``).

.. note:: "Double Write" mode is not compatible with Consul as the KVStore.

The high-level migration plan looks as follows:

#. Starting state: Cilium is running in KVStore mode.
#. Switch Cilium to "Double Write" mode with all reads happening from the KVStore. This
   is almost the same as the pure KVStore mode with the only difference being that all
   identities are duplicated as CRDs but are not used.
#. Switch Cilium to "Double Write" mode with all reads happening from CRDs. This is
   equivalent to Cilium running in pure CRD mode but identities will still be updated in
   the KVStore to allow for the possibility of a fast rollback.
#. Switch Cilium to CRD mode. The KVStore will no longer be used and will be ready for
   decommission.

This will allow you to perform a gradual and seamless migration with the possibility of a
fast rollback at steps two or three. Furthermore, when the "Double Write" mode is
enabled, the Operator will emit additional metrics to help monitor the migration
progress. These metrics can be used for alerting about identity inconsistencies between
the KVStore and CRDs.

Note that you can also use this to migrate from CRD to KVStore mode. All operations
simply need to be repeated in reverse order.

Rollout Instructions
~~~~~~~~~~~~~~~~~~~~

#. Re-deploy first the Operator and then the Agents with
   ``--identity-allocation-mode=doublewrite-readkvstore``.

#. Monitor the Operator metrics and logs to ensure that all identities have converged
   between the KVStore and CRDs. The relevant metrics emitted by the Operator are:

   * ``cilium_operator_identity_crd_total_count`` and
     ``cilium_operator_identity_kvstore_total_count`` report the total number of
     identities in CRDs and KVStore respectively.

   * ``cilium_operator_identity_crd_only_count`` and
     ``cilium_operator_identity_kvstore_only_count`` report the number of identities
     that are only in CRDs or only in the KVStore respectively, to help detect
     inconsistencies.

   In case further investigation is needed, the Operator logs will contain detailed
   information about the discrepancies between KVStore and CRD identities.

   Note that Garbage Collection for KVStore identities and CRD identities happens at
   slightly different times, so it is possible to see discrepancies in the metrics for
   certain periods of time, depending on ``--identity-gc-interval`` and
   ``--identity-heartbeat-timeout`` settings.

#. Once all identities have converged, re-deploy the Operator and the Agents with
   ``--identity-allocation-mode=doublewrite-readcrd``. This will cause Cilium to read
   identities only from CRDs, but continue to write them to the KVStore.

#. Once you are ready to decommission the KVStore, re-deploy first the Agents and then
   the Operator with ``--identity-allocation-mode=crd``. This will make Cilium read and
   write identities only to CRDs.

#. You can now decommission the KVStore.

.. _cnp_validation:

CNP Validation
--------------

Running the CNP Validator will make sure the policies deployed in the cluster are valid.
It is important to run this validation before an upgrade to make sure Cilium behaves
correctly after the upgrade. Skipping this validation might prevent Cilium from updating
its ``NodeStatus`` in those invalid Network Policies, and in the worst case scenario it
might give a false sense of security to the user if a policy is badly formatted and
Cilium is not enforcing that policy due to a bad validation schema. This CNP Validator is
automatically executed as part of the pre-flight check :ref:`pre_flight`.

Start by deploying the ``cilium-pre-flight-check`` and check if the ``Deployment`` shows
READY 1/1; if it does not, check the pod logs.

.. code-block:: shell-session

    $ kubectl get deployment -n kube-system cilium-pre-flight-check -w
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   0/1     1            0           12s

    $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator --previous
    level=info msg="Setting up kubernetes client"
    level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK!
    level=error msg="Validating CiliumNetworkPolicy 'default/cnp-update': unexpected validation error: spec.labels: Invalid value: \"string\": spec.labels in body must be of type object: \"string\""
    level=error msg="Found invalid CiliumNetworkPolicy"

In this example, we can see the ``CiliumNetworkPolicy`` in the ``default`` namespace with
the name ``cnp-update`` is not valid for the Cilium version we are trying to upgrade. To
fix this policy we need to edit it; we can do this by saving the policy locally and
modifying it. In this example, ``.spec.labels`` is set to an array of strings, which is
not correct as per the official schema.

.. code-block:: shell-session

    $ kubectl get cnp -n default cnp-update -o yaml > cnp-bad.yaml
    $ cat cnp-bad.yaml
      apiVersion: cilium.io/v2
      kind: CiliumNetworkPolicy
      [...]
      spec:
        endpointSelector:
          matchLabels:
            id: app1
        ingress:
        - fromEndpoints:
          - matchLabels:
              id: app2
          toPorts:
          - ports:
            - port: "80"
              protocol: TCP
        labels:
        - custom=true
      [...]

To fix this policy we need to set ``.spec.labels`` with the right format and commit these
changes into Kubernetes.

.. code-block:: shell-session

    $ cat cnp-bad.yaml
      apiVersion: cilium.io/v2
      kind: CiliumNetworkPolicy
      [...]
      spec:
        endpointSelector:
          matchLabels:
            id: app1
        ingress:
        - fromEndpoints:
          - matchLabels:
              id: app2
          toPorts:
          - ports:
            - port: "80"
              protocol: TCP
        labels:
        - key: "custom"
          value: "true"
      [...]
    $ kubectl apply -f ./cnp-bad.yaml

After applying the fixed policy we can delete the pod that was validating the policies so
that Kubernetes creates a new pod immediately to verify if the fixed policies are now
valid.

.. code-block:: shell-session

    $ kubectl delete pod -n kube-system -l k8s-app=cilium-pre-flight-check-deployment
    pod "cilium-pre-flight-check-86dfb69668-ngbql" deleted
    $ kubectl get deployment -n kube-system cilium-pre-flight-check
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   1/1     1            1           55m
    $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator
    level=info msg="Setting up kubernetes client"
    level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK!
    level=info msg="Validating CiliumNetworkPolicy 'default/cnp-update': OK!
    level=info msg="All CCNPs and CNPs valid!"

Once they are valid you can continue with the upgrade process. :ref:`cleanup_preflight_check`
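If you prefer not to poll manually after deleting the validator pod, the readiness check
and log retrieval above can be combined. This is a minimal sketch using only standard
``kubectl`` commands and the resource names already used in this section:

.. code-block:: shell-session

    $ kubectl -n kube-system rollout status deployment/cilium-pre-flight-check --timeout=2m
    $ kubectl -n kube-system logs deployment/cilium-pre-flight-check -c cnp-validator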
cilium
only not epub or latex or html WARNING You are looking at unreleased Cilium documentation Please use the official rendered version released here https docs cilium io admin upgrade Upgrade Guide upgrade general This upgrade guide is intended for Cilium running on Kubernetes If you have questions feel free to ping us on Cilium Slack include upgrade warning rst pre flight Running pre flight check Required When rolling out an upgrade with Kubernetes Kubernetes will first terminate the pod followed by pulling the new image version and then finally spin up the new image In order to reduce the downtime of the agent and to prevent ErrImagePull errors during upgrade the pre flight check pre pulls the new image version If you are running in ref kubeproxy free mode you must also pass on the Kubernetes API Server IP and or the Kubernetes API Server Port when generating the cilium preflight yaml file tabs group tab kubectl parsed literal helm template CHART RELEASE namespace kube system set preflight enabled true set agent false set operator enabled false cilium preflight yaml kubectl create f cilium preflight yaml group tab Helm parsed literal helm install cilium preflight CHART RELEASE namespace kube system set preflight enabled true set agent false set operator enabled false group tab kubectl kubeproxy free parsed literal helm template CHART RELEASE namespace kube system set preflight enabled true set agent false set operator enabled false set k8sServiceHost API SERVER IP set k8sServicePort API SERVER PORT cilium preflight yaml kubectl create f cilium preflight yaml group tab Helm kubeproxy free parsed literal helm install cilium preflight CHART RELEASE namespace kube system set preflight enabled true set agent false set operator enabled false set k8sServiceHost API SERVER IP set k8sServicePort API SERVER PORT After applying the cilium preflight yaml ensure that the number of READY pods is the same number of Cilium pods running code block shell session kubectl get daemonset n kube system sed n 1p cilium p NAME DESIRED CURRENT READY UP TO DATE AVAILABLE NODE SELECTOR AGE cilium 2 2 2 2 2 none 1h20m cilium pre flight check 2 2 2 2 2 none 7m15s Once the number of READY pods are equal make sure the Cilium pre flight deployment is also marked as READY 1 1 If it shows READY 0 1 consult the ref cnp validation section and resolve issues with the deployment before continuing with the upgrade code block shell session kubectl get deployment n kube system cilium pre flight check w NAME READY UP TO DATE AVAILABLE AGE cilium pre flight check 1 1 1 0 12s cleanup preflight check Clean up pre flight check Once the number of READY for the preflight term DaemonSet is the same as the number of cilium pods running and the preflight Deployment is marked as READY 1 1 you can delete the cilium preflight and proceed with the upgrade tabs group tab kubectl code block shell session kubectl delete f cilium preflight yaml group tab Helm code block shell session helm delete cilium preflight namespace kube system upgrade minor Upgrading Cilium During normal cluster operations all Cilium components should run the same version Upgrading just one of them e g upgrading the agent without upgrading the operator could result in unexpected cluster behavior The following steps will describe how to upgrade all of the components from one stable release to a later stable release include upgrade warning rst Step 1 Upgrade to latest patch version When upgrading from one minor release to another minor release for example 1 x to 1 y it is 
recommended to upgrade to the latest patch release https github com cilium cilium stable releases for a Cilium release series first Upgrading to the latest patch release ensures the most seamless experience if a rollback is required following the minor release upgrade The upgrade guides for previous versions can be found for each minor version at the bottom left corner Step 2 Use Helm to Upgrade your Cilium deployment term Helm can be used to either upgrade Cilium directly or to generate a new set of YAML files that can be used to upgrade an existing deployment via kubectl By default Helm will generate the new templates using the default values files packaged with each new release You still need to ensure that you are specifying the equivalent options as used for the initial deployment either by specifying a them at the command line or by committing the values to a YAML file include installation k8s install download release rst To minimize datapath disruption during the upgrade the upgradeCompatibility option should be set to the initial Cilium version which was installed in this cluster tabs group tab kubectl Generate the required YAML file and deploy it parsed literal helm template CHART RELEASE set upgradeCompatibility 1 X namespace kube system cilium yaml kubectl apply f cilium yaml group tab Helm Deploy Cilium release via Helm parsed literal helm upgrade cilium CHART RELEASE namespace kube system set upgradeCompatibility 1 X note Instead of using set you can also save the values relative to your deployment in a YAML file and use it to regenerate the YAML for the latest Cilium version Running any of the previous commands will overwrite the existing cluster s term ConfigMap so it is critical to preserve any existing options either by setting them at the command line or storing them in a YAML file similar to code block yaml agent true upgradeCompatibility 1 8 ipam mode kubernetes k8sServiceHost API SERVER IP k8sServicePort API SERVER PORT kubeProxyReplacement true You can then upgrade using this values file by running parsed literal helm upgrade cilium CHART RELEASE namespace kube system f my values yaml When upgrading from one minor release to another minor release using helm upgrade do not use Helm s reuse values flag The reuse values flag ignores any newly introduced values present in the new release and thus may cause the Helm template to render incorrectly Instead if you want to reuse the values from your existing installation save the old values in a values file check the file for any renamed or deprecated values and then pass it to the helm upgrade command as described above You can retrieve and save the values from an existing installation with the following command code block shell session helm get values cilium namespace kube system o yaml old values yaml The reuse values flag may only be safely used if the Cilium chart version remains unchanged for example when helm upgrade is used to apply configuration changes without upgrading Cilium Step 3 Rolling Back Occasionally it may be necessary to undo the rollout because a step was missed or something went wrong during upgrade To undo the rollout run tabs group tab kubectl code block shell session kubectl rollout undo daemonset cilium n kube system group tab Helm code block shell session helm history cilium namespace kube system helm rollback cilium REVISION namespace kube system This will revert the latest changes to the Cilium DaemonSet and return Cilium to the state it was in prior to the upgrade note When rolling back after new 
features of the new minor version have already been consumed consult the ref version notes to check and prepare for incompatible feature use before downgrading rolling back This step is only required after new functionality introduced in the new minor version has already been explicitly used by creating new resources or by opting into new features via the term ConfigMap version notes upgrade version specifics Version Specific Notes This section details the upgrade notes specific to CURRENT RELEASE Read them carefully and take the suggested actions before upgrading Cilium to CURRENT RELEASE For upgrades to earlier releases see the prev docs upgrade notes to the previous version operations upgrade upgrade notes The only tested upgrade and rollback path is between consecutive minor releases Always perform upgrades and rollbacks between one minor release at a time Additionally always update to the latest patch release of your current version before attempting an upgrade Tested upgrades are expected to have minimal to no impact on new and existing connections matched by either no Network Policies or L3 L4 Network Policies only Any traffic flowing via user space proxies for example because an L7 policy is in place or using Ingress Gateway API will be disrupted during upgrade Endpoints communicating via the proxy must reconnect to re establish connections current release required changes 1 17 upgrade notes 1 17 Upgrade Notes Operating Cilium in datapath mode lb only for plain Docker mode now requires to add an additional enable k8s false to the command line otherwise it is assumed that Kubernetes is present The Kubernetes clients used by Cilium Agent and Cilium Operator now have separately configurable rate limits The default rate limit for Cilium Operator K8s clients has been increased to 100 QPS 200 Burst To configure the rate limit for Cilium Operator use the operator k8s client qps and operator k8s client burst flags or the corresponding Helm values Support for Consul deprecated since v1 12 has been removed Cilium now supports services protocol differentiation which allows the agent to distinguish two services on the same port with different protocols e g TCP and UDP This feature enabled by default can be controlled with the bpf lb proto diff flag After the upgrade existing services without a protocol set will be preserved as such to avoid any connection disruptions and will need to be deleted and recreated in order for their protocol to be taken into account by the agent In case of downgrades to a version that doesn t support services protocol differentiation existing services with the protocol set will be deleted and recreated without the protocol by the agent causing connection disruptions for such services MTU auto detection is now continuous during agent lifetime changing device MTU no longer requires restarting the agent to pick up the new MTU MTU auto detection will now use the lowest MTU of all external interfaces Before only the primary interface was considered One exception to this is in ENI mode where the secondary interfaces are not considered for MTU auto detection MTU can still be configured manually via the MTU helm option mtu agent flag or mtu option in CNI configuration Support for L7 protocol visibility using Pod annotations policy cilium io proxy visibility deprecated since v1 15 has been removed The Cilium cluster name validation cannot be bypassed anymore both for the local and remote clusters The cluster name is strictly enforced to consist of at most 32 lower case 
alphanumeric characters and start and end with an alphanumeric character Cilium could previously be run in a configuration where the Etcd instances that distribute Cilium state between nodes would be managed in pod network by Cilium itself This support which had been previously deprecated as complicated and error prone has now been removed Refer to ref k8s install etcd for alternatives for running Cilium with Etcd For IPsec support for a single key has been removed Per tunnel keys will now be used regardless of the presence of the sign in the secret The option to run a synchronous probe using cilium health status probe is no longer supported and is now a hidden option that returns the results of the most recent cached probe It will be removed in a future release The Cilium status API now reports the KVStore subsystem with Disabled state when disabled instead of OK state and Disabled message Support for metallb bgp deprecated since 1 14 has been removed Removed Options The previously deprecated clustermesh ip identities sync timeout flag has been removed in favor of clustermesh sync timeout The previously deprecated built in WireGuard userspace mode fallback Helm wireguard userspaceFallback has been removed Users of WireGuard transparent encryption are required to use a Linux kernel with WireGuard support The previously deprecated metallb bgp flags bgp config path bgp announce lb ip and bgp announce pod cidr have been removed Users are now required to use Cilium BGP control plane for BGP advertisements Deprecated Options The high scale mode for ipcache has been deprecated and will be removed in v1 18 The hubble relay flag dial timeout has been deprecated now a no op and will be removed in Cilium 1 18 Helm Options The Helm options hubble tls server cert hubble tls server key hubble relay tls client cert hubble relay tls client key hubble relay tls server cert hubble relay tls server key hubble ui tls client cert and hubble ui tls client key have been deprecated in favor of the associated existingSecret options and will be removed in a future release The default value of hubble tls auto certValidityDuration has been lowered from 1095 days to 365 days because recent versions of MacOS will fail to validate certificates with expirations longer than 825 days The Helm option hubble relay dialTimeout has been deprecated now a no op and will be removed in Cilium 1 18 The metallb bgp integration Helm options bgp enabled bgp announce podCIDR and bgp announce loadbalancerIP have been removed Users are now required to use Cilium BGP control plane options available under bgpControlPlane for BGP announcements The default value of dnsProxy endpointMaxIpPerHostname and its corresponding agent option has been increased from 50 to 1000 to reflect improved scaling of toFQDNs policies and to better handle domains which return a large number of IPs with short TTLs Agent Options The CONNTRACK LOCAL option has been deprecated and will be removed in a future release Bugtool Options The flag k8s mode and related flags cilium agent container name k8s namespace k8s label have been deprecated and will be removed in a Cilium 1 18 Cilium CLI should be used to gather a sysdump from a K8s cluster Added Metrics cilium node health connectivity status cilium node health connectivity latency seconds cilium operator unmanaged pods cilium policy selector match count max cilium identity cache timer duration cilium identity cache timer trigger latency cilium identity cache timer trigger folds Removed Metrics cilium cidrgroup 
translation time stats seconds has been removed as the measured code path no longer exists cilium triggers policy update total has been removed cilium triggers policy update folds has been removed cilium triggers policy update call duration has been removed Changed Metrics Deprecated Metrics cilium node connectivity status is now deprecated Please use cilium node health connectivity status instead cilium node connectivity latency seconds is now deprecated Please use cilium node health connectivity latency seconds instead Hubble CLI the cluster behavior changed to show flows emitted from nodes outside of the provided cluster name either coming from or going to the target cluster This change brings consistency between the cluster and namespace flags and removed the incompatibility between the cluster and node name flags The previous behavior of cluster foo can be reproduced with node name foo shows all flows emitted from a node in cluster foo Advanced Upgrade Impact Upgrades are designed to have minimal impact on your running deployment Networking connectivity policy enforcement and load balancing will remain functional in general The following is a list of operations that will not be available during the upgrade API aware policy rules are enforced in user space proxies and are running as part of the Cilium pod Upgrading Cilium causes the proxy to restart which results in a connectivity outage and causes the connection to reset Existing policy will remain effective but implementation of new policy rules will be postponed to after the upgrade has been completed on a particular node Monitoring components such as cilium dbg monitor will experience a brief outage while the Cilium pod is restarting Events are queued up and read after the upgrade If the number of events exceeds the event buffer size events will be lost upgrade configmap Rebasing a ConfigMap This section describes the procedure to rebase an existing term ConfigMap to the template of another version Export the current ConfigMap kubectl get configmap n kube system cilium config o yaml export cilium cm old yaml cat cilium cm old yaml apiVersion v1 data clean cilium state false debug true disable ipv4 false etcd config endpoints https 192 168 60 11 2379 In case you want to use TLS in etcd uncomment the trusted ca file line and create a kubernetes secret by following the tutorial in https cilium link etcd config trusted ca file var lib etcd secrets etcd client ca crt In case you want client to server authentication uncomment the following lines and add the certificate and key in cilium etcd secrets below key file var lib etcd secrets etcd client key cert file var lib etcd secrets etcd client crt kind ConfigMap metadata creationTimestamp null name cilium config selfLink api v1 namespaces kube system configmaps cilium config In the term ConfigMap above we can verify that Cilium is using debug with true it has a etcd endpoint running with TLS https etcd io docs latest op guide security and the etcd is set up to have client to server authentication https etcd io docs latest op guide security example 2 client to server authentication with https client certificates Generate the latest ConfigMap code block shell session helm template cilium namespace kube system set agent false set config enabled true set operator enabled false cilium configmap yaml Add new options Add the new options manually to your old term ConfigMap and make the necessary changes In this example the debug option is meant to be kept with true the etcd config is kept 
Add new options
~~~~~~~~~~~~~~~

Add the new options manually to your old :term:`ConfigMap`, and make the necessary changes.

In this example, the ``debug`` option is meant to be kept with ``true``, the ``etcd-config`` is
kept unchanged, and ``monitor-aggregation`` is a new option, but after reading the
:ref:`version_notes` the value was kept unchanged from the default value.

After making the necessary changes, the old :term:`ConfigMap` was migrated with the new options
while keeping the configuration that we wanted:

::

        $ cat ./cilium_cm_old.yaml
        apiVersion: v1
        data:
          debug: "true"
          disable-ipv4: "false"
          # If you want to clean cilium state, change this value to true
          clean-cilium-state: "false"
          monitor-aggregation: "medium"
          etcd-config: |-
            ---
            endpoints:
            - https://192.168.60.11:2379
            #
            # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
            # and create a kubernetes secret by following the tutorial in
            # https://cilium.link/etcd-config
            # trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
            #
            # In case you want client to server authentication, uncomment the following
            # lines and add the certificate and key in cilium-etcd-secrets below
            # key-file: '/var/lib/etcd-secrets/etcd-client.key'
            # cert-file: '/var/lib/etcd-secrets/etcd-client.crt'
        kind: ConfigMap
        metadata:
          creationTimestamp: null
          name: cilium-config
          selfLink: /api/v1/namespaces/kube-system/configmaps/cilium-config

Apply new ConfigMap
~~~~~~~~~~~~~~~~~~~

After adding the options, manually save the file with your changes and install the
:term:`ConfigMap` in the ``kube-system`` namespace of your cluster.

.. code-block:: shell-session

    $ kubectl apply -n kube-system -f ./cilium_cm_old.yaml

As the :term:`ConfigMap` is successfully upgraded, we can start upgrading the Cilium
``DaemonSet`` and ``RBAC``, which will pick up the latest configuration from the
:term:`ConfigMap`.

Migrating from kvstore-backed identities to Kubernetes CRD-backed identities
-----------------------------------------------------------------------------

Beginning with Cilium 1.6, Kubernetes CRD-backed security identities can be used for smaller
clusters. Along with other changes in 1.6, this allows kvstore-free operation if desired. It is
possible to migrate identities from an existing kvstore deployment to CRD-backed identities. This
minimizes disruptions to traffic as the update rolls out through the cluster.

Migration
~~~~~~~~~

When identities change, existing connections can be disrupted while Cilium initializes and
synchronizes with the shared identity store. The disruption occurs when new numeric identities
are used for existing pods on some instances and others are used on others. When converting to
CRD-backed identities, it is possible to pre-allocate CRD identities so that the numeric
identities match those in the kvstore. This allows new and old Cilium instances in the rollout to
agree.

There are two ways to achieve this: you can either run a one-off
``cilium preflight migrate-identity`` script, which will perform a point-in-time copy of all
identities from the kvstore to CRDs (added in Cilium 1.6), or use the Double Write identity
allocation mode, which will have Cilium manage identities in both the kvstore and CRDs at the
same time for a seamless migration (added in Cilium 1.17).
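Before starting either migration path, it can be useful to confirm which identity allocation mode
the cluster is currently using and whether any CRD identities already exist. This is only a
sketch; it assumes the agent configuration lives in the ``cilium-config`` ConfigMap as shown
earlier in this guide.

.. code-block:: shell-session

    # Show the currently configured identity allocation mode (e.g. "kvstore" or "crd").
    kubectl -n kube-system get configmap cilium-config -o yaml | grep identity-allocation-mode

    # List any identities that already exist as CRDs (empty before the migration).
    kubectl get ciliumid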
Migration with the cilium preflight migrate-identity script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``cilium preflight migrate-identity`` script is a one-off tool that can be used to copy
identities from the kvstore into CRDs. It has a couple of limitations:

* If an identity is created in the kvstore after the one-off migration has been completed, it
  will not be copied into a CRD. This means that you need to perform the migration on a cluster
  with no identity churn.
* There is no easy way to revert back to ``identity-allocation-mode=kvstore`` if something goes
  wrong after Cilium has been migrated to ``identity-allocation-mode=crd``.

If these limitations are not acceptable, it is recommended to use the
:ref:`Double Write <double_write_migration>` identity allocation mode instead.

The following steps show an example of performing the migration using the
``cilium preflight migrate-identity`` script. It is safe to re-run the command if desired. It
will identify already allocated identities or ones that cannot be migrated. Note that identity
``34815`` is migrated, ``17003`` is already migrated, and ``11730`` has a conflict and a new ID
is allocated for those labels.

The steps below assume a stable cluster with no new identities created during the rollout. Once
Cilium using CRD-backed identities is running, it may begin allocating identities in a way that
conflicts with older ones in the kvstore.

The cilium preflight manifest requires etcd support and can be built with:

.. code-block:: shell-session

    helm template cilium \
      --namespace=kube-system \
      --set preflight.enabled=true \
      --set agent=false \
      --set config.enabled=false \
      --set operator.enabled=false \
      --set etcd.enabled=true \
      --set etcd.ssl=true \
      > cilium-preflight.yaml
    kubectl create -f cilium-preflight.yaml

Example migration
^^^^^^^^^^^^^^^^^

.. code-block:: shell-session

    $ kubectl exec -n kube-system cilium-pre-flight-check-1234 -- cilium-dbg preflight migrate-identity
    INFO[0000] Setting up kvstore client
    INFO[0000] Connecting to etcd server...  config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore
    INFO[0000] Setting up kubernetes client
    INFO[0000] Establishing connection to apiserver  host="https://192.168.60.11:6443" subsys=k8s
    INFO[0000] Connected to apiserver  subsys=k8s
    INFO[0000] Got lease ID 29c66c67db8870c8  subsys=kvstore
    INFO[0000] Got lock lease ID 29c66c67db8870ca  subsys=kvstore
    INFO[0000] Successfully verified version of etcd endpoint  config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13
    INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date  name=CiliumNetworkPolicy/v2 subsys=k8s
    INFO[0000] Updating CRD (CustomResourceDefinition)...  name=v2.CiliumEndpoint subsys=k8s
    INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumEndpoint subsys=k8s
    INFO[0001] Updating CRD (CustomResourceDefinition)...  name=v2.CiliumNode subsys=k8s
    INFO[0002] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumNode subsys=k8s
    INFO[0002] Updating CRD (CustomResourceDefinition)...  name=v2.CiliumIdentity subsys=k8s
    INFO[0003] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumIdentity subsys=k8s
    INFO[0003] Listing identities in kvstore
    INFO[0003] Migrating identities to CRD
    INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
    INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
    INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
    INFO[0003] Migrated identity  identity=34815 identityLabels="k8s:class=tiefighter;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;"
    WARN[0003] ID is allocated to a different key in CRD. A new ID will be allocated for the this key  identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
    INFO[0003] Reusing existing global key  key="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" subsys=allocator
    INFO[0003] New ID allocated for key in CRD  identity=17281 identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
    INFO[0003] ID was already allocated to this key. It is already migrated  identity=17003 identityLabels="k8s:class=xwing;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=alliance;"

.. note::

    It is also possible to use the ``--k8s-kubeconfig-path`` and ``--kvstore-opt`` ``cilium`` CLI
    options with the preflight command. The default is to derive the configuration as
    cilium-agent does.

    .. code-block:: shell-session

        cilium preflight migrate-identity --k8s-kubeconfig-path /var/lib/cilium/cilium.kubeconfig --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd-config.yml

Once the migration is complete, confirm the endpoint identities match by listing the endpoints
stored in CRDs and in etcd:

.. code-block:: shell-session

    kubectl get ciliumendpoints -A                                        # new CRD-backed endpoints
    kubectl exec -n kube-system cilium-1234 -- cilium-dbg endpoint list   # existing etcd-backed endpoints

Clearing CRD identities
~~~~~~~~~~~~~~~~~~~~~~~

If a migration has gone wrong, it is possible to start with a clean slate. Ensure that no Cilium
instances are running with ``identity-allocation-mode=crd`` and execute:

.. code-block:: shell-session

    kubectl delete ciliumid --all
.. _double_write_migration:

Migration with the Double Write identity allocation mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: beta.rst

The Double Write identity allocation mode allows Cilium to allocate identities as KVStore values
and as CRDs at the same time. This mode also has two versions: one where the source of truth
comes from the kvstore (``identity-allocation-mode=doublewrite-readkvstore``) and one where the
source of truth comes from CRDs (``identity-allocation-mode=doublewrite-readcrd``).

.. note::

    Double Write mode is not compatible with Consul as the KVStore.

The high-level migration plan looks as follows:

#. Starting state: Cilium is running in KVStore mode.
#. Switch Cilium to Double Write mode with all reads happening from the KVStore. This is almost
   the same as the pure KVStore mode, with the only difference being that all identities are
   duplicated as CRDs but are not used.
#. Switch Cilium to Double Write mode with all reads happening from CRDs. This is equivalent to
   Cilium running in pure CRD mode, but identities will still be updated in the KVStore to allow
   for the possibility of a fast rollback.
#. Switch Cilium to CRD mode. The KVStore will no longer be used and will be ready for
   decommission.

This will allow you to perform a gradual and seamless migration with the possibility of a fast
rollback at steps two or three. Furthermore, when the Double Write mode is enabled, the Operator
will emit additional metrics to help monitor the migration progress. These metrics can be used
for alerting about identity inconsistencies between the KVStore and CRDs.

Note that you can also use this to migrate from CRD to KVStore mode: all operations simply need
to be repeated in reverse order.

Rollout Instructions
^^^^^^^^^^^^^^^^^^^^

#. Re-deploy first the Operator and then the Agents with
   ``identity-allocation-mode=doublewrite-readkvstore``.
#. Monitor the Operator metrics and logs to ensure that all identities have converged between the
   KVStore and CRDs. The relevant metrics emitted by the Operator are:

   * ``cilium_operator_identity_crd_total_count`` and
     ``cilium_operator_identity_kvstore_total_count`` report the total number of identities in
     CRDs and in the KVStore, respectively.
   * ``cilium_operator_identity_crd_only_count`` and
     ``cilium_operator_identity_kvstore_only_count`` report the number of identities that are
     only in CRDs or only in the KVStore, respectively, to help detect inconsistencies.

   In case further investigation is needed, the Operator logs will contain detailed information
   about the discrepancies between KVStore and CRD identities. Note that garbage collection for
   KVStore identities and CRD identities happens at slightly different times, so it is possible
   to see discrepancies in the metrics for certain periods of time, depending on the
   ``identity-gc-interval`` and ``identity-heartbeat-timeout`` settings.

#. Once all identities have converged, re-deploy the Operator and the Agents with
   ``identity-allocation-mode=doublewrite-readcrd``. This will cause Cilium to read identities
   only from CRDs but continue to write them to the KVStore.
#. Once you are ready to decommission the KVStore, re-deploy first the Agents and then the
   Operator with ``identity-allocation-mode=crd``. This will make Cilium read and write
   identities only to CRDs. You can now decommission the KVStore.
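As a rough illustration, the rollout above can be driven through Helm when Cilium is installed
that way. This is only a sketch and not part of the official procedure: it assumes the chart
exposes the agent option as the ``identityAllocationMode`` Helm value and that the release is
named ``cilium`` in ``kube-system``. Note that a single ``helm upgrade`` rolls out the Operator
and the Agents together; if you need the strict ordering described above, restart the components
individually after changing the value, and only move to the next step once identities have
converged.

.. code-block:: shell-session

    # Step 2 of the plan: keep reading from the kvstore, duplicate identities as CRDs.
    helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
        --set identityAllocationMode=doublewrite-readkvstore

    # Step 3 of the plan: read from CRDs, keep writing to the kvstore for fast rollback.
    helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
        --set identityAllocationMode=doublewrite-readcrd

    # Step 4 of the plan: CRDs only; the kvstore can then be decommissioned.
    helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
        --set identityAllocationMode=crd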
.. _cnp_validation:

CNP Validation
--------------

Running the CNP Validator will make sure the policies deployed in the cluster are valid. It is
important to run this validation before an upgrade so that Cilium behaves correctly after the
upgrade. Skipping this validation may prevent Cilium from updating its ``NodeStatus`` in those
invalid Network Policies and, in the worst-case scenario, it may give a false sense of security
to the user if a policy is badly formatted and Cilium is not enforcing that policy due to a bad
validation schema. This CNP Validator is automatically executed as part of the pre-flight check
(:ref:`pre_flight`).

Start by deploying the ``cilium-pre-flight-check`` and check if the ``Deployment`` shows
``READY 1/1``. If it does not, check the pod logs:

.. code-block:: shell-session

    $ kubectl get deployment -n kube-system cilium-pre-flight-check -w
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   0/1     1            0           12s

    $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator --previous
    level=info msg="Setting up kubernetes client"
    level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK"
    level=error msg="Validating CiliumNetworkPolicy 'default/cnp-update': unexpected validation error: spec.labels: Invalid value: \"string\": spec.labels in body must be of type object: \"string\""
    level=error msg="Found invalid CiliumNetworkPolicy"

In this example, we can see that the ``CiliumNetworkPolicy`` in the ``default`` namespace with
the name ``cnp-update`` is not valid for the Cilium version we are trying to upgrade to. In order
to fix this policy we need to edit it. We can do this by saving the policy locally and modifying
it. For this example, it seems ``spec.labels`` has been set to an array of strings, which is not
correct as per the official schema.

.. code-block:: shell-session

    $ kubectl get cnp -n default cnp-update -o yaml > cnp-bad.yaml
    $ cat cnp-bad.yaml
    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    spec:
      endpointSelector:
        matchLabels:
          id: app1
      ingress:
      - fromEndpoints:
        - matchLabels:
            id: app2
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
      labels:
      - custom=true

To fix this policy, we need to set ``spec.labels`` with the right format and commit these changes
into Kubernetes:

.. code-block:: shell-session

    $ cat cnp-bad.yaml
    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    spec:
      endpointSelector:
        matchLabels:
          id: app1
      ingress:
      - fromEndpoints:
        - matchLabels:
            id: app2
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
      labels:
      - key: custom
        value: "true"

    $ kubectl apply -f cnp-bad.yaml

After applying the fixed policy, we can delete the pod that was validating the policies so that
Kubernetes immediately creates a new pod to verify if the fixed policies are now valid:

.. code-block:: shell-session

    $ kubectl delete pod -n kube-system -l k8s-app=cilium-pre-flight-check-deployment
    pod "cilium-pre-flight-check-86dfb69668-ngbql" deleted

    $ kubectl get deployment -n kube-system cilium-pre-flight-check
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   1/1     1            1           55m

    $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator
    level=info msg="Setting up kubernetes client"
    level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s
    level=info msg="Connected to apiserver" subsys=k8s
    level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK"
    level=info msg="Validating CiliumNetworkPolicy 'default/cnp-update': OK"
    level=info msg="All CCNPs and CNPs valid"

Once they are valid, you can continue with the upgrade process (:ref:`cleanup_preflight_check`).
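Once the validator reports that all policies are valid, the pre-flight resources themselves are
no longer needed before proceeding with the upgrade. The following is only a sketch, assuming the
pre-flight check was deployed from a ``cilium-preflight.yaml`` manifest as shown earlier in this
guide; if it was installed differently, remove it the same way it was installed (see
:ref:`cleanup_preflight_check`).

.. code-block:: shell-session

    # Remove the pre-flight deployment once validation has passed.
    kubectl delete -f cilium-preflight.yaml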
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _admin_guide: ############### Troubleshooting ############### This document describes how to troubleshoot Cilium in different deployment modes. It focuses on a full deployment of Cilium within a datacenter or public cloud. If you are just looking for a simple way to experiment, we highly recommend trying out the :ref:`getting_started` guide instead. This guide assumes that you have read the :ref:`network_root` and `security_root` which explain all the components and concepts. We use GitHub issues to maintain a list of `Cilium Frequently Asked Questions (FAQ)`_. You can also check there to see if your question(s) is already addressed. Component & Cluster Health ========================== Kubernetes ---------- An initial overview of Cilium can be retrieved by listing all pods to verify whether all pods have the status ``Running``: .. code-block:: shell-session $ kubectl -n kube-system get pods -l k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-2hq5z 1/1 Running 0 4d cilium-6kbtz 1/1 Running 0 4d cilium-klj4b 1/1 Running 0 4d cilium-zmjj9 1/1 Running 0 4d If Cilium encounters a problem that it cannot recover from, it will automatically report the failure state via ``cilium-dbg status`` which is regularly queried by the Kubernetes liveness probe to automatically restart Cilium pods. If a Cilium pod is in state ``CrashLoopBackoff`` then this indicates a permanent failure scenario. Detailed Status ~~~~~~~~~~~~~~~ If a particular Cilium pod is not in running state, the status and health of the agent on that node can be retrieved by running ``cilium-dbg status`` in the context of that pod: .. code-block:: shell-session $ kubectl -n kube-system exec cilium-2hq5z -- cilium-dbg status KVStore: Ok etcd: 1/1 connected: http://demo-etcd-lab--a.etcd.tgraf.test1.lab.corp.isovalent.link:2379 - 3.2.5 (Leader) ContainerRuntime: Ok docker daemon: OK Kubernetes: Ok OK Kubernetes APIs: ["cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service", "core/v1::Endpoint", "core/v1::Node", "CustomResourceDefinition"] Cilium: Ok OK NodeMonitor: Disabled Cilium health daemon: Ok Controller Status: 14/14 healthy Proxy Status: OK, ip 10.2.0.172, port-range 10000-20000 Cluster health: 4/4 reachable (2018-06-16T09:49:58Z) Alternatively, the ``k8s-cilium-exec.sh`` script can be used to run ``cilium-dbg status`` on all nodes. This will provide detailed status and health information of all nodes in the cluster: .. code-block:: shell-session curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh chmod +x ./k8s-cilium-exec.sh ... and run ``cilium-dbg status`` on all nodes: .. 
code-block:: shell-session $ ./k8s-cilium-exec.sh cilium-dbg status KVStore: Ok Etcd: http://127.0.0.1:2379 - (Leader) 3.1.10 ContainerRuntime: Ok Kubernetes: Ok OK Kubernetes APIs: ["networking.k8s.io/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service", "core/v1::Endpoint"] Cilium: Ok OK NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory Cilium health daemon: Ok Controller Status: 7/7 healthy Proxy Status: OK, ip 10.15.28.238, 0 redirects, port-range 10000-20000 Cluster health: 1/1 reachable (2018-02-27T00:24:34Z) Detailed information about the status of Cilium can be inspected with the ``cilium-dbg status --verbose`` command. Verbose output includes detailed IPAM state (allocated addresses), Cilium controller status, and details of the Proxy status. .. _ts_agent_logs: Logs ~~~~ To retrieve log files of a cilium pod, run (replace ``cilium-1234`` with a pod name returned by ``kubectl -n kube-system get pods -l k8s-app=cilium``) .. code-block:: shell-session kubectl -n kube-system logs --timestamps cilium-1234 If the cilium pod was already restarted due to the liveness problem after encountering an issue, it can be useful to retrieve the logs of the pod before the last restart: .. code-block:: shell-session kubectl -n kube-system logs --timestamps -p cilium-1234 Generic ------- When logged in a host running Cilium, the cilium CLI can be invoked directly, e.g.: .. code-block:: shell-session $ cilium-dbg status KVStore: Ok etcd: 1/1 connected: https://192.168.60.11:2379 - 3.2.7 (Leader) ContainerRuntime: Ok Kubernetes: Ok OK Kubernetes APIs: ["core/v1::Endpoint", "networking.k8s.io/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service"] Cilium: Ok OK NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPv4 address pool: 261/65535 allocated IPv6 address pool: 4/4294967295 allocated Controller Status: 20/20 healthy Proxy Status: OK, ip 10.0.28.238, port-range 10000-20000 Hubble: Ok Current/Max Flows: 2542/4096 (62.06%), Flows/s: 164.21 Metrics: Disabled Cluster health: 2/2 reachable (2018-04-11T15:41:01Z) .. _hubble_troubleshooting: Observing Flows with Hubble =========================== Hubble is a built-in observability tool which allows you to inspect recent flow events on all endpoints managed by Cilium. Ensure Hubble is running correctly ---------------------------------- To ensure the Hubble client can connect to the Hubble server running inside Cilium, you may use the ``hubble status`` command from within a Cilium pod: .. code-block:: shell-session $ hubble status Healthcheck (via unix:///var/run/cilium/hubble.sock): Ok Current/Max Flows: 4095/4095 (100.00%) Flows/s: 164.21 ``cilium-agent`` must be running with the ``--enable-hubble`` option (default) in order for the Hubble server to be enabled. When deploying Cilium with Helm, make sure to set the ``hubble.enabled=true`` value. To check if Hubble is enabled in your deployment, you may look for the following output in ``cilium-dbg status``: .. code-block:: shell-session $ cilium status ... Hubble: Ok Current/Max Flows: 4095/4095 (100.00%), Flows/s: 164.21 Metrics: Disabled ... .. note:: Pods need to be managed by Cilium in order to be observable by Hubble. See how to :ref:`ensure a pod is managed by Cilium<ensure_managed_pod>` for more details. 
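If ``cilium-dbg status`` reports Hubble as disabled, it can usually be enabled through Helm. This
is only a sketch, assuming the release is named ``cilium`` in the ``kube-system`` namespace;
``hubble.enabled`` is the Helm value mentioned above.

.. code-block:: shell-session

    # Enable the Hubble server embedded in the Cilium agent.
    helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set hubble.enabled=true

    # Restart the agents so they pick up the new configuration, then re-check.
    kubectl -n kube-system rollout restart daemonset/cilium
    kubectl -n kube-system exec ds/cilium -- hubble status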
Observing flows of a specific pod --------------------------------- In order to observe the traffic of a specific pod, you will first have to :ref:`retrieve the name of the cilium instance managing it<retrieve_cilium_pod>`. The Hubble CLI is part of the Cilium container image and can be accessed via ``kubectl exec``. The following query for example will show all events related to flows which either originated or terminated in the ``default/tiefighter`` pod in the last three minutes: .. code-block:: shell-session $ kubectl exec -n kube-system cilium-77lk6 -- hubble observe --since 3m --pod default/tiefighter May 4 12:47:08.811: default/tiefighter:53875 -> kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 -> kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 <- kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 <- kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: SYN) May 4 12:47:08.812: default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: SYN, ACK) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK, PSH) May 4 12:47:08.812: default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, PSH) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK, FIN) May 4 12:47:08.812: default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, FIN) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK) You may also use ``-o json`` to obtain more detailed information about each flow event. .. note:: **Hubble Relay** allows you to query multiple Hubble instances simultaneously without having to first manually target a specific node. See `Observing flows with Hubble Relay`_ for more information. Observing flows with Hubble Relay ================================= Hubble Relay is a service which allows to query multiple Hubble instances simultaneously and aggregate the results. See :ref:`hubble_setup` to enable Hubble Relay if it is not yet enabled and install the Hubble CLI on your local machine. You may access the Hubble Relay service by port-forwarding it locally: .. code-block:: shell-session kubectl -n kube-system port-forward service/hubble-relay --address 0.0.0.0 --address :: 4245:80 This will forward the Hubble Relay service port (``80``) to your local machine on port ``4245`` on all of it's IP addresses. You can verify that Hubble Relay can be reached by using the Hubble CLI and running the following command from your local machine: .. code-block:: shell-session hubble status This command should return an output similar to the following: :: Healthcheck (via localhost:4245): Ok Current/Max Flows: 16380/16380 (100.00%) Flows/s: 46.19 Connected Nodes: 4/4 You may see details about nodes that Hubble Relay is connected to by running the following command: .. 
code-block:: shell-session hubble list nodes As Hubble Relay shares the same API as individual Hubble instances, you may follow the `Observing flows with Hubble`_ section keeping in mind that limitations with regards to what can be seen from individual Hubble instances no longer apply. Connectivity Problems ===================== Cilium connectivity tests ------------------------------------ The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicy which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. .. note:: The connectivity tests this will only work in a namespace with no other pods or network policies applied. If there is a Cilium Clusterwide Network Policy enabled, that may also break this connectivity check. To run the connectivity tests create an isolated test namespace called ``cilium-test`` to deploy the tests with. .. parsed-literal:: kubectl create ns cilium-test kubectl apply --namespace=cilium-test -f \ |SCM_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml The tests cover various functionality of the system. Below we call out each test type. If tests pass, it suggests functionality of the referenced subsystem. +----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+ | Pod-to-pod (intra-host) | Pod-to-pod (inter-host) | Pod-to-service (intra-host) | Pod-to-service (inter-host) | Pod-to-external resource | +============================+=============================+===============================+=============================+========================================+ | eBPF routing is functional | Data plane, routing, network| eBPF service map lookup | VXLAN overlay port if used | Egress, CiliumNetworkPolicy, masquerade| +----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+ The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test: .. code-block:: shell-session $ kubectl get pods -n cilium-test NAME READY STATUS RESTARTS AGE echo-a-6788c799fd-42qxx 1/1 Running 0 69s echo-b-59757679d4-pjtdl 1/1 Running 0 69s echo-b-host-f86bd784d-wnh4v 1/1 Running 0 68s host-to-b-multi-node-clusterip-585db65b4d-x74nz 1/1 Running 0 68s host-to-b-multi-node-headless-77c64bc7d8-kgf8p 1/1 Running 0 67s pod-to-a-allowed-cnp-87b5895c8-bfw4x 1/1 Running 0 68s pod-to-a-b76ddb6b4-2v4kb 1/1 Running 0 68s pod-to-a-denied-cnp-677d9f567b-kkjp4 1/1 Running 0 68s pod-to-b-intra-node-nodeport-8484fb6d89-bwj8q 1/1 Running 0 68s pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk 1/1 Running 0 68s pod-to-b-multi-node-headless-5fd98b9648-5bjj8 1/1 Running 0 68s pod-to-b-multi-node-nodeport-74bd8d7bd5-kmfmm 1/1 Running 0 68s pod-to-external-1111-7489c7c46d-jhtkr 1/1 Running 0 68s pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75 1/1 Running 0 68s Information about test failures can be determined by describing a failed test pod .. 
code-block:: shell-session $ kubectl describe pod pod-to-b-intra-node-hostport Warning Unhealthy 6s (x6 over 56s) kubelet, agent1 Readiness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused Warning Unhealthy 2s (x3 over 52s) kubelet, agent1 Liveness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused .. _cluster_connectivity_health: Checking cluster connectivity health ------------------------------------ Cilium can rule out network fabric related issues when troubleshooting connectivity issues by providing reliable health and latency probes between all cluster nodes and a simulated workload running on each node. By default when Cilium is run, it launches instances of ``cilium-health`` in the background to determine the overall connectivity status of the cluster. This tool periodically runs bidirectional traffic across multiple paths through the cluster and through each node using different protocols to determine the health status of each path and protocol. At any point in time, cilium-health may be queried for the connectivity status of the last probe. .. code-block:: shell-session $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-health status Probe time: 2018-06-16T09:51:58Z Nodes: ip-172-0-52-116.us-west-2.compute.internal (localhost): Host connectivity to 172.0.52.116: ICMP to stack: OK, RTT=315.254µs HTTP to agent: OK, RTT=368.579µs Endpoint connectivity to 10.2.0.183: ICMP to stack: OK, RTT=190.658µs HTTP to agent: OK, RTT=536.665µs ip-172-0-117-198.us-west-2.compute.internal: Host connectivity to 172.0.117.198: ICMP to stack: OK, RTT=1.009679ms HTTP to agent: OK, RTT=1.808628ms Endpoint connectivity to 10.2.1.234: ICMP to stack: OK, RTT=1.016365ms HTTP to agent: OK, RTT=2.29877ms For each node, the connectivity will be displayed for each protocol and path, both to the node itself and to an endpoint on that node. The latency specified is a snapshot at the last time a probe was run, which is typically once per minute. The ICMP connectivity row represents Layer 3 connectivity to the networking stack, while the HTTP connectivity row represents connection to an instance of the ``cilium-health`` agent running on the host or as an endpoint. .. _monitor: Monitoring Datapath State ------------------------- Sometimes you may experience broken connectivity, which may be due to a number of different causes. A main cause can be unwanted packet drops on the networking level. The tool ``cilium-dbg monitor`` allows you to quickly inspect and see if and where packet drops happen. Following is an example output (use ``kubectl exec`` as in previous examples if running with Kubernetes): .. 
code-block:: shell-session $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-dbg monitor --type drop Listening for events on 2 CPUs with 64x4096 of shared memory Press Ctrl-C to quit xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest xx drop (Invalid destination mac) to endpoint 0, identity 0->0: fe80::5c25:ddff:fe8e:78d8 -> ff02::2 RouterSolicitation The above indicates that a packet to endpoint ID ``25729`` has been dropped due to violation of the Layer 3 policy. Handling drop (CT: Map insertion failed) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If connectivity fails and ``cilium-dbg monitor --type drop`` shows ``xx drop (CT: Map insertion failed)``, then it is likely that the connection tracking table is filling up and the automatic adjustment of the garbage collector interval is insufficient. Setting ``--conntrack-gc-interval`` to an interval lower than the current value may help. This controls the time interval between two garbage collection runs. By default ``--conntrack-gc-interval`` is set to 0 which translates to using a dynamic interval. In that case, the interval is updated after each garbage collection run depending on how many entries were garbage collected. If very few or no entries were garbage collected, the interval will increase; if many entries were garbage collected, it will decrease. The current interval value is reported in the Cilium agent logs. Alternatively, the value for ``bpf-ct-global-any-max`` and ``bpf-ct-global-tcp-max`` can be increased. Setting both of these options will be a trade-off of CPU for ``conntrack-gc-interval``, and for ``bpf-ct-global-any-max`` and ``bpf-ct-global-tcp-max`` the amount of memory consumed. You can track conntrack garbage collection related metrics such as ``datapath_conntrack_gc_runs_total`` and ``datapath_conntrack_gc_entries`` to get visibility into garbage collection runs. Refer to :ref:`metrics` for more details. Enabling datapath debug messages ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, datapath debug messages are disabled, and therefore not shown in ``cilium-dbg monitor -v`` output. To enable them, add ``"datapath"`` to the ``debug-verbose`` option. Policy Troubleshooting ====================== .. _ensure_managed_pod: Ensure pod is managed by Cilium ------------------------------- A potential cause for policy enforcement not functioning as expected is that the networking of the pod selected by the policy is not being managed by Cilium. The following situations result in unmanaged pods: * The pod is running in host networking and will use the host's IP address directly. Such pods have full network connectivity but Cilium will not provide security policy enforcement for such pods by default. To enforce policy against these pods, either set ``hostNetwork`` to false or use :ref:`HostPolicies`. * The pod was started before Cilium was deployed. Cilium only manages pods that have been deployed after Cilium itself was started. Cilium will not provide security policy enforcement for such pods. These pods should be restarted in order to ensure that Cilium can provide security policy enforcement. 
If pod networking is not managed by Cilium. Ingress and egress policy rules selecting the respective pods will not be applied. See the section :ref:`network_policy` for more details. For a quick assessment of whether any pods are not managed by Cilium, the `Cilium CLI <https://github.com/cilium/cilium-cli>`_ will print the number of managed pods. If this prints that all of the pods are managed by Cilium, then there is no problem: .. code-block:: shell-session $ cilium status /¯¯\ /¯¯\__/¯¯\ Cilium: OK \__/¯¯\__/ Operator: OK /¯¯\__/¯¯\ Hubble: OK \__/¯¯\__/ ClusterMesh: disabled \__/ Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2 Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2 Containers: cilium-operator Running: 2 hubble-relay Running: 1 hubble-ui Running: 1 cilium Running: 2 Cluster Pods: 5/5 managed by Cilium ... You can run the following script to list the pods which are *not* managed by Cilium: .. code-block:: shell-session $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh $ chmod +x k8s-unmanaged.sh $ ./k8s-unmanaged.sh kube-system/cilium-hqpk7 kube-system/kube-addon-manager-minikube kube-system/kube-dns-54cccfbdf8-zmv2c kube-system/kubernetes-dashboard-77d8b98585-g52k5 kube-system/storage-provisioner Understand the rendering of your policy --------------------------------------- There are always multiple ways to approach a problem. Cilium can provide the rendering of the aggregate policy provided to it, leaving you to simply compare with what you expect the policy to actually be rather than search (and potentially overlook) every policy. At the expense of reading a very large dump of an endpoint, this is often a faster path to discovering errant policy requests in the Kubernetes API. Start by finding the endpoint you are debugging from the following list. There are several cross references for you to use in this list, including the IP address and pod labels: .. code-block:: shell-session kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint list When you find the correct endpoint, the first column of every row is the endpoint ID. Use that to dump the full endpoint information: .. code-block:: shell-session kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint get 59084 .. image:: images/troubleshooting_policy.png :align: center Importing this dump into a JSON-friendly editor can help browse and navigate the information here. At the top level of the dump, there are two nodes of note: * ``spec``: The desired state of the endpoint * ``status``: The current state of the endpoint This is the standard Kubernetes control loop pattern. Cilium is the controller here, and it is iteratively working to bring the ``status`` in line with the ``spec``. Opening the ``status``, we can drill down through ``policy.realized.l4``. Do your ``ingress`` and ``egress`` rules match what you expect? If not, the reference to the errant rules can be found in the ``derived-from-rules`` node. Policymap pressure and overflow ------------------------------- The most important step in debugging policymap pressure is finding out which node(s) are impacted. The ``cilium_bpf_map_pressure{map_name="cilium_policy_v2_*"}`` metric monitors the endpoint's BPF policymap pressure. 
This metric exposes the maximum BPF map pressure on the node, meaning the policymap experiencing the most pressure on a particular node. Once the node is known, the troubleshooting steps are as follows: 1. Find the Cilium pod on the node experiencing the problematic policymap pressure and obtain a shell via ``kubectl exec``. 2. Use ``cilium policy selectors`` to get an overview of which selectors are selecting many identities. The output of this command as of Cilium v1.15 additionally displays the namespace and name of the policy resource of each selector. 3. The type of selector tells you what sort of policy rule could be having an impact. The three existing types of selectors are explained below, each with specific steps depending on the selector. See the steps below corresponding to the type of selector. 4. Consider bumping the policymap size as a last resort. However, keep in mind the following implications: * Increased memory consumption for each policymap. * Generally, as identities increase in the cluster, the more work Cilium performs. * At a broader level, if the policy posture is such that all or nearly all identities are selected, this suggests that the posture is too permissive. +---------------+------------------------------------------------------------------------------------------------------------+ | Selector type | Form in ``cilium policy selectors`` output | +===============+============================================================================================================+ | CIDR | ``&LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,}`` | +---------------+------------------------------------------------------------------------------------------------------------+ | FQDN | ``MatchName: , MatchPattern: *`` | +---------------+------------------------------------------------------------------------------------------------------------+ | Label | ``&LabelSelector{MatchLabels:map[string]string{any.name: curl,k8s.io.kubernetes.pod.namespace: default,}`` | +---------------+------------------------------------------------------------------------------------------------------------+ An example output of ``cilium policy selectors``: .. code-block:: shell-session root@kind-worker:/home/cilium# cilium policy selectors SELECTOR LABELS USERS IDENTITIES &LabelSelector{MatchLabels:map[string]string{k8s.io.kubernetes.pod.namespace: kube-system,k8s.k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16500 &LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 MatchName: , MatchPattern: * default/tofqdn-dns-visibility 1 16777231 16777232 16777233 16860295 16860322 16860323 16860324 16860325 16860326 16860327 16860328 &LabelSelector{MatchLabels:map[string]string{any.name: netperf,k8s.io.kubernetes.pod.namespace: default,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860329 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.2/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860330 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.3/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860331 From the output above, we see that all three selectors are in use. 
The significant action here is to determine which selector is selecting the most identities, because the policy containing that selector is the likely cause for the policymap pressure. Label ~~~~~ See section on :ref:`identity-relevant labels <identity-relevant-labels>`. Another aspect to consider is the permissiveness of the policies and whether it could be reduced. CIDR ~~~~ One way to reduce the number of identities selected by a CIDR selector is to broaden the range of the CIDR, if possible. For example, in the above example output, the policy contains a ``/32`` rule for each CIDR, rather than using a wider range like ``/30`` instead. Updating the policy with this rule creates an identity that represents all IPs within the ``/30`` and therefore, only requires the selector to select 1 identity. FQDN ~~~~ See section on :ref:`isolating the source of toFQDNs issues regarding identities and policy <isolating-source-toFQDNs-issues-identities-policy>`. etcd (kvstore) ============== Introduction ------------ Cilium can be operated in CRD-mode and kvstore/etcd mode. When cilium is running in kvstore/etcd mode, the kvstore becomes a vital component of the overall cluster health as it is required to be available for several operations. Operations for which the kvstore is strictly required when running in etcd mode: Scheduling of new workloads: As part of scheduling workloads/endpoints, agents will perform security identity allocation which requires interaction with the kvstore. If a workload can be scheduled due to re-using a known security identity, then state propagation of the endpoint details to other nodes will still depend on the kvstore and thus packets drops due to policy enforcement may be observed as other nodes in the cluster will not be aware of the new workload. Multi cluster: All state propagation between clusters depends on the kvstore. Node discovery: New nodes require to register themselves in the kvstore. Agent bootstrap: The Cilium agent will eventually fail if it can't connect to the kvstore at bootstrap time, however, the agent will still perform all possible operations while waiting for the kvstore to appear. Operations which *do not* require kvstore availability: All datapath operations: All datapath forwarding, policy enforcement and visibility functions for existing workloads/endpoints do not depend on the kvstore. Packets will continue to be forwarded and network policy rules will continue to be enforced. However, if the agent requires to restart as part of the :ref:`etcd_recovery_behavior`, there can be delays in: * processing of flow events and metrics * short unavailability of layer 7 proxies NetworkPolicy updates: Network policy updates will continue to be processed and applied. Services updates: All updates to services will be processed and applied. Understanding etcd status ------------------------- The etcd status is reported when running ``cilium-dbg status``. The following line represents the status of etcd:: KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader) OK: The overall status. Either ``OK`` or ``Failure``. 1/1 connected: Number of total etcd endpoints and how many of them are reachable. lease-ID: UUID of the lease used for all keys owned by this agent. lock lease-ID: UUID of the lease used for locks acquired by this agent. has-quorum: Status of etcd quorum. Either ``true`` or set to an error. consecutive-errors: Number of consecutive quorum errors. 
Only printed if errors are present. https://192.168.60.11:2379 - 3.4.9 (Leader): List of all etcd endpoints stating the etcd version and whether the particular endpoint is currently the elected leader. If an etcd endpoint cannot be reached, the error is shown. .. _etcd_recovery_behavior: Recovery behavior ----------------- In the event of an etcd endpoint becoming unhealthy, etcd should automatically resolve this by electing a new leader and by failing over to a healthy etcd endpoint. As long as quorum is preserved, the etcd cluster will remain functional. In addition, Cilium performs a background check in an interval to determine etcd health and potentially take action. The interval depends on the overall cluster size. The larger the cluster, the longer the `interval <https://pkg.go.dev/github.com/cilium/cilium/pkg/kvstore?tab=doc#ExtraOptions.StatusCheckInterval>`_: * If no etcd endpoints can be reached, Cilium will report failure in ``cilium-dbg status``. This will cause the liveness and readiness probe of Kubernetes to fail and Cilium will be restarted. * A lock is acquired and released to test a write operation which requires quorum. If this operation fails, loss of quorum is reported. If quorum fails for three or more intervals in a row, Cilium is declared unhealthy. * The Cilium operator will constantly write to a heartbeat key (``cilium/.heartbeat``). All Cilium agents will watch for updates to this heartbeat key. This validates the ability for an agent to receive key updates from etcd. If the heartbeat key is not updated in time, the quorum check is declared to have failed and Cilium is declared unhealthy after 3 or more consecutive failures. Example of a status with a quorum failure which has not yet reached the threshold:: KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader) Example of a status with the number of quorum failures exceeding the threshold:: KVStore: Failure Err: quorum check failed 8 times in a row: 4m28.446600949s since last heartbeat update has been received .. _troubleshooting_clustermesh: .. include:: ./troubleshooting_clustermesh.rst .. _troubleshooting_servicemesh: .. include:: troubleshooting_servicemesh.rst Symptom Library =============== Node to node traffic is being dropped ------------------------------------- Symptom ~~~~~~~ Endpoint to endpoint communication on a single node succeeds but communication fails between endpoints across multiple nodes. Troubleshooting steps: ~~~~~~~~~~~~~~~~~~~~~~ #. Run ``cilium-health status`` on the node of the source and destination endpoint. It should describe the connectivity from that node to other nodes in the cluster, and to a simulated endpoint on each other node. Identify points in the cluster that cannot talk to each other. If the command does not describe the status of the other node, there may be an issue with the KV-Store. #. Run ``cilium-dbg monitor`` on the node of the source and destination endpoint. Look for packet drops. When running in :ref:`arch_overlay` mode: #. Run ``cilium-dbg bpf tunnel list`` and verify that each Cilium node is aware of the other nodes in the cluster. If not, check the logfile for errors. #. If nodes are being populated correctly, run ``tcpdump -n -i cilium_vxlan`` on each node to verify whether cross node traffic is being forwarded correctly between nodes. 
If packets are being dropped, * verify that the node IP listed in ``cilium-dbg bpf tunnel list`` can reach each other. * verify that the firewall on each node allows UDP port 8472. When running in :ref:`arch_direct_routing` mode: #. Run ``ip route`` or check your cloud provider router and verify that you have routes installed to route the endpoint prefix between all nodes. #. Verify that the firewall on each node permits to route the endpoint IPs. Useful Scripts ============== .. _retrieve_cilium_pod: Retrieve Cilium pod managing a particular pod --------------------------------------------- Identifies the Cilium pod that is managing a particular pod in a namespace: .. code-block:: shell-session k8s-get-cilium-pod.sh <pod> <namespace> **Example:** .. code-block:: shell-session $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-get-cilium-pod.sh $ chmod +x k8s-get-cilium-pod.sh $ ./k8s-get-cilium-pod.sh luke-pod default cilium-zmjj9 cilium-node-init-v7r9p cilium-operator-f576f7977-s5gpq Execute a command in all Kubernetes Cilium pods ----------------------------------------------- Run a command within all Cilium pods of a cluster .. code-block:: shell-session k8s-cilium-exec.sh <command> **Example:** .. code-block:: shell-session $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh $ chmod +x k8s-cilium-exec.sh $ ./k8s-cilium-exec.sh uptime 10:15:16 up 6 days, 7:37, 0 users, load average: 0.00, 0.02, 0.00 10:15:16 up 6 days, 7:32, 0 users, load average: 0.00, 0.03, 0.04 10:15:16 up 6 days, 7:30, 0 users, load average: 0.75, 0.27, 0.15 10:15:16 up 6 days, 7:28, 0 users, load average: 0.14, 0.04, 0.01 List unmanaged Kubernetes pods ------------------------------ Lists all Kubernetes pods in the cluster for which Cilium does *not* provide networking. This includes pods running in host-networking mode and pods that were started before Cilium was deployed. .. code-block:: shell-session k8s-unmanaged.sh **Example:** .. code-block:: shell-session $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh $ chmod +x k8s-unmanaged.sh $ ./k8s-unmanaged.sh kube-system/cilium-hqpk7 kube-system/kube-addon-manager-minikube kube-system/kube-dns-54cccfbdf8-zmv2c kube-system/kubernetes-dashboard-77d8b98585-g52k5 kube-system/storage-provisioner Reporting a problem =================== Before you report a problem, make sure to retrieve the necessary information from your cluster before the failure state is lost. .. _sysdump: Automatic log & state collection -------------------------------- .. include:: ../installation/cli-download.rst Then, execute ``cilium sysdump`` command to collect troubleshooting information from your Kubernetes cluster: .. code-block:: shell-session cilium sysdump Note that by default ``cilium sysdump`` will attempt to collect as much logs as possible and for all the nodes in the cluster. If your cluster size is above 20 nodes, consider setting the following options to limit the size of the sysdump. This is not required, but useful for those who have a constraint on bandwidth or upload size. * set the ``--node-list`` option to pick only a few nodes in case the cluster has many of them. * set the ``--logs-since-time`` option to go back in time to when the issues started. * set the ``--logs-limit-bytes`` option to limit the size of the log files (note: passed onto ``kubectl logs``; does not apply to entire collection archive). 
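As an illustration, the options above can be combined into a single invocation. This is only a
sketch: the node names and the time window are placeholders to replace with values from your own
cluster, and the timestamp must be in a format accepted by ``--logs-since-time`` (RFC3339).

.. code-block:: shell-session

    # Focused sysdump: two suspect nodes, logs since a given point in time,
    # individual log files capped at roughly 100 MB.
    cilium sysdump \
        --node-list node-1,node-2 \
        --logs-since-time "2024-01-01T00:00:00Z" \
        --logs-limit-bytes 104857600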
Ideally, a sysdump that has a full history of select nodes, rather than a brief history of all the nodes, would be preferred (by using ``--node-list``). The second recommended way would be to use ``--logs-since-time`` if you are able to narrow down when the issues started. Lastly, if the Cilium agent and Operator logs are too large, consider ``--logs-limit-bytes``. Use ``--help`` to see more options: .. code-block:: shell-session cilium sysdump --help Single Node Bugtool ~~~~~~~~~~~~~~~~~~~ If you are not running Kubernetes, it is also possible to run the bug collection tool manually with the scope of a single node: The ``cilium-bugtool`` captures potentially useful information about your environment for debugging. The tool is meant to be used for debugging a single Cilium agent node. In the Kubernetes case, if you have multiple Cilium pods, the tool can retrieve debugging information from all of them. The tool works by archiving a collection of command output and files from several places. By default, it writes to the ``tmp`` directory. Note that the command needs to be run from inside the Cilium pod/container. .. code-block:: shell-session cilium-bugtool When running it with no option as shown above, it will try to copy various files and execute some commands. If ``kubectl`` is detected, it will search for Cilium pods. The default label being ``k8s-app=cilium``, but this and the namespace can be changed via ``k8s-namespace`` and ``k8s-label`` respectively. If you want to capture the archive from a Kubernetes pod, then the process is a bit different .. code-block:: shell-session $ # First we need to get the Cilium pod $ kubectl get pods --namespace kube-system NAME READY STATUS RESTARTS AGE cilium-kg8lv 1/1 Running 0 13m kube-addon-manager-minikube 1/1 Running 0 1h kube-dns-6fc954457d-sf2nk 3/3 Running 0 1h kubernetes-dashboard-6xvc7 1/1 Running 0 1h $ # Run the bugtool from this pod $ kubectl -n kube-system exec cilium-kg8lv -- cilium-bugtool [...] $ # Copy the archive from the pod $ kubectl cp kube-system/cilium-kg8lv:/tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar /tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar [...] .. note:: Please check the archive for sensitive information and strip it away before sharing it with us. Below is an approximate list of the kind of information in the archive. * Cilium status * Cilium version * Kernel configuration * Resolve configuration * Cilium endpoint state * Cilium logs * Docker logs * ``dmesg`` * ``ethtool`` * ``ip a`` * ``ip link`` * ``ip r`` * ``iptables-save`` * ``kubectl -n kube-system get pods`` * ``kubectl get pods,svc for all namespaces`` * ``uname`` * ``uptime`` * ``cilium-dbg bpf * list`` * ``cilium-dbg endpoint get for each endpoint`` * ``cilium-dbg endpoint list`` * ``hostname`` * ``cilium-dbg policy get`` * ``cilium-dbg service list`` Debugging information ~~~~~~~~~~~~~~~~~~~~~ If you are not running Kubernetes, you can use the ``cilium-dbg debuginfo`` command to retrieve useful debugging information. If you are running Kubernetes, this command is automatically run as part of the system dump. ``cilium-dbg debuginfo`` can print useful output from the Cilium API. The output format is in Markdown format so this can be used when reporting a bug on the `issue tracker`_. Running without arguments will print to standard output, but you can also redirect to a file like .. code-block:: shell-session cilium-dbg debuginfo -f debuginfo.md .. 
note:: Please check the debuginfo file for sensitive information and strip it away before sharing it with us. Slack assistance ---------------- The `Cilium Slack`_ community is a helpful first point of assistance to get help troubleshooting a problem or to discuss options on how to address a problem. The community is open to anyone. Report an issue via GitHub -------------------------- If you believe to have found an issue in Cilium, please report a `GitHub issue`_ and make sure to attach a system dump as described above to ensure that developers have the best chance to reproduce the issue. .. _NodeSelector: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector .. _RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ .. _CNI: https://github.com/containernetworking/cni .. _Volumes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/ .. _Cilium Frequently Asked Questions (FAQ): https://github.com/cilium/cilium/issues?utf8=%E2%9C%93&q=label%3Akind%2Fquestion%20 .. _issue tracker: https://github.com/cilium/cilium/issues .. _GitHub issue: `issue tracker`_
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _admin_system_reqs: ******************* System Requirements ******************* Before installing Cilium, please ensure that your system meets the minimum requirements below. Most modern Linux distributions already do. Summary ======= When running Cilium using the container image ``cilium/cilium``, the host system must meet these requirements: - Hosts with either AMD64 or AArch64 architecture - `Linux kernel`_ >= 5.4 or equivalent (e.g., 4.18 on RHEL 8.6) When running Cilium as a native process on your host (i.e. **not** running the ``cilium/cilium`` container image) these additional requirements must be met: - `clang+LLVM`_ >= 10.0 .. _`clang+LLVM`: https://llvm.org When running Cilium without Kubernetes these additional requirements must be met: - :ref:`req_kvstore` etcd >= 3.1.0 ======================== ============================== =================== Requirement Minimum Version In cilium container ======================== ============================== =================== `Linux kernel`_ >= 5.4 or >= 4.18 on RHEL 8.6 no Key-Value store (etcd) >= 3.1.0 no clang+LLVM >= 10.0 yes ======================== ============================== =================== Architecture Support ==================== Cilium images are built for the following platforms: - AMD64 - AArch64 Linux Distribution Compatibility & Considerations ================================================= The following table lists Linux distributions that are known to work well with Cilium. Some distributions require a few initial tweaks. Please make sure to read each distribution's specific notes below before attempting to run Cilium. ========================== ==================== Distribution Minimum Version ========================== ==================== `Amazon Linux 2`_ all `Bottlerocket OS`_ all `CentOS`_ >= 8.6 `Container-Optimized OS`_ >= 85 Debian_ >= 10 Buster `Fedora CoreOS`_ >= 31.20200108.3.0 Flatcar_ all LinuxKit_ all Opensuse_ Tumbleweed, >=Leap 15.4 `RedHat Enterprise Linux`_ >= 8.6 `RedHat CoreOS`_ >= 4.12 `Talos Linux`_ >= 1.5.0 Ubuntu_ >= 20.04 ========================== ==================== .. _Amazon Linux 2: https://docs.aws.amazon.com/AL2/latest/relnotes/relnotes-al2.html .. _CentOS: https://centos.org .. _Container-Optimized OS: https://cloud.google.com/container-optimized-os/docs .. _Fedora CoreOS: https://fedoraproject.org/coreos/release-notes .. _Debian: https://www.debian.org/releases/ .. _Flatcar: https://www.flatcar.org/releases .. _LinuxKit: https://github.com/linuxkit/linuxkit/tree/master/kernel .. _RedHat Enterprise Linux: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux .. _RedHat CoreOS: https://access.redhat.com/articles/6907891 .. _Ubuntu: https://www.releases.ubuntu.com/ .. _Opensuse: https://en.opensuse.org/openSUSE:Roadmap .. _Bottlerocket OS: https://github.com/bottlerocket-os/bottlerocket .. _Talos Linux: https://www.talos.dev/ .. note:: The above list is based on feedback by users. If you find an unlisted Linux distribution that works well, please let us know by opening a GitHub issue or by creating a pull request that updates this guide. Flatcar on AWS EKS in ENI mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Flatcar is known to manipulate network interfaces created and managed by Cilium. 
When running the official Flatcar image for AWS EKS nodes in ENI mode, this
may cause connectivity issues and potentially prevent the Cilium agent from
booting. To avoid this, disable DHCP on the ENI interfaces and mark them as
unmanaged by adding

.. code-block:: text

   [Match]
   Name=eth[1-9]*

   [Network]
   DHCP=no

   [Link]
   Unmanaged=yes

to ``/etc/systemd/network/01-no-dhcp.network`` and then

.. code-block:: shell-session

   systemctl daemon-reload
   systemctl restart systemd-networkd

Ubuntu 22.04 on Raspberry Pi
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before running Cilium on Ubuntu 22.04 on a Raspberry Pi, please make sure to
install the following package:

.. code-block:: shell-session

   sudo apt install linux-modules-extra-raspi

.. _admin_kernel_version:

Linux Kernel
============

Base Requirements
~~~~~~~~~~~~~~~~~

Cilium leverages and builds on the kernel eBPF functionality as well as
various subsystems which integrate with eBPF. Therefore, host systems must run
a recent Linux kernel to run a Cilium agent. More recent kernels may provide
additional eBPF functionality that Cilium will automatically detect and use on
agent start.

For this version of Cilium, it is recommended to use kernel 5.4 or later (or
equivalent such as 4.18 on RHEL8). For a list of features that require newer
kernels, see :ref:`advanced_features`.

In order for the eBPF feature to be enabled properly, the following kernel
configuration options must be enabled. This is typically the case with
distribution kernels. When an option can be built as a module or statically
linked, either choice is valid.

::

        CONFIG_BPF=y
        CONFIG_BPF_SYSCALL=y
        CONFIG_NET_CLS_BPF=y
        CONFIG_BPF_JIT=y
        CONFIG_NET_CLS_ACT=y
        CONFIG_NET_SCH_INGRESS=y
        CONFIG_CRYPTO_SHA1=y
        CONFIG_CRYPTO_USER_API_HASH=y
        CONFIG_CGROUPS=y
        CONFIG_CGROUP_BPF=y
        CONFIG_PERF_EVENTS=y
        CONFIG_SCHEDSTATS=y

Requirements for Iptables-based Masquerading
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are not using BPF for masquerading (``enable-bpf-masquerade=false``,
the default value), then you will need the following kernel configuration
options:

::

        CONFIG_NETFILTER_XT_SET=m
        CONFIG_IP_SET=m
        CONFIG_IP_SET_HASH_IP=m

Requirements for L7 and FQDN Policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

L7 proxy redirection currently uses ``TPROXY`` iptables actions as well as
``socket`` matches. For L7 redirection to work as intended, the kernel
configuration must include the following modules:

::

        CONFIG_NETFILTER_XT_TARGET_TPROXY=m
        CONFIG_NETFILTER_XT_TARGET_MARK=m
        CONFIG_NETFILTER_XT_TARGET_CT=m
        CONFIG_NETFILTER_XT_MATCH_MARK=m
        CONFIG_NETFILTER_XT_MATCH_SOCKET=m

When the ``xt_socket`` kernel module is missing, the forwarding of redirected
L7 traffic does not work in non-tunneled datapath modes. Since some notable
kernels (e.g., COS) ship without the ``xt_socket`` module, Cilium implements a
fallback compatibility mode to allow L7 policies and visibility to be used
with those kernels. Currently this fallback disables the ``ip_early_demux``
kernel feature in non-tunneled datapath modes, which may decrease system
networking performance. This guarantees that HTTP and Kafka redirection work
as intended. However, if HTTP or Kafka enforcement policies are never used,
this behavior can be turned off by adding the following to the helm
configuration command line:

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      ...
      --set enableXTSocketFallback=false

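Before deploying, you can check whether the kernel configuration options
listed in the sections above are enabled on a node; a minimal sketch (the
config location varies by distribution, and some kernels expose it via
``/proc/config.gz`` instead of ``/boot``):

.. code-block:: shell-session

    $ # Spot-check a few of the required options (output is illustrative)
    $ grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=|CONFIG_NET_SCH_INGRESS=' /boot/config-$(uname -r)
    CONFIG_BPF=y
    CONFIG_BPF_SYSCALL=y
    CONFIG_NET_SCH_INGRESS=m
    $ # On kernels built with CONFIG_IKCONFIG_PROC, check /proc/config.gz instead
    $ zgrep CONFIG_NETFILTER_XT_MATCH_SOCKET /proc/config.gz
    CONFIG_NETFILTER_XT_MATCH_SOCKET=m
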
.. _features_kernel_matrix:

Requirements for IPsec
~~~~~~~~~~~~~~~~~~~~~~

The :ref:`encryption_ipsec` feature requires a lot of kernel configuration
options, most of which are needed to enable the actual encryption. Note that
the specific options required depend on the algorithm. The list below
corresponds to the requirements for GCM-128-AES.

::

        CONFIG_XFRM=y
        CONFIG_XFRM_OFFLOAD=y
        CONFIG_XFRM_STATISTICS=y
        CONFIG_XFRM_ALGO=m
        CONFIG_XFRM_USER=m
        CONFIG_INET{,6}_ESP=m
        CONFIG_INET{,6}_IPCOMP=m
        CONFIG_INET{,6}_XFRM_TUNNEL=m
        CONFIG_INET{,6}_TUNNEL=m
        CONFIG_INET_XFRM_MODE_TUNNEL=m
        CONFIG_CRYPTO_AEAD=m
        CONFIG_CRYPTO_AEAD2=m
        CONFIG_CRYPTO_GCM=m
        CONFIG_CRYPTO_SEQIV=m
        CONFIG_CRYPTO_CBC=m
        CONFIG_CRYPTO_HMAC=m
        CONFIG_CRYPTO_SHA256=m
        CONFIG_CRYPTO_AES=m

Requirements for the Bandwidth Manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :ref:`bandwidth-manager` requires the following kernel configuration
option to change the packet scheduling algorithm.

::

        CONFIG_NET_SCH_FQ=m

Requirements for Netkit Device Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :ref:`netkit` requires the following kernel configuration option to create
netkit devices.

::

        CONFIG_NETKIT=y

.. _advanced_features:

Required Kernel Versions for Advanced Features
==============================================

Additional kernel features continue to progress in the Linux community. Some
of Cilium's features are dependent on newer kernel versions and are thus
enabled by upgrading to more recent kernel versions, as detailed below.

====================================================== ===============================
Cilium Feature                                         Minimum Kernel Version
====================================================== ===============================
:ref:`encryption_wg`                                   >= 5.6
Full support for :ref:`session-affinity`               >= 5.7
BPF-based proxy redirection                            >= 5.7
Socket-level LB bypass in pod netns                    >= 5.7
L3 devices                                             >= 5.8
BPF-based host routing                                 >= 5.10
:ref:`enable_multicast` (AMD64)                        >= 5.10
IPv6 BIG TCP support                                   >= 5.19
:ref:`enable_multicast` (AArch64)                      >= 6.0
IPv4 BIG TCP support                                   >= 6.3
====================================================== ===============================

.. _req_kvstore:

Key-Value store
===============

Cilium optionally uses a distributed Key-Value store to manage, synchronize
and distribute security identities across all cluster nodes. The following
Key-Value stores are currently supported:

- etcd >= 3.1.0

Cilium can be used without a Key-Value store when CRD-based state management
is used with Kubernetes. This is the default for new Cilium installations.
Larger clusters will perform better with Key-Value store backed identity
management instead; see :ref:`k8s_quick_install` for more details.

See :ref:`install_kvstore` for details on how to configure the
``cilium-agent`` to use a Key-Value store.

clang+LLVM
==========

.. note::

   This requirement is only needed if you run ``cilium-agent`` natively. If
   you are using the Cilium container image ``cilium/cilium``, clang+LLVM is
   included in the container image.

LLVM is the compiler suite that Cilium uses to generate eBPF bytecode programs
to be loaded into the Linux kernel. The minimum supported version of LLVM
available to ``cilium-agent`` is >= 10.0. The version of clang installed must
be compiled with the eBPF backend enabled.

See https://releases.llvm.org/ for information on how to download and install
LLVM.

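To verify that the installed LLVM build was compiled with the eBPF backend
enabled, you can list the registered targets of ``llc``; a minimal sketch (the
exact target list varies with the LLVM version):

.. code-block:: shell-session

    $ llc --version | grep -i bpf
        bpf        - BPF (host endian)
        bpfeb      - BPF (big endian)
        bpfel      - BPF (little endian)
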
.. _firewall_requirements:

Firewall Rules
==============

If you are running Cilium in an environment that requires firewall rules to
enable connectivity, you will have to add the following rules to ensure Cilium
works properly.

It is recommended, but optional, that all nodes running Cilium in a given
cluster be able to ping each other so ``cilium-health`` can report and monitor
connectivity among nodes. This requires ICMP Type 0/8, Code 0 open among all
nodes. TCP 4240 should also be open among all nodes for ``cilium-health``
monitoring. Note that it is also an option to only use one of these two
methods to enable health monitoring. If the firewall does not permit either of
these methods, Cilium will still operate fine but will not be able to provide
health information.

For IPsec-enabled Cilium deployments, you need to ensure that the firewall
allows ESP traffic through. For example, AWS Security Groups do not allow ESP
traffic by default.

If you are using WireGuard, you must allow UDP port 51871.

If you are using VXLAN overlay network mode, Cilium uses Linux's default VXLAN
port 8472 over UDP, unless Linux has been configured otherwise. In this case,
UDP 8472 must be open among all nodes to enable VXLAN overlay mode. The same
applies to Geneve overlay network mode, except the port is UDP 6081.

If you are running in direct routing mode, your network must allow routing of
pod IPs.

As an example, if you are running on AWS with VXLAN overlay networking, here
is a minimum set of AWS Security Group (SG) rules. It assumes a separation
between the SG on the master nodes, ``master-sg``, and the worker nodes,
``worker-sg``. It also assumes ``etcd`` is running on the master nodes.

Master Nodes (``master-sg``) Rules:

======================== =============== ==================== ===============
Port Range / Protocol    Ingress/Egress  Source/Destination   Description
======================== =============== ==================== ===============
2379-2380/tcp            ingress         ``worker-sg``        etcd access
8472/udp                 ingress         ``master-sg`` (self) VXLAN overlay
8472/udp                 ingress         ``worker-sg``        VXLAN overlay
4240/tcp                 ingress         ``master-sg`` (self) health checks
4240/tcp                 ingress         ``worker-sg``        health checks
ICMP 8/0                 ingress         ``master-sg`` (self) health checks
ICMP 8/0                 ingress         ``worker-sg``        health checks
8472/udp                 egress          ``master-sg`` (self) VXLAN overlay
8472/udp                 egress          ``worker-sg``        VXLAN overlay
4240/tcp                 egress          ``master-sg`` (self) health checks
4240/tcp                 egress          ``worker-sg``        health checks
ICMP 8/0                 egress          ``master-sg`` (self) health checks
ICMP 8/0                 egress          ``worker-sg``        health checks
======================== =============== ==================== ===============

Worker Nodes (``worker-sg``):

======================== =============== ==================== ===============
Port Range / Protocol    Ingress/Egress  Source/Destination   Description
======================== =============== ==================== ===============
8472/udp                 ingress         ``master-sg``        VXLAN overlay
8472/udp                 ingress         ``worker-sg`` (self) VXLAN overlay
4240/tcp                 ingress         ``master-sg``        health checks
4240/tcp                 ingress         ``worker-sg`` (self) health checks
ICMP 8/0                 ingress         ``master-sg``        health checks
ICMP 8/0                 ingress         ``worker-sg`` (self) health checks
8472/udp                 egress          ``master-sg``        VXLAN overlay
8472/udp                 egress          ``worker-sg`` (self) VXLAN overlay
4240/tcp                 egress          ``master-sg``        health checks
4240/tcp                 egress          ``worker-sg`` (self) health checks
ICMP 8/0                 egress          ``master-sg``        health checks
ICMP 8/0                 egress          ``worker-sg`` (self) health checks
2379-2380/tcp            egress          ``master-sg``        etcd access
======================== =============== ==================== ===============

.. note::

   If you use a shared SG for the masters and workers, you can condense these
   rules into ingress/egress to self. If you are using Direct Routing mode,
   you can condense all rules into ingress/egress ANY port/protocol to/from
   self.

The following ports should also be available on each node:

======================== ====================================================================
Port Range / Protocol    Description
======================== ====================================================================
4240/tcp                 cluster health checks (``cilium-health``)
4244/tcp                 Hubble server
4245/tcp                 Hubble Relay
4250/tcp                 Mutual Authentication port
4251/tcp                 Spire Agent health check port (listening on 127.0.0.1 or ::1)
6060/tcp                 cilium-agent pprof server (listening on 127.0.0.1)
6061/tcp                 cilium-operator pprof server (listening on 127.0.0.1)
6062/tcp                 Hubble Relay pprof server (listening on 127.0.0.1)
9878/tcp                 cilium-envoy health listener (listening on 127.0.0.1)
9879/tcp                 cilium-agent health status API (listening on 127.0.0.1 and/or ::1)
9890/tcp                 cilium-agent gops server (listening on 127.0.0.1)
9891/tcp                 operator gops server (listening on 127.0.0.1)
9893/tcp                 Hubble Relay gops server (listening on 127.0.0.1)
9901/tcp                 cilium-envoy Admin API (listening on 127.0.0.1)
9962/tcp                 cilium-agent Prometheus metrics
9963/tcp                 cilium-operator Prometheus metrics
9964/tcp                 cilium-envoy Prometheus metrics
51871/udp                WireGuard encryption tunnel endpoint
======================== ====================================================================

.. _admin_mount_bpffs:

Mounted eBPF filesystem
=======================

.. note::

   Some distributions mount the bpf filesystem automatically. Check if the
   bpf filesystem is mounted by running the following command:

   .. code-block:: shell-session

       $ mount | grep /sys/fs/bpf
       $ # if present, this should output e.g. "none on /sys/fs/bpf type bpf"...

If the eBPF filesystem is not mounted in the host filesystem, Cilium will
automatically mount it.

Mounting this BPF filesystem allows the ``cilium-agent`` to persist eBPF
resources across restarts of the agent so that the datapath can continue to
operate while the agent is subsequently restarted or upgraded.

Optionally, it is also possible to mount the eBPF filesystem before Cilium is
deployed in the cluster; in that case, the following command must be run in
the host mount namespace. The command must only be run once during the boot
process of the machine.

.. code-block:: shell-session

   # mount bpffs /sys/fs/bpf -t bpf

A portable way to achieve this with persistence is to add the following line
to ``/etc/fstab`` and then run ``mount /sys/fs/bpf``. This will cause the
filesystem to be automatically mounted when the node boots.

::

     bpffs    /sys/fs/bpf    bpf    defaults 0 0

If you are using systemd to manage the kubelet, see the section
:ref:`bpffs_systemd`.

Routing Tables
==============

When running in :ref:`ipam_eni` IPAM mode, Cilium will install per-ENI routing
tables for each ENI that is used by Cilium for pod IP allocation. These
routing tables are added to the host network namespace and must not be
otherwise used by the system. The index of those per-ENI routing tables is
computed as ``10 + <eni-interface-index>``. The base offset of 10 is chosen as
it is highly unlikely to collide with the main routing table, which uses an
index in the 253-255 range.

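For example, to inspect the routes Cilium programmed for the ENI with
interface index 1 (a hypothetical value, which maps to routing table ``11``),
you can list the policy routing rules and dump that table directly:

.. code-block:: shell-session

    $ # Policy routing rules referencing the per-ENI tables
    $ ip rule show
    $ # Routes for the ENI with interface index 1 (table 10 + 1)
    $ ip route show table 11
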
Privileges
==========

The following privileges are required to run Cilium. When running the standard
Kubernetes :term:`DaemonSet`, the privileges are automatically granted to
Cilium.

* Cilium interacts with the Linux kernel to install eBPF programs, which will
  then perform networking tasks and implement security rules. In order to
  install eBPF programs system-wide, ``CAP_SYS_ADMIN`` privileges are
  required. These privileges must be granted to ``cilium-agent``.

  The quickest way to meet the requirement is to run ``cilium-agent`` as root
  and/or as a privileged container.

* Cilium requires access to the host networking namespace. For this purpose,
  the Cilium pod is scheduled to run in the host networking namespace
  directly.
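To confirm that a running agent actually holds ``CAP_SYS_ADMIN``, you can
decode its effective capability set; a minimal sketch, assuming ``capsh`` from
libcap is available on the node (the capability value shown is illustrative):

.. code-block:: shell-session

    $ # Read the effective capability bitmask of the cilium-agent process
    $ grep CapEff /proc/$(pidof cilium-agent)/status
    CapEff: 000001ffffffffff
    $ # Decode the bitmask and check that cap_sys_admin is included
    $ capsh --decode=000001ffffffffff | grep -o cap_sys_admin
    cap_sys_admin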
ref encryption wg 5 6 Full support for ref session affinity 5 7 BPF based proxy redirection 5 7 Socket level LB bypass in pod netns 5 7 L3 devices 5 8 BPF based host routing 5 10 ref enable multicast AMD64 5 10 IPv6 BIG TCP support 5 19 ref enable multicast AArch64 6 0 IPv4 BIG TCP support 6 3 req kvstore Key Value store Cilium optionally uses a distributed Key Value store to manage synchronize and distribute security identities across all cluster nodes The following Key Value stores are currently supported etcd 3 1 0 Cilium can be used without a Key Value store when CRD based state management is used with Kubernetes This is the default for new Cilium installations Larger clusters will perform better with a Key Value store backed identity management instead see ref k8s quick install for more details See ref install kvstore for details on how to configure the cilium agent to use a Key Value store clang LLVM note This requirement is only needed if you run cilium agent natively If you are using the Cilium container image cilium cilium clang LLVM is included in the container image LLVM is the compiler suite that Cilium uses to generate eBPF bytecode programs to be loaded into the Linux kernel The minimum supported version of LLVM available to cilium agent should be 5 0 The version of clang installed must be compiled with the eBPF backend enabled See https releases llvm org for information on how to download and install LLVM firewall requirements Firewall Rules If you are running Cilium in an environment that requires firewall rules to enable connectivity you will have to add the following rules to ensure Cilium works properly It is recommended but optional that all nodes running Cilium in a given cluster must be able to ping each other so cilium health can report and monitor connectivity among nodes This requires ICMP Type 0 8 Code 0 open among all nodes TCP 4240 should also be open among all nodes for cilium health monitoring Note that it is also an option to only use one of these two methods to enable health monitoring If the firewall does not permit either of these methods Cilium will still operate fine but will not be able to provide health information For IPsec enabled Cilium deployments you need to ensure that the firewall allows ESP traffic through For example AWS Security Groups doesn t allow ESP traffic by default If you are using WireGuard you must allow UDP port 51871 If you are using VXLAN overlay network mode Cilium uses Linux s default VXLAN port 8472 over UDP unless Linux has been configured otherwise In this case UDP 8472 must be open among all nodes to enable VXLAN overlay mode The same applies to Geneve overlay network mode except the port is UDP 6081 If you are running in direct routing mode your network must allow routing of pod IPs As an example if you are running on AWS with VXLAN overlay networking here is a minimum set of AWS Security Group SG rules It assumes a separation between the SG on the master nodes master sg and the worker nodes worker sg It also assumes etcd is running on the master nodes Master Nodes master sg Rules Port Range Protocol Ingress Egress Source Destination Description 2379 2380 tcp ingress worker sg etcd access 8472 udp ingress master sg self VXLAN overlay 8472 udp ingress worker sg VXLAN overlay 4240 tcp ingress master sg self health checks 4240 tcp ingress worker sg health checks ICMP 8 0 ingress master sg self health checks ICMP 8 0 ingress worker sg health checks 8472 udp egress master sg self VXLAN overlay 8472 udp egress worker sg VXLAN 
overlay 4240 tcp egress master sg self health checks 4240 tcp egress worker sg health checks ICMP 8 0 egress master sg self health checks ICMP 8 0 egress worker sg health checks Worker Nodes worker sg Port Range Protocol Ingress Egress Source Destination Description 8472 udp ingress master sg VXLAN overlay 8472 udp ingress worker sg self VXLAN overlay 4240 tcp ingress master sg health checks 4240 tcp ingress worker sg self health checks ICMP 8 0 ingress master sg health checks ICMP 8 0 ingress worker sg self health checks 8472 udp egress master sg VXLAN overlay 8472 udp egress worker sg self VXLAN overlay 4240 tcp egress master sg health checks 4240 tcp egress worker sg self health checks ICMP 8 0 egress master sg health checks ICMP 8 0 egress worker sg self health checks 2379 2380 tcp egress master sg etcd access note If you use a shared SG for the masters and workers you can condense these rules into ingress egress to self If you are using Direct Routing mode you can condense all rules into ingress egress ANY port protocol to from self The following ports should also be available on each node Port Range Protocol Description 4240 tcp cluster health checks cilium health 4244 tcp Hubble server 4245 tcp Hubble Relay 4250 tcp Mutual Authentication port 4251 tcp Spire Agent health check port listening on 127 0 0 1 or 1 6060 tcp cilium agent pprof server listening on 127 0 0 1 6061 tcp cilium operator pprof server listening on 127 0 0 1 6062 tcp Hubble Relay pprof server listening on 127 0 0 1 9878 tcp cilium envoy health listener listening on 127 0 0 1 9879 tcp cilium agent health status API listening on 127 0 0 1 and or 1 9890 tcp cilium agent gops server listening on 127 0 0 1 9891 tcp operator gops server listening on 127 0 0 1 9893 tcp Hubble Relay gops server listening on 127 0 0 1 9901 tcp cilium envoy Admin API listening on 127 0 0 1 9962 tcp cilium agent Prometheus metrics 9963 tcp cilium operator Prometheus metrics 9964 tcp cilium envoy Prometheus metrics 51871 udp WireGuard encryption tunnel endpoint admin mount bpffs Mounted eBPF filesystem Note Some distributions mount the bpf filesystem automatically Check if the bpf filesystem is mounted by running the command code block shell session mount grep sys fs bpf if present should output e g none on sys fs bpf type bpf If the eBPF filesystem is not mounted in the host filesystem Cilium will automatically mount the filesystem Mounting this BPF filesystem allows the cilium agent to persist eBPF resources across restarts of the agent so that the datapath can continue to operate while the agent is subsequently restarted or upgraded Optionally it is also possible to mount the eBPF filesystem before Cilium is deployed in the cluster the following command must be run in the host mount namespace The command must only be run once during the boot process of the machine code block shell session mount bpffs sys fs bpf t bpf A portable way to achieve this with persistence is to add the following line to etc fstab and then run mount sys fs bpf This will cause the filesystem to be automatically mounted when the node boots bpffs sys fs bpf bpf defaults 0 0 If you are using systemd to manage the kubelet see the section ref bpffs systemd Routing Tables When running in ref ipam eni IPAM mode Cilium will install per ENI routing tables for each ENI that is used by Cilium for pod IP allocation These routing tables are added to the host network namespace and must not be otherwise used by the system The index of those per ENI routing tables is computed as 10 
eni interface index The base offset of 10 is chosen as it is highly unlikely to collide with the main routing table which is between 253 255 Privileges The following privileges are required to run Cilium When running the standard Kubernetes term DaemonSet the privileges are automatically granted to Cilium Cilium interacts with the Linux kernel to install eBPF program which will then perform networking tasks and implement security rules In order to install eBPF programs system wide CAP SYS ADMIN privileges are required These privileges must be granted to cilium agent The quickest way to meet the requirement is to run cilium agent as root and or as privileged container Cilium requires access to the host networking namespace For this purpose the Cilium pod is scheduled to run in the host networking namespace directly
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _gs_dns: **************************************************** Locking Down External Access with DNS-Based Policies **************************************************** This document serves as an introduction for using Cilium to enforce DNS-based security policies for Kubernetes pods. .. include:: gsg_requirements.rst Deploy the Demo Application =========================== DNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by AWS, Google, Twilio, Stripe, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR or IP-based policies are cumbersome and hard to maintain as the IPs associated with external services can change frequently. The Cilium DNS-based policies provide an easy mechanism to specify access control while Cilium manages the harder aspects of tracking DNS to IP mapping. In this guide we will learn about: - Controlling egress access to services outside the cluster using DNS-based policies - Using patterns (or wildcards) to whitelist a subset of DNS domains - Combining DNS, port and L7 rules for restricting access to external service In line with our Star Wars theme examples, we will use a simple scenario where the Empire's ``mediabot`` pods need access to GitHub for managing the Empire's git repositories. The pods shouldn't have access to any other external service. .. parsed-literal:: $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-sw-app.yaml $ kubectl wait pod/mediabot --for=condition=Ready $ kubectl get pods NAME READY STATUS RESTARTS AGE pod/mediabot 1/1 Running 0 14s Apply DNS Egress Policy ======================= The following Cilium network policy allows ``mediabot`` pods to only access ``api.github.com``. .. tabs:: .. group-tab:: Generic .. literalinclude:: ../../examples/kubernetes-dns/dns-matchname.yaml .. group-tab:: OpenShift .. literalinclude:: ../../examples/kubernetes-dns/dns-matchname-openshift.yaml .. note:: OpenShift users will need to modify the policies to match the namespace ``openshift-dns`` (instead of ``kube-system``), remove the match on the ``k8s:k8s-app=kube-dns`` label, and change the port to 5353. Let's take a closer look at the policy: * The first egress section uses ``toFQDNs: matchName`` specification to allow egress to ``api.github.com``. The destination DNS should match exactly the name specified in the rule. The ``endpointSelector`` allows only pods with labels ``class: mediabot, org:empire`` to have the egress access. * The second egress section (``toEndpoints``) allows ``mediabot`` pods to access ``kube-dns`` service. Note that ``rules: dns`` instructs Cilium to inspect and allow DNS lookups matching specified patterns. In this case, inspect and allow all DNS queries. Note that with this policy the ``mediabot`` doesn't have access to any internal cluster service other than ``kube-dns``. Refer to :ref:`Network Policy` to learn more about policies for controlling access to internal cluster services. Let's apply the policy: .. 
parsed-literal:: kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-matchname.yaml Testing the policy, we see that ``mediabot`` has access to ``api.github.com`` but doesn't have access to any other external service, e.g., ``support.github.com``. .. code-block:: shell-session $ kubectl exec mediabot -- curl -I -s https://api.github.com | head -1 HTTP/2 200 $ kubectl exec mediabot -- curl -I -s --max-time 5 https://support.github.com | head -1 curl: (28) Connection timed out after 5000 milliseconds command terminated with exit code 28 DNS Policies Using Patterns =========================== The above policy controlled DNS access based on exact match of the DNS domain name. Often, it is required to allow access to a subset of domains. Let's say, in the above example, ``mediabot`` pods need access to any GitHub sub-domain, e.g., the pattern ``*.github.com``. We can achieve this easily by changing the ``toFQDN`` rule to use ``matchPattern`` instead of ``matchName``. .. literalinclude:: ../../examples/kubernetes-dns/dns-pattern.yaml .. parsed-literal:: kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-pattern.yaml Test that ``mediabot`` has access to multiple GitHub services for which the DNS matches the pattern ``*.github.com``. It is important to note and test that this doesn't allow access to ``github.com`` because the ``*.`` in the pattern requires one subdomain to be present in the DNS name. You can simply add more ``matchName`` and ``matchPattern`` clauses to extend the access. (See :ref:`DNS based` policies to learn more about specifying DNS rules using patterns and names.) .. code-block:: shell-session $ kubectl exec mediabot -- curl -I -s https://support.github.com | head -1 HTTP/1.1 200 OK $ kubectl exec mediabot -- curl -I -s https://gist.github.com | head -1 HTTP/1.1 302 Found $ kubectl exec mediabot -- curl -I -s --max-time 5 https://github.com | head -1 curl: (28) Connection timed out after 5000 milliseconds command terminated with exit code 28 Combining DNS, Port and L7 Rules ================================ The DNS-based policies can be combined with port (L4) and API (L7) rules to further restrict the access. In our example, we will restrict ``mediabot`` pods to access GitHub services only on ports ``443``. The ``toPorts`` section in the policy below achieves the port-based restrictions along with the DNS-based policies. .. literalinclude:: ../../examples/kubernetes-dns/dns-port.yaml .. parsed-literal:: kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-port.yaml Testing, the access to ``https://support.github.com`` on port ``443`` will succeed but the access to ``http://support.github.com`` on port ``80`` will be denied. .. code-block:: shell-session $ kubectl exec mediabot -- curl -I -s https://support.github.com | head -1 HTTP/1.1 200 OK $ kubectl exec mediabot -- curl -I -s --max-time 5 http://support.github.com | head -1 curl: (28) Connection timed out after 5001 milliseconds command terminated with exit code 28 Refer to :ref:`l4_policy` and :ref:`l7_policy` to learn more about Cilium L4 and L7 network policies. Clean-up ======== .. parsed-literal:: kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-sw-app.yaml kubectl delete cnp fqdn
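As a recap, the ``matchName``, ``matchPattern``, port, and DNS-visibility rules introduced in this guide all compose into a single ``CiliumNetworkPolicy``. The sketch below is illustrative only; the policy name is hypothetical, and the shipped manifests under ``examples/kubernetes-dns/`` remain the authoritative versions:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: fqdn-recap
   spec:
     endpointSelector:
       matchLabels:
         org: empire
         class: mediabot
     egress:
       # Allow DNS lookups to kube-dns and let Cilium inspect them;
       # this is what backs the toFQDNs rules below.
       - toEndpoints:
           - matchLabels:
               "k8s:io.kubernetes.pod.namespace": kube-system
               "k8s:k8s-app": kube-dns
         toPorts:
           - ports:
               - port: "53"
                 protocol: ANY
             rules:
               dns:
                 - matchPattern: "*"
       # Allow HTTPS only, and only to api.github.com or a *.github.com subdomain.
       - toFQDNs:
           - matchName: "api.github.com"
           - matchPattern: "*.github.com"
         toPorts:
           - ports:
               - port: "443"
                 protocol: TCP

Keeping the DNS-visibility rule (``rules: dns``) alongside the ``toFQDNs`` rules is required, because Cilium learns the FQDN-to-IP mapping from the DNS responses it proxies.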
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _host_firewall: ************* Host Firewall ************* This document serves as an introduction to Cilium's host firewall, to enforce security policies for Kubernetes nodes. .. admonition:: Video :class: attention You can also watch a video of Cilium's host firewall in action on `eCHO Episode 40: Cilium Host Firewall <https://www.youtube.com/watch?v=GLLLcz398K0&t=288s>`__. Enable the Host Firewall in Cilium ================================== .. include:: /installation/k8s-install-download-release.rst Deploy Cilium release via Helm: .. parsed-literal:: helm install cilium |CHART_RELEASE| \\ --namespace kube-system \\ --set hostFirewall.enabled=true \\ --set devices='{ethX,ethY}' The ``devices`` flag refers to the network devices Cilium is configured on, such as ``eth0``. If you omit this option, Cilium auto-detects what interfaces the host firewall applies to. The resulting interfaces are shown in the output of the ``cilium-dbg status`` command: .. code-block:: shell-session $ kubectl exec -n kube-system ds/cilium -- \ cilium-dbg status | grep 'Host firewall' At this point, the Cilium-managed nodes are ready to enforce network policies. Attach a Label to the Node ========================== In this guide, host policies only apply to nodes with the label ``node-access=ssh``. Therefore, you first need to attach this label to a node in the cluster: .. code-block:: shell-session $ export NODE_NAME=k8s1 $ kubectl label node $NODE_NAME node-access=ssh node/k8s1 labeled Enable Policy Audit Mode for the Host Endpoint ============================================== `HostPolicies` enforce access control over connectivity to and from nodes. Particular care must be taken to ensure that when host policies are imported, Cilium does not block access to the nodes or break the cluster's normal behavior (for example by blocking communication with ``kube-apiserver``). To avoid such issues, switch the host firewall in audit mode and validate the impact of host policies before enforcing them. .. warning:: When Policy Audit Mode is enabled, no network policy is enforced so this setting is not recommended for production deployment. Enable and check status for the Policy Audit Mode on the host endpoint for a given node with the following commands: .. code-block:: shell-session $ CILIUM_NAMESPACE=kube-system $ CILIUM_POD_NAME=$(kubectl -n $CILIUM_NAMESPACE get pods -l "k8s-app=cilium" -o jsonpath="{.items[?(@.spec.nodeName=='$NODE_NAME')].metadata.name}") $ alias kexec="kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME --" $ HOST_EP_ID=$(kexec cilium-dbg endpoint list -o jsonpath='{[?(@.status.identity.id==1)].id}') $ kexec cilium-dbg status | grep 'Host firewall' Host firewall: Enabled [eth0] $ kexec cilium-dbg endpoint config $HOST_EP_ID PolicyAuditMode=Enabled Endpoint 3353 configuration updated successfully $ kexec cilium-dbg endpoint config $HOST_EP_ID | grep PolicyAuditMode PolicyAuditMode : Enabled Apply a Host Network Policy =========================== :ref:`HostPolicies` match on node labels using a :ref:`NodeSelector` to identify the nodes to which the policies applies. They apply only to the host namespace, including host-networking pods. They don't apply to communications between pods or between pods and the outside of the cluster, except if those pods are host-networking pods. 
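Concretely, host policies are expressed as ``CiliumClusterwideNetworkPolicy`` objects that use a ``nodeSelector`` instead of the usual ``endpointSelector``. The minimal skeleton below is only a sketch (the rule shown is a placeholder, not the policy used in this guide); the full demo policy follows.

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumClusterwideNetworkPolicy
   metadata:
     name: host-policy-skeleton
   spec:
     # Select nodes rather than pods: the rules apply to the host
     # endpoint of every node carrying this label.
     nodeSelector:
       matchLabels:
         node-access: ssh
     ingress:
       # Placeholder rule: allow SSH from anywhere. Once audit mode is
       # disabled, ingress traffic not matched by any rule is dropped.
       - fromEntities:
           - all
         toPorts:
           - ports:
               - port: "22"
                 protocol: TCP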
The following policy applies to all nodes with the ``node-access=ssh`` label. It allows communications from outside the cluster only for TCP/22 and for ICMP (ping) echo requests. All communications from the cluster to the hosts are allowed. .. literalinclude:: ../../examples/policies/host/demo-host-policy.yaml To apply this policy, run: .. parsed-literal:: $ kubectl create -f \ |SCM_WEB|\/examples/policies/host/demo-host-policy.yaml ciliumclusterwidenetworkpolicy.cilium.io/demo-host-policy created The host is represented as a special endpoint, with label ``reserved:host``, in the output of command ``cilium-dbg endpoint list``. Use this command to inspect the status of host policies: .. code-block:: shell-session $ kexec cilium-dbg endpoint list ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 266 Disabled Disabled 104 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:ef4e 10.16.172.63 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1687 Disabled (Audit) Disabled 1 k8s:node-access=ssh ready reserved:host 3362 Disabled Disabled 4 reserved:health f00d::a0b:0:0:49cf 10.16.87.66 ready In this example, one can observe that policy enforcement on the host endpoint is in audit mode for ingress traffic, and disabled for egress traffic. Adjust the Host Policy to Your Environment ========================================== As long as the host endpoint runs in audit mode, communications disallowed by the policy are not dropped. Nevertheless, they are reported by ``cilium-dbg monitor``, as ``action audit``. With these reports, the audit mode allows you to adjust the host policy to your environment in order to avoid unexpected connection breakages. .. code-block:: shell-session $ kexec cilium-dbg monitor -t policy-verdict --related-to $HOST_EP_ID Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.60.12 -> 192.168.60.11 EchoRequest Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.60.12:37278 -> 192.168.60.11:2379 tcp SYN Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action audit, match none, 10.0.2.2:47500 -> 10.0.2.15:6443 tcp SYN For details on deriving the network policies from the output of ``cilium monitor``, refer to `observe_policy_verdicts` and `create_network_policy` in the `policy_verdicts` guide. Note that `Entities based` rules are convenient when combined with host policies, for example to allow communication to entire classes of destinations, such as all remotes nodes (``remote-node``) or the entire cluster (``cluster``). .. warning:: Make sure that none of the communications required to access the cluster or for the cluster to work properly are denied. Ensure they all appear as ``action allow`` before disabling the audit mode. .. _disable_policy_audit_mode: Disable Policy Audit Mode ========================= Once you are confident all required communications to the host from outside the cluster are allowed, disable the policy audit mode to enforce the host policy: .. code-block:: shell-session $ kexec cilium-dbg endpoint config $HOST_EP_ID PolicyAuditMode=Disabled Endpoint 3353 configuration updated successfully Ingress host policies should now appear as enforced: .. 
code-block:: shell-session $ kexec cilium-dbg endpoint list ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 266 Disabled Disabled 104 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:ef4e 10.16.172.63 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1687 Enabled Disabled 1 k8s:node-access=ssh ready reserved:host 3362 Disabled Disabled 4 reserved:health f00d::a0b:0:0:49cf 10.16.87.66 ready Communications that are not explicitly allowed by the host policy are now dropped: .. code-block:: shell-session $ kexec cilium-dbg monitor -t policy-verdict --related-to $HOST_EP_ID Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action deny, match none, 10.0.2.2:49038 -> 10.0.2.15:21 tcp SYN Clean up ======== .. code-block:: shell-session $ kubectl delete ccnp demo-host-policy $ kubectl label node $NODE_NAME node-access- Further Reading =============== Read the documentation on :ref:`HostPolicies` for additional details on how to use the policies. In particular, refer to the :ref:`Troubleshooting Host Policies <troubleshooting_host_policies>` subsection to understand how to debug issues with Host Policies, or to the section on :ref:`Host Policies known issues <host_policies_known_issues>` to understand the current limitations of the feature.
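To illustrate the entities-based rules mentioned in the audit-mode walkthrough above, the following sketch (a hypothetical policy, not one of the shipped examples) keeps all intra-cluster traffic to the labelled hosts allowed, while traffic from outside the cluster remains subject to other, more specific rules:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumClusterwideNetworkPolicy
   metadata:
     name: host-allow-from-cluster
   spec:
     nodeSelector:
       matchLabels:
         node-access: ssh
     ingress:
       # The "cluster" entity matches every Cilium-managed endpoint and
       # node in the cluster, so cluster-internal traffic keeps flowing
       # while external traffic still has to be allowed explicitly.
       - fromEntities:
           - cluster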
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io ********************** Securing Elasticsearch ********************** This document serves as an introduction for using Cilium to enforce Elasticsearch-aware security policies. It is a detailed walk-through of getting a single-node Cilium environment running on your machine. It is designed to take 15-30 minutes. .. include:: gsg_requirements.rst Deploy the Demo Application =========================== Following the Cilium tradition, we will use a Star Wars-inspired example. The Empire has a large scale Elasticsearch cluster which is used for storing a variety of data including: * ``index: troop_logs``: Stormtroopers performance logs collected from every outpost which are used to identify and eliminate weak performers! * ``index: spaceship_diagnostics``: Spaceships diagnostics data collected from every spaceship which is used for R&D and improvement of the spaceships. Every outpost has an Elasticsearch client service to upload the Stormtroopers logs. And every spaceship has a service to upload diagnostics. Similarly, the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data. Before we look into the security concerns, let's first create this application scenario in minikube. Deploy the app using command below, which will create * An ``elasticsearch`` service with the selector label ``component:elasticsearch`` and a pod running Elasticsearch. * Three Elasticsearch clients one each for ``empire-hq``, ``outpost`` and ``spaceship``. .. parsed-literal:: $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-app.yaml serviceaccount "elasticsearch" created service "elasticsearch" created replicationcontroller "es" created role "elasticsearch" created rolebinding "elasticsearch" created pod "outpost" created pod "empire-hq" created pod "spaceship" created .. code-block:: shell-session $ kubectl get svc,pods NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/elasticsearch NodePort 10.111.238.254 <none> 9200:30130/TCP,9300:31721/TCP 2d svc/etcd-cilium NodePort 10.98.67.60 <none> 32379:31079/TCP,32380:31080/TCP 9d svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d NAME READY STATUS RESTARTS AGE po/empire-hq 1/1 Running 0 2d po/es-g9qk2 1/1 Running 0 2d po/etcd-cilium-0 1/1 Running 0 9d po/outpost 1/1 Running 0 2d po/spaceship 1/1 Running 0 2d Security Risks for Elasticsearch Access ======================================= For Elasticsearch clusters the **least privilege security** challenge is to give clients access only to particular indices, and to limit the operations each client is allowed to perform on each index. In this example, the ``outpost`` Elasticsearch clients only need access to upload troop logs; and the ``empire-hq`` client only needs search access to both the indices. From the security perspective, the outposts are weak spots and susceptible to be captured by the rebels. Once compromised, the clients can be used to search and manipulate the critical data in Elasticsearch. We can simulate this attack, but first let's run the commands for legitimate behavior for all the client services. ``outpost`` client uploading troop logs .. 
code-block:: shell-session $ kubectl exec outpost -- python upload_logs.py Uploading Stormtroopers Performance Logs created : {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True} ``spaceship`` uploading diagnostics .. code-block:: shell-session $ kubectl exec spaceship -- python upload_diagnostics.py Uploading Spaceship Diagnostics created : {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True} ``empire-hq`` running search queries for logs and diagnostics .. code-block:: shell-session $ kubectl exec empire-hq -- python search.py Searching for Spaceship Diagnostics Got 1 Hits: {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \ '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \ 'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}} Searching for Stormtroopers Performance Logs Got 1 Hits: {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \ '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \ 'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}} Now imagine an outpost captured by the rebels. In the commands below, the rebels first search all the indices and then manipulate the diagnostics data from a compromised outpost. .. code-block:: shell-session $ kubectl exec outpost -- python search.py Searching for Spaceship Diagnostics Got 1 Hits: {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \ '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \ 'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}} Searching for Stormtroopers Performance Logs Got 1 Hits: {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \ '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \ 'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}} Rebels manipulate spaceship diagnostics data so that the spaceship defects are not known to the empire-hq! (Hint: Rebels have changed the ``stats`` for the tiefighter spaceship, a change hard to detect but with adverse impact!) .. code-block:: shell-session $ kubectl exec outpost -- python update.py Uploading Spaceship Diagnostics {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \ '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \ 'stats': '[OK] [ENGINE OK @SPEED 5000 km/s]'}} Securing Elasticsearch Using Cilium ==================================== .. image:: images/cilium_es_gsg_topology.png :scale: 40 % Following the least privilege security principle, we want to the allow the following legitimate actions and nothing more: * ``outpost`` service only has upload access to ``index: troop_logs`` * ``spaceship`` service only has upload access to ``index: spaceship_diagnostics`` * ``empire-hq`` service only has search access for both the indices Fortunately, the Empire DevOps team is using Cilium for their Kubernetes cluster. Cilium provides L7 visibility and security policies to control Elasticsearch API access. Cilium follows the **white-list, least privilege model** for security. 
That is to say, a *CiliumNetworkPolicy* contains a list of rules that define **allowed requests** and any request that does not match the rules is denied. In this example, the policy rules are defined for inbound traffic (i.e., "ingress") connections to the *elasticsearch* service. Note that endpoints selected as backend pods for the service are defined by the *selector* labels. *Selector* labels use the same concept as Kubernetes to define a service. In this example, label ``component: elasticsearch`` defines the pods that are part of the *elasticsearch* service in Kubernetes. In the policy file below, you will see the following rules for controlling the indices access and actions performed: * ``fromEndpoints`` with labels ``app:spaceship`` only ``HTTP`` ``PUT`` is allowed on paths matching regex ``^/spaceship_diagnostics/stats/.*$`` * ``fromEndpoints`` with labels ``app:outpost`` only ``HTTP`` ``PUT`` is allowed on paths matching regex ``^/troop_logs/log/.*$`` * ``fromEndpoints`` with labels ``app:empire`` only ``HTTP`` ``GET`` is allowed on paths matching regex ``^/spaceship_diagnostics/_search/??.*$`` and ``^/troop_logs/search/??.*$`` .. literalinclude:: ../../examples/kubernetes-es/es-sw-policy.yaml Apply this Elasticsearch-aware network security policy using ``kubectl``: .. parsed-literal:: $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-policy.yaml ciliumnetworkpolicy "secure-empire-elasticsearch" created Let's test the security policies. Firstly, the search access is blocked for both outpost and spaceship. So from a compromised outpost, rebels will not be able to search and obtain knowledge about troops and spaceship diagnostics. Secondly, the outpost clients don't have access to create or update the ``index: spaceship_diagnostics``. .. code-block:: shell-session $ kubectl exec outpost -- python search.py GET http://elasticsearch:9200/spaceship_diagnostics/_search [status:403 request:0.008s] ... ... elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n') command terminated with exit code 1 .. code-block:: shell-session $ kubectl exec outpost -- python update.py PUT http://elasticsearch:9200/spaceship_diagnostics/stats/1 [status:403 request:0.006s] ... ... elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n') command terminated with exit code 1 We can re-run any of the below commands to show that the security policy still allows all legitimate requests (i.e., no 403 errors are returned). .. code-block:: shell-session $ kubectl exec outpost -- python upload_logs.py ... $ kubectl exec spaceship -- python upload_diagnostics.py ... $ kubectl exec empire-hq -- python search.py ... Clean Up ======== You have now installed Cilium, deployed a demo app, and finally deployed & tested Elasticsearch-aware network security policies. To clean up, run: .. parsed-literal:: $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-app.yaml $ kubectl delete cnp secure-empire-elasticsearch
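As a reference for the L7 rules applied in this guide, the policy described above takes roughly the following shape. This sketch shows only the ``outpost`` rule with a hypothetical name; the shipped ``es-sw-policy.yaml`` is the authoritative manifest:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: secure-empire-elasticsearch-sketch
   spec:
     endpointSelector:
       matchLabels:
         component: elasticsearch
     ingress:
       # Outposts may only upload troop logs: HTTP PUT on the troop_logs index.
       - fromEndpoints:
           - matchLabels:
               app: outpost
         toPorts:
           - ports:
               - port: "9200"
                 protocol: TCP
             rules:
               http:
                 - method: "PUT"
                   path: "^/troop_logs/log/.*$"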
cilium
only not epub or latex or html WARNING You are looking at unreleased Cilium documentation Please use the official rendered version released here https docs cilium io Securing Elasticsearch This document serves as an introduction for using Cilium to enforce Elasticsearch aware security policies It is a detailed walk through of getting a single node Cilium environment running on your machine It is designed to take 15 30 minutes include gsg requirements rst Deploy the Demo Application Following the Cilium tradition we will use a Star Wars inspired example The Empire has a large scale Elasticsearch cluster which is used for storing a variety of data including index troop logs Stormtroopers performance logs collected from every outpost which are used to identify and eliminate weak performers index spaceship diagnostics Spaceships diagnostics data collected from every spaceship which is used for R D and improvement of the spaceships Every outpost has an Elasticsearch client service to upload the Stormtroopers logs And every spaceship has a service to upload diagnostics Similarly the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data Before we look into the security concerns let s first create this application scenario in minikube Deploy the app using command below which will create An elasticsearch service with the selector label component elasticsearch and a pod running Elasticsearch Three Elasticsearch clients one each for empire hq outpost and spaceship parsed literal kubectl create f SCM WEB examples kubernetes es es sw app yaml serviceaccount elasticsearch created service elasticsearch created replicationcontroller es created role elasticsearch created rolebinding elasticsearch created pod outpost created pod empire hq created pod spaceship created code block shell session kubectl get svc pods NAME TYPE CLUSTER IP EXTERNAL IP PORT S AGE svc elasticsearch NodePort 10 111 238 254 none 9200 30130 TCP 9300 31721 TCP 2d svc etcd cilium NodePort 10 98 67 60 none 32379 31079 TCP 32380 31080 TCP 9d svc kubernetes ClusterIP 10 96 0 1 none 443 TCP 9d NAME READY STATUS RESTARTS AGE po empire hq 1 1 Running 0 2d po es g9qk2 1 1 Running 0 2d po etcd cilium 0 1 1 Running 0 9d po outpost 1 1 Running 0 2d po spaceship 1 1 Running 0 2d Security Risks for Elasticsearch Access For Elasticsearch clusters the least privilege security challenge is to give clients access only to particular indices and to limit the operations each client is allowed to perform on each index In this example the outpost Elasticsearch clients only need access to upload troop logs and the empire hq client only needs search access to both the indices From the security perspective the outposts are weak spots and susceptible to be captured by the rebels Once compromised the clients can be used to search and manipulate the critical data in Elasticsearch We can simulate this attack but first let s run the commands for legitimate behavior for all the client services outpost client uploading troop logs code block shell session kubectl exec outpost python upload logs py Uploading Stormtroopers Performance Logs created index troop logs type log id 1 version 1 result created shards total 2 successful 1 failed 0 created True spaceship uploading diagnostics code block shell session kubectl exec spaceship python upload diagnostics py Uploading Spaceship Diagnostics created index spaceship diagnostics type stats id 1 version 1 result created shards total 2 successful 1 failed 0 created True empire 
hq running search queries for logs and diagnostics code block shell session kubectl exec empire hq python search py Searching for Spaceship Diagnostics Got 1 Hits index spaceship diagnostics type stats id 1 score 1 0 source spaceshipid 3459B78XNZTF type tiefighter title Engine Diagnostics stats CRITICAL ENGINE BURN SPEED 5000 km s CHANCE 80 Searching for Stormtroopers Performance Logs Got 1 Hits index troop logs type log id 1 score 1 0 source outpost Endor datetime 33 ABY 4AM DST title Endor Corps 1 Morning Drill notes 5100 PRESENT 15 ABSENT 130 CODE RED BELOW PAR PERFORMANCE Now imagine an outpost captured by the rebels In the commands below the rebels first search all the indices and then manipulate the diagnostics data from a compromised outpost code block shell session kubectl exec outpost python search py Searching for Spaceship Diagnostics Got 1 Hits index spaceship diagnostics type stats id 1 score 1 0 source spaceshipid 3459B78XNZTF type tiefighter title Engine Diagnostics stats CRITICAL ENGINE BURN SPEED 5000 km s CHANCE 80 Searching for Stormtroopers Performance Logs Got 1 Hits index troop logs type log id 1 score 1 0 source outpost Endor datetime 33 ABY 4AM DST title Endor Corps 1 Morning Drill notes 5100 PRESENT 15 ABSENT 130 CODE RED BELOW PAR PERFORMANCE Rebels manipulate spaceship diagnostics data so that the spaceship defects are not known to the empire hq Hint Rebels have changed the stats for the tiefighter spaceship a change hard to detect but with adverse impact code block shell session kubectl exec outpost python update py Uploading Spaceship Diagnostics index spaceship diagnostics type stats id 1 score 1 0 source spaceshipid 3459B78XNZTF type tiefighter title Engine Diagnostics stats OK ENGINE OK SPEED 5000 km s Securing Elasticsearch Using Cilium image images cilium es gsg topology png scale 40 Following the least privilege security principle we want to the allow the following legitimate actions and nothing more outpost service only has upload access to index troop logs spaceship service only has upload access to index spaceship diagnostics empire hq service only has search access for both the indices Fortunately the Empire DevOps team is using Cilium for their Kubernetes cluster Cilium provides L7 visibility and security policies to control Elasticsearch API access Cilium follows the white list least privilege model for security That is to say a CiliumNetworkPolicy contains a list of rules that define allowed requests and any request that does not match the rules is denied In this example the policy rules are defined for inbound traffic i e ingress connections to the elasticsearch service Note that endpoints selected as backend pods for the service are defined by the selector labels Selector labels use the same concept as Kubernetes to define a service In this example label component elasticsearch defines the pods that are part of the elasticsearch service in Kubernetes In the policy file below you will see the following rules for controlling the indices access and actions performed fromEndpoints with labels app spaceship only HTTP PUT is allowed on paths matching regex spaceship diagnostics stats fromEndpoints with labels app outpost only HTTP PUT is allowed on paths matching regex troop logs log fromEndpoints with labels app empire only HTTP GET is allowed on paths matching regex spaceship diagnostics search and troop logs search literalinclude examples kubernetes es es sw policy yaml Apply this Elasticsearch aware network security policy using kubectl parsed 
.. literalinclude:: ../../examples/kubernetes-es/es-sw-policy.yaml

Apply this Elasticsearch-aware network security policy using ``kubectl``:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-policy.yaml
    ciliumnetworkpolicy "secure-empire-elasticsearch" created

Let's test the security policies. Firstly, the search access is blocked for
both *outpost* and *spaceship*, so from a compromised outpost the rebels will
not be able to search and obtain knowledge about troops and spaceship
diagnostics. Secondly, the *outpost* clients don't have access to create or
update ``index: spaceship_diagnostics``.

.. code-block:: shell-session

    $ kubectl exec outpost -- python search.py
    GET http://elasticsearch:9200/spaceship_diagnostics/_search [status:403 request:0.008s]
    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
    command terminated with exit code 1

.. code-block:: shell-session

    $ kubectl exec outpost -- python update.py
    PUT http://elasticsearch:9200/spaceship_diagnostics/stats/1 [status:403 request:0.006s]
    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
    command terminated with exit code 1

We can re-run any of the below commands to show that the security policy still
allows all legitimate requests (i.e., no 403 errors are returned):

.. code-block:: shell-session

    $ kubectl exec outpost -- python upload_logs.py
    $ kubectl exec spaceship -- python upload_diagnostics.py
    $ kubectl exec empire-hq -- python search.py

Clean Up
========

You have now installed Cilium, deployed a demo app, and finally deployed and
tested Elasticsearch-aware network security policies. To clean up, run:

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-app.yaml
   $ kubectl delete cnp secure-empire-elasticsearch
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io ************* Securing gRPC ************* This document serves as an introduction to using Cilium to enforce gRPC-aware security policies. It is a detailed walk-through of getting a single-node Cilium environment running on your machine. It is designed to take 15-30 minutes. .. include:: gsg_requirements.rst It is important for this demo that ``kube-dns`` is working correctly. To know the status of ``kube-dns`` you can run the following command: .. code-block:: shell-session $ kubectl get deployment kube-dns -n kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kube-dns 1 1 1 1 13h Where at least one pod should be available. Deploy the Demo Application =========================== Now that we have Cilium deployed and ``kube-dns`` operating correctly we can deploy our demo gRPC application. Since our first demo of Cilium + HTTP-aware security policies was Star Wars-themed, we decided to do the same for gRPC. While the `HTTP-aware Cilium Star Wars demo <https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_ showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the Rebel Alliance, this gRPC demo shows how the lack of gRPC-aware security policies allowed Leia, Chewbacca, Lando, C-3PO, and R2-D2 to escape from Cloud City, which had been overtaken by empire forces. `gRPC <https://grpc.io/>`_ is a high-performance RPC framework built on top of the `protobuf <https://developers.google.com/protocol-buffers/>`_ serialization/deserialization library popularized by Google. There are gRPC bindings for many programming languages, and the efficiency of the protobuf parsing as well as advantages from leveraging HTTP 2 as a transport make it a popular RPC framework for those building new microservices from scratch. For those unfamiliar with the details of the movie, Leia and the other rebels are fleeing storm troopers and trying to reach the space port platform where the Millennium Falcon is parked, so they can fly out of Cloud City. However, the door to the platform is closed, and the access code has been changed. However, R2-D2 is able to access the Cloud City computer system via a public terminal, and disable this security, opening the door and letting the Rebels reach the Millennium Falcon just in time to escape. .. image:: images/cilium_grpc_gsg_r2d2_terminal.png In our example, Cloud City's internal computer system is built as a set of gRPC-based microservices (who knew that gRPC was actually invented a long time ago, in a galaxy far, far away?). With gRPC, each service is defined using a language independent protocol buffer definition. Here is the definition for the system used to manage doors within Cloud City: .. code-block:: java package cloudcity; // The door manager service definition. service DoorManager { // Get human readable name of door. rpc GetName(DoorRequest) returns (DoorNameReply) {} // Find the location of this door. 
      rpc GetLocation (DoorRequest) returns (DoorLocationReply) {}

      // Find out whether door is open or closed
      rpc GetStatus(DoorRequest) returns (DoorStatusReply) {}

      // Request maintenance on the door
      rpc RequestMaintenance(DoorMaintRequest) returns (DoorActionReply) {}

      // Set Access Code to Open / Lock the door
      rpc SetAccessCode(DoorAccessCodeRequest) returns (DoorActionReply) {}
    }

To keep the setup small, we will just launch two pods to represent this setup:

- **cc-door-mgr**: A single pod running the gRPC door manager service with label ``app=cc-door-mgr``.
- **terminal-87**: One of the public network access terminals scattered across Cloud City. R2-D2 plugs into terminal-87 as the rebels are desperately trying to escape. This terminal uses the gRPC client code to communicate with the door management services with label ``app=public-terminal``.

.. image:: images/cilium_grpc_gsg_topology.png

The file ``cc-door-app.yaml`` contains a Kubernetes Deployment for the door manager service, a Kubernetes Pod representing ``terminal-87``, and a Kubernetes Service for the door manager services. To deploy this example app, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-app.yaml
    deployment "cc-door-mgr" created
    service "cc-door-server" created
    pod "terminal-87" created

Kubernetes will deploy the pods and service in the background. Running ``kubectl get svc,pods`` will inform you about the progress of the operation. Each pod will go through several states until it reaches ``Running`` at which point the setup is ready.

.. code-block:: shell-session

    $ kubectl get pods,svc
    NAME                              READY     STATUS    RESTARTS   AGE
    po/cc-door-mgr-3590146619-cv4jn   1/1       Running   0          1m
    po/terminal-87                    1/1       Running   0          1m

    NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
    svc/cc-door-server   10.0.0.72    <none>        50051/TCP   1m
    svc/kubernetes       10.0.0.1     <none>        443/TCP     6m

Test Access Between gRPC Client and Server
==========================================

First, let's confirm that the public terminal can properly act as a client to the door service. We can test this by running a Python gRPC client for the door service that exists in the *terminal-87* container. We'll invoke the 'cc_door_client' with the name of the gRPC method to call, and any parameters (in this case, the door-id):

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetName 1
    Door name is: Spaceport Door #1

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656

Exposing this information to public terminals seems quite useful, as it helps travelers new to Cloud City identify and locate different doors. But recall that the door service also exposes several other methods, including ``SetAccessCode``. If access to the door manager service is protected only using traditional IP and port-based firewalling, the TCP port of the service (50051 in this example) will be wide open to allow legitimate calls like ``GetName`` and ``GetLocation``, which leaves more sensitive calls like ``SetAccessCode`` exposed as well. It is this mismatch between the coarse granularity of traditional firewalls and the fine-grained nature of gRPC calls that R2-D2 exploited to override the security and help the rebels escape.

To see this, run:
.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999
    Successfully set AccessCode to 999

Securing Access to a gRPC Service with Cilium
=============================================

Once the legitimate owners of Cloud City recover the city from the empire, how can they use Cilium to plug this key security hole and block requests to ``SetAccessCode`` and ``GetStatus`` while still allowing ``GetName``, ``GetLocation``, and ``RequestMaintenance``?

.. image:: images/cilium_grpc_gsg_policy.png

Since gRPC builds on top of HTTP, this can be achieved easily by understanding how a gRPC call is mapped to an HTTP URL, and then applying a Cilium HTTP-aware filter to allow public terminals to invoke only a subset of the gRPC methods available on the door service.

Each gRPC method is mapped to an HTTP POST call to a URL of the form ``/cloudcity.DoorManager/<method-name>``. As a result, the following *CiliumNetworkPolicy* rule limits access of pods with label ``app=public-terminal`` to only invoke ``GetName``, ``GetLocation``, and ``RequestMaintenance`` on the door service, identified by label ``app=cc-door-mgr``:

.. literalinclude:: ../../examples/kubernetes-grpc/cc-door-ingress-security.yaml
    :language: yaml
    :emphasize-lines: 9,13,21

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning that requests that do not match any rules (e.g., ``SetAccessCode``) are denied as invalid.

The above rule applies to inbound (i.e., "ingress") connections to ``cc-door-mgr`` pods (as indicated by ``app: cc-door-mgr`` in the "endpointSelector" section). The rule will apply to connections from pods with label ``app: public-terminal`` as indicated by the "fromEndpoints" section. The rule explicitly matches gRPC connections destined to TCP 50051, and specifically white-lists the permitted URLs.

Apply this gRPC-aware network security policy using ``kubectl`` in the main window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-ingress-security.yaml

After this security policy is in place, access to the innocuous calls like ``GetLocation`` still works as intended:

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656

However, if we then again try to invoke ``SetAccessCode``, it is denied:

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999
    Traceback (most recent call last):
      File "/cloudcity/cc_door_client.py", line 71, in <module>
        run()
      File "/cloudcity/cc_door_client.py", line 53, in run
        door_id=int(arg2), access_code=int(arg3)))
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 492, in __call__
        return _end_unary_response_blocking(state, call, False, deadline)
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 440, in _end_unary_response_blocking
        raise _Rendezvous(state, None, None, deadline)
    grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.CANCELLED, Received http2 header with status: 403)>

This is now blocked, thanks to the Cilium network policy.
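The denied call maps to the URL ``/cloudcity.DoorManager/SetAccessCode``, which matches none of the allowed paths. Based on the method-to-URL mapping described above, each permitted method appears in the policy as an HTTP rule of roughly the following shape (a sketch only; the included ``cc-door-ingress-security.yaml`` is the authoritative policy):

.. code-block:: yaml

    rules:
      http:
      # Each allowed gRPC method is a POST to /cloudcity.DoorManager/<method-name>.
      - method: "POST"
        path: "/cloudcity.DoorManager/GetName"
      - method: "POST"
        path: "/cloudcity.DoorManager/GetLocation"
      - method: "POST"
        path: "/cloudcity.DoorManager/RequestMaintenance"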
Notice that unlike a traditional firewall, which would just drop packets in a way indistinguishable from a network failure, Cilium operates at the API layer and can explicitly reply with a custom HTTP 403 error, indicating that the request was intentionally denied for security reasons.

Thank goodness that the empire IT staff hadn't had time to deploy Cilium on Cloud City's internal network prior to the escape attempt, or things might have turned out quite differently for Leia and the other Rebels!

Clean-Up
========

You have now installed Cilium, deployed a demo app, and tested L7 gRPC-aware network security policies. To clean-up, run:

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-app.yaml
   $ kubectl delete cnp rule1

After this, you can re-run the tutorial from Step 1.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_tls_inspection:

************************************************
Inspecting TLS Encrypted Connections with Cilium
************************************************

This document serves as an introduction to how network security teams can use Cilium to transparently inspect TLS-encrypted connections. This TLS-aware inspection allows Cilium API-aware visibility and policy to function even for connections where client to server communication is protected by TLS, such as when a client accesses the API service via HTTPS. This capability is similar to what is possible with traditional hardware firewalls, but is implemented entirely in software on the Kubernetes worker node, and is policy-driven, allowing inspection to target only selected network connectivity. This type of visibility is extremely valuable for monitoring how external API services are being used, for example, understanding which S3 buckets are being accessed by a given application.

.. include:: gsg_requirements.rst

To ensure that the Cilium agent has the correct permissions to perform TLS interception, please set the following values in your Helm chart settings:

.. code-block:: YAML

    tls:
      secretsBackend: k8s
      secretSync:
        enabled: true

This configures Cilium so that the Cilium Operator will synchronize any secrets referenced in CiliumNetworkPolicy (or CiliumClusterwideNetworkPolicy) to a ``cilium-secrets`` namespace, and grant the Cilium agent read access to Secrets for that namespace only.

Deploy the Demo Application
===========================

To demonstrate TLS interception, we will use the same ``mediabot`` application that we used for the DNS-aware policy example. This application will access the Star Wars API service using HTTPS, which would normally mean that network-layer mechanisms like Cilium would not be able to see the HTTP-layer details of the communication, since all application data is encrypted using TLS before that data is sent on the network.

In this guide we will learn about:

- Creating an internal Certificate Authority (CA) and associated certificates signed by that CA to enable TLS interception.
- Using Cilium network policy to select the traffic to intercept using DNS-based policy rules.
- Inspecting the details of the HTTP request using cilium monitor (accessing this visibility data via Hubble, and applying Cilium network policies to filter/modify the HTTP request, is also possible, but is beyond the scope of this simple Getting Started Guide).

First off, we will create a single pod ``mediabot`` application:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-sw-app.yaml
    $ kubectl wait pod/mediabot --for=condition=Ready
    $ kubectl get pods
    NAME           READY   STATUS    RESTARTS   AGE
    pod/mediabot   1/1     Running   0          14s

A Brief Overview of the TLS Certificate Model
=============================================

TLS is a protocol that "wraps" other protocols like HTTP and ensures that communication between client and server has confidentiality (no one can read the data except the intended recipient), integrity (the recipient can confirm that the data has not been modified in transit), and authentication (the sender can confirm that it is talking with the intended destination, not an impostor).
We will provide a highly simplified overview of TLS in this document, but for full details, please see `<https://en.wikipedia.org/wiki/Transport_Layer_Security>`_ .

From an authentication perspective, the TLS model relies on a "Certificate Authority" (CA), which is an entity that is trusted to create proof that a given network service (e.g., www.cilium.io) is who they say they are. The goal is to prevent a malicious party in the network between the client and the server from intercepting the traffic and pretending to be the destination server.

In the case of "friendly interception" for network security monitoring, Cilium uses a model similar to traditional firewalls with TLS inspection capabilities: the network security team creates their own "internal certificate authority" that can be used to create alternative certificates for external destinations. This model requires each client workload to also trust this new certificate, otherwise the client's TLS library will reject the connection as invalid. In this model, the network firewall uses the certificate signed by the internal CA to act like the destination service and terminate the TLS connection. This allows the firewall to inspect and even modify the application layer data, and then initiate another TLS connection to the actual destination service.

The CA model within TLS is based on cryptographic keys and certificates. Realizing the above model requires five primary steps:

1) Create an internal certificate authority by generating a CA private key and CA certificate.
2) For any destination where TLS inspection is desired (e.g., httpbin.org in the example below), generate a private key and certificate signing request with a common name that matches the destination DNS name.
3) Use the CA private key to create a signed certificate.
4) Ensure that all clients where TLS inspection is performed have the CA certificate installed, so that they will trust all certificates signed by that CA.
5) Given that Cilium will be terminating the initial TLS connection from the client and creating a new TLS connection to the destination, Cilium must be told the set of CAs that it should trust when validating the new TLS connection to the destination service.

.. note::

    In a non-demo environment it is EXTREMELY important that you keep the above private keys safe, as anyone with access to this private key will be able to inspect TLS-encrypted traffic (certificates on the other hand are public information, and are not at all sensitive). In the guide below, the CA private key does not need to be provided to Cilium at all (it is used only to create certificates, which can be done offline) and private keys for individual destination services are stored as Kubernetes secrets. These secrets should be stored in a namespace where they can be accessed by Cilium, but not general purpose workloads.

Generating and Installing TLS Keys and Certificates
===================================================

Now that we have explained the high-level certificate model used by TLS, we will walk through the concrete steps to generate the appropriate keys and certificates using the ``openssl`` utility. The following image describes the different files containing cryptographic data that are generated or copied, and what components in the system need access to those files:
.. image:: images/cilium_tls_visibility_gsg.png

You can use openssl on your local system if it is already installed, but if not, a simple shortcut is to use ``kubectl exec`` to execute ``/bin/bash`` within any of the cilium pods, and then run the resulting ``openssl`` commands. Use ``kubectl cp`` to copy the resulting files out of the cilium pod when it is time to use them to create Kubernetes secrets or copy them to the ``mediabot`` pod.

Create an Internal Certificate Authority (CA)
---------------------------------------------

Generate a CA private key named 'myCA.key':

.. code-block:: shell-session

    $ openssl genrsa -des3 -out myCA.key 2048

Enter any password, just remember it for some of the later steps.

Generate the CA certificate from the private key:

.. code-block:: shell-session

    $ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.crt

The values you enter for each prompt do not need to be any specific value, and do not need to be accurate.

Create Private Key and Certificate Signing Request for a Given DNS Name
------------------------------------------------------------------------

Generate an internal private key and certificate signing request with a common name that matches the DNS name of the destination service to be intercepted for inspection (in this example, use ``httpbin.org``).

First create the private key:

.. code-block:: shell-session

    $ openssl genrsa -out internal-httpbin.key 2048

Next, create a certificate signing request, specifying the DNS name of the destination service for the common name field when prompted. All other prompts can be filled with any value.

.. code-block:: shell-session

    $ openssl req -new -key internal-httpbin.key -out internal-httpbin.csr

The only field that must be a specific value is ensuring that ``Common Name`` is the exact DNS destination ``httpbin.org`` that will be provided to the client. This example workflow will work for any DNS name as long as the toFQDNs rule in the policy YAML (below) is also updated to match the DNS name in the certificate.

Use CA to Generate a Signed Certificate for the DNS Name
--------------------------------------------------------

Use the internal CA private key to create a signed certificate for httpbin.org named ``internal-httpbin.crt``.

.. code-block:: shell-session

    $ openssl x509 -req -days 360 -in internal-httpbin.csr -CA myCA.crt -CAkey myCA.key -CAcreateserial -out internal-httpbin.crt -sha256

Next we create a Kubernetes secret that includes both the private key and signed certificate for the destination service:

.. code-block:: shell-session

    $ kubectl create secret tls httpbin-tls-data -n kube-system --cert=internal-httpbin.crt --key=internal-httpbin.key

Add the Internal CA as a Trusted CA Inside the Client Pod
---------------------------------------------------------

Once the CA certificate is inside the client pod, we still must make sure that the CA file is picked up by the TLS library used by your application. Most Linux applications automatically use a set of trusted CA certificates that are bundled along with the Linux distro. In this guide, we are using an Ubuntu container as the client, and so will update it with Ubuntu-specific instructions. Other Linux distros will have different mechanisms. Also, individual applications may leverage their own certificate stores rather than use the OS certificate store. Java applications and the aws-cli are two common examples. Please refer to the application or application runtime documentation for more details.
For Ubuntu, we first copy the additional CA certificate to the client pod filesystem:

.. code-block:: shell-session

    $ kubectl cp myCA.crt default/mediabot:/usr/local/share/ca-certificates/myCA.crt

Then run the Ubuntu-specific utility that adds this certificate to the global set of trusted certificate authorities in ``/etc/ssl/certs/ca-certificates.crt``.

.. code-block:: shell-session

    $ kubectl exec mediabot -- update-ca-certificates

This command will issue a WARNING, but this can be ignored.

Provide Cilium with List of Trusted CAs
---------------------------------------

Next, we will provide Cilium with the set of CAs that it should trust when originating the secondary TLS connections. This list should correspond to the standard set of global CAs that your organization trusts. A logical option for this is the standard CAs that are trusted by your operating system, since this is the set of CAs that were being used prior to introducing TLS inspection. To keep things simple, in this example we will simply copy this list out of the Ubuntu filesystem of the mediabot pod, though it is important to understand that this list of trusted CAs is not specific to a particular TLS client or server, and so this step need only be performed once regardless of how many TLS clients or servers are involved in TLS inspection.

.. code-block:: shell-session

    $ kubectl cp default/mediabot:/etc/ssl/certs/ca-certificates.crt ca-certificates.crt

We then will create a Kubernetes secret using this certificate bundle so that Cilium can read the certificate bundle and use it to validate outgoing TLS connections.

.. code-block:: shell-session

    $ kubectl create secret generic tls-orig-data -n kube-system --from-file=ca.crt=./ca-certificates.crt

Apply DNS and TLS-aware Egress Policy
=====================================

Up to this point, we have created keys and certificates to enable TLS inspection, but we have not told Cilium which traffic we want to intercept and inspect. This is done using the same Cilium Network Policy constructs that are used for other Cilium Network Policies. The following Cilium network policy indicates that Cilium should perform HTTP-aware inspection of communication from the ``mediabot`` pod to ``httpbin.org``.

.. literalinclude:: ../../examples/kubernetes-tls-inspection/l7-visibility-tls.yaml

Let's take a closer look at the policy:

* The ``endpointSelector`` means that this policy applies only to pods with labels ``class: mediabot, org: empire``; only those pods get this egress access.
* The first egress section uses the ``toFQDNs: matchName`` specification to allow TCP port 443 egress to ``httpbin.org``.
* The ``http`` section below the toFQDNs rule indicates that such connections should be parsed as HTTP, with a policy of ``{}`` which will allow all requests.
* The ``terminatingTLS`` and ``originatingTLS`` sections indicate that TLS interception should be used to terminate the initial TLS connection from mediabot and initiate a new outbound TLS connection to ``httpbin.org``.
* The second egress section allows ``mediabot`` pods to access the ``kube-dns`` service. Note that ``rules: dns`` instructs Cilium to inspect and allow DNS lookups matching specified patterns. In this case, inspect and allow all DNS queries.

A simplified sketch of the TLS-related egress rule is shown below.

Note that with this policy the ``mediabot`` pod doesn't have access to any internal cluster service other than ``kube-dns`` and will have no access to any other external destinations either. Refer to :ref:`Network Policy` to learn more about policies for controlling access to internal cluster services.
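Below is a minimal sketch of that rule, assuming the secret names created earlier in this guide (``httpbin-tls-data`` and ``tls-orig-data`` in ``kube-system``). The kube-dns/DNS egress rule is omitted, and the included ``l7-visibility-tls.yaml`` file remains the authoritative version:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l7-visibility-tls"
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: mediabot
      egress:
      - toFQDNs:
        - matchName: "httpbin.org"
        toPorts:
        - ports:
          - port: "443"
            protocol: "TCP"
          # Terminate the client's TLS session using the internal certificate...
          terminatingTLS:
            secret:
              namespace: "kube-system"
              name: "httpbin-tls-data"
          # ...and originate a new TLS session validated against the trusted CA bundle.
          originatingTLS:
            secret:
              namespace: "kube-system"
              name: "tls-orig-data"
          rules:
            http:
            - {}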
Let's apply the policy:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-tls-inspection/l7-visibility-tls.yaml

Demonstrating TLS Inspection
============================

Recall that the policy we pushed will allow all HTTPS requests from ``mediabot`` to ``httpbin.org``, but will parse all data at the HTTP layer, meaning that cilium monitor will report each HTTP request and response. To see this, open a new window and run the following command to identify the name of the cilium pod (e.g., cilium-97s78) that is running on the same Kubernetes worker node as the ``mediabot`` pod. Then start running cilium-dbg monitor in "L7 mode" to monitor for HTTP requests being reported by Cilium:

.. code-block:: shell-session

    $ kubectl exec -it -n kube-system cilium-d5x8v -- cilium-dbg monitor -t l7

Next, in the original window, from the ``mediabot`` pod we can access ``httpbin.org`` via HTTPS:

.. code-block:: shell-session

    $ kubectl exec -it mediabot -- curl -sL 'https://httpbin.org/anything'
    ...
    $ kubectl exec -it mediabot -- curl -sL 'https://httpbin.org/headers'
    ...

Looking back at the cilium-dbg monitor window, you will see each individual HTTP request and response. For example::

    -> Request http from 2585 ([k8s:class=mediabot k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default]) to 0 ([reserved:world]), identity 24948->2, verdict Forwarded GET https://httpbin.org/anything => 0
    -> Response http to 2585 ([k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=mediabot k8s:org=empire]) from 0 ([reserved:world]), identity 24948->2, verdict Forwarded GET https://httpbin.org/anything => 200

Refer to :ref:`l4_policy` and :ref:`l7_policy` to learn more about Cilium L4 and L7 network policies.

Clean-up
========

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-sw-app.yaml
   $ kubectl delete cnp l7-visibility-tls
   $ kubectl delete secret -n kube-system tls-orig-data
   $ kubectl delete secret -n kube-system httpbin-tls-data
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _aws_metadata_with_policy:

***********************************************
Locking Down External Access Using AWS Metadata
***********************************************

This document serves as an introduction to using Cilium to enforce policies based on AWS metadata. It provides a detailed walk-through of running a single-node Cilium environment on your machine. It is designed to take 15-30 minutes for users with some experience running Kubernetes.

Setup Cilium
============

This guide will work with any approach to installing Cilium, including minikube, as long as the cilium-operator pod in the deployment can reach the AWS API server. However, since the most common use of this mechanism is for Kubernetes clusters running in AWS, we recommend trying it out along with the guide: :ref:`k8s_install_quick`.

Create AWS secrets
==================

Before installing Cilium, a new Kubernetes Secret with the AWS Tokens needs to be added to your Kubernetes cluster. This Secret will allow Cilium to gather information from the AWS API which is needed to implement ToGroups policies.

AWS Access keys and IAM role
----------------------------

To create a new access token the `following guide can be used <https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html>`_. These keys need to have certain permissions set:

.. code-block:: javascript

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ec2:Describe*",
                "Resource": "*"
            }
        ]
    }

As soon as you have the access tokens, the following secret needs to be added, with each empty string replaced by the associated value as a base64-encoded string:

.. code-block:: yaml
    :name: cilium-secret.yaml

    apiVersion: v1
    kind: Secret
    metadata:
      name: cilium-aws
      namespace: kube-system
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: ""
      AWS_SECRET_ACCESS_KEY: ""
      AWS_DEFAULT_REGION: ""

The base64 command line utility can be used to generate each value, for example:

.. code-block:: shell-session

    $ echo -n "eu-west-1" | base64
    ZXUtd2VzdC0x

This secret stores the AWS credentials, which will be used to connect to the AWS API.

.. code-block:: shell-session

    $ kubectl create -f cilium-secret.yaml

To validate that the credentials are correct, the following pod can be created for debugging purposes:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: testing-aws-pod
      namespace: kube-system
    spec:
      containers:
      - name: aws-cli
        image: mesosphere/aws-cli
        command: ['sh', '-c', 'sleep 3600']
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: cilium-aws
              key: AWS_ACCESS_KEY_ID
              optional: true
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: cilium-aws
              key: AWS_SECRET_ACCESS_KEY
              optional: true
        - name: AWS_DEFAULT_REGION
          valueFrom:
            secretKeyRef:
              name: cilium-aws
              key: AWS_DEFAULT_REGION
              optional: true

To list all of the available AWS instances, the following command can be used:

.. code-block:: shell-session

    $ kubectl -n kube-system exec -ti testing-aws-pod -- aws ec2 describe-instances

Once the secret has been created and validated, the cilium-operator pod must be restarted in order to pick up the credentials in the secret. To do this, identify and delete the existing cilium-operator pod, which will be recreated automatically:
.. code-block:: shell-session

    $ kubectl get pods -l name=cilium-operator -n kube-system
    NAME                              READY   STATUS    RESTARTS   AGE
    cilium-operator-7c9d69f7c-97vqx   1/1     Running   0          36h

    $ kubectl delete pod cilium-operator-7c9d69f7c-97vqx

It is important for this demo that ``coredns`` is working correctly. To know the status of ``coredns`` you can run the following command:

.. code-block:: shell-session

    $ kubectl get deployment coredns -n kube-system
    NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    coredns   2         2         2            2           13h

Where at least one pod should be available.

Configure AWS Security Groups
=============================

Cilium's AWS Metadata filtering capability enables explicit whitelisting of communication between a subset of pods (identified by Kubernetes labels) and a set of destination EC2 ENIs (identified by membership in an AWS security group).

In this example, the destination EC2 elastic network interfaces are attached to EC2 instances that are members of a single AWS security group ('sg-0f2146100a88d03c3'). Pods with label ``class=xwing`` should only be able to make connections outside the cluster to the destination network interfaces in that security group.

To enable this, the VMs acting as Kubernetes worker nodes must be able to send traffic to the destination VMs that are being accessed by pods. One approach for achieving this is to put all Kubernetes worker VMs in a single 'k8s-worker' security group, and then ensure that any security group that is referenced in a Cilium toGroups policy has an allow-all ingress rule (all ports) for connections from the 'k8s-worker' security group. Cilium filtering will then ensure that only the pods allowed by policy can reach the destination VMs.

Create a sample policy
======================

Deploy a demo application:
--------------------------

In this case we're going to use a demo application that is used in other guides. These manifests will create three microservice applications: *deathstar*, *tiefighter*, and *xwing*. Here, we are only going to use our *xwing* microservice to secure communications to existing AWS instances.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
    service "deathstar" created
    deployment "deathstar" created
    deployment "tiefighter" created
    deployment "xwing" created

Kubernetes will deploy the pods and service in the background. Running ``kubectl get pods,svc`` will inform you about the progress of the operation. Each pod will go through several states until it reaches ``Running`` at which point the pod is ready.

.. code-block:: shell-session

    $ kubectl get pods,svc
    NAME                            READY     STATUS    RESTARTS   AGE
    po/deathstar-76995f4687-2mxb2   1/1       Running   0          1m
    po/deathstar-76995f4687-xbgnl   1/1       Running   0          1m
    po/tiefighter                   1/1       Running   0          1m
    po/xwing                        1/1       Running   0          1m

    NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    svc/deathstar    ClusterIP   10.109.254.198   <none>        80/TCP    3h
    svc/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   3h

Policy Language:
----------------

**ToGroups** rules can be used to define policy in relation to cloud providers, like AWS.

.. code-block:: yaml

    ---
    kind: CiliumNetworkPolicy
    apiVersion: cilium.io/v2
    metadata:
      name: to-groups-sample
      namespace: default
    spec:
      endpointSelector:
        matchLabels:
          org: alliance
          class: xwing
      egress:
      - toPorts:
        - ports:
          - port: '80'
            protocol: TCP
        toGroups:
        - aws:
            securityGroupsIds:
            - 'sg-0f2146100a88d03c3'

This policy allows traffic from pod *xwing* to any AWS elastic network interface in the security group with ID ``sg-0f2146100a88d03c3``.
Validate that derived policy is in place
----------------------------------------

Every time a new policy with ToGroups rules is added, an equivalent policy
(also called a "derivative policy") is created. This policy will contain the
set of CIDRs that correspond to the specification in ToGroups, e.g., the IPs
of all network interfaces that are part of a specified security group. The
list of IPs is updated periodically.

.. code-block:: shell-session

    $ kubectl get cnp
    NAME                                                             AGE
    to-groups-sample                                                 11s
    to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952   10s

Eventually, the derivative policy will contain IPs in the ToCIDR section:

.. code-block:: shell-session

    $ kubectl get cnp to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952 -o yaml

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      creationTimestamp: 2018-11-30T11:13:52Z
      generation: 1
      labels:
        io.cilium.network.policy.kind: derivative
        io.cilium.network.policy.parent.uuid: 044ba7d1-f491-11e8-ad2e-080027d2d952
      name: to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952
      namespace: default
      ownerReferences:
      - apiVersion: cilium.io/v2
        blockOwnerDeletion: true
        kind: CiliumNetworkPolicy
        name: to-groups-sample
        uid: 044ba7d1-f491-11e8-ad2e-080027d2d952
      resourceVersion: "34853"
      selfLink: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952
      uid: 04b289ba-f491-11e8-ad2e-080027d2d952
    specs:
    - egress:
      - toCIDRSet:
        - cidr: 34.254.113.42/32
        - cidr: 172.31.44.160/32
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
      endpointSelector:
        matchLabels:
          any:class: xwing
          any:org: alliance
          k8s:io.kubernetes.pod.namespace: default
      labels:
      - key: io.cilium.k8s.policy.name
        source: k8s
        value: to-groups-sample
      - key: io.cilium.k8s.policy.uid
        source: k8s
        value: 044ba7d1-f491-11e8-ad2e-080027d2d952
      - key: io.cilium.k8s.policy.namespace
        source: k8s
        value: default
      - key: io.cilium.k8s.policy.derived-from
        source: k8s
        value: CiliumNetworkPolicy
    status:
      nodes:
        k8s1:
          enforcing: true
          lastUpdated: 2018-11-30T11:28:03.907678888Z
          localPolicyRevision: 28
          ok: true

The derivative rule should contain the following information:

- *metadata.OwnerReferences*: contains the information about the parent
  ToGroups policy.
- *specs.Egress.ToCIDRSet*: the list of private and public IPs of the
  instances that correspond to the spec of the parent policy.
- *status*: whether or not the policy is enforced yet, and when the policy was
  last updated.

The endpoint status for the *xwing* should have policy enforcement enabled
only for egress connectivity:

.. code-block:: shell-session

    $ kubectl exec -q -it -n kube-system cilium-85vtg -- cilium-dbg endpoint get 23453 -o jsonpath='{$[0].status.policy.realized.policy-enabled}'
    egress

In this example, the *xwing* pod can only connect to ``34.254.113.42/32`` and
``172.31.44.160/32``; connectivity to any other IP will be denied.
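To see the derived policy in action, you can compare connectivity from the
*xwing* pod to an address that appears in the ``toCIDRSet`` above with
connectivity to one that does not. The commands below are only a sketch:
``34.254.113.42`` is the example public IP from the derivative policy shown
above (substitute the addresses your own derivative policy contains), and
``198.51.100.10`` is a documentation address standing in for any IP outside
the allowed set:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -sI --connect-timeout 5 http://34.254.113.42:80
    $ kubectl exec xwing -- curl -sI --connect-timeout 5 http://198.51.100.10:80

The first request should reach the instance (what it returns depends on what
is listening on port 80 there), while the second should time out, since its
destination is not part of the derived CIDR set.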
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _policy_verdicts:

*******************************
Creating Policies from Verdicts
*******************************

Policy Audit Mode configures Cilium to allow all traffic while logging all
connections that would otherwise be dropped by network policies. Policy Audit
Mode may be configured for the entire daemon using
``--policy-audit-mode=true`` or for individual Cilium Endpoints. When Policy
Audit Mode is enabled, no network policy is enforced so this setting is **not
recommended for production deployment**. Policy Audit Mode supports auditing
network policies implemented at network layers 3 and 4. This guide walks
through the process of creating policies using Policy Audit Mode.

.. include:: gsg_requirements.rst
.. include:: gsg_sw_demo.rst

Scale down the deathstar Deployment
===================================

In this guide we're going to scale down the deathstar Deployment in order to
simplify the next steps:

.. code-block:: shell-session

    $ kubectl scale --replicas=1 deployment deathstar
    deployment.apps/deathstar scaled

Enable Policy Audit Mode (Entire Daemon)
========================================

To observe policy audit messages for all endpoints managed by this DaemonSet,
modify the Cilium ConfigMap and restart all daemons:

.. tabs::

    .. group-tab:: Configure via kubectl

        .. code-block:: shell-session

            $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"true"}}'
            configmap/cilium-config patched
            $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
            daemonset.apps/cilium restarted
            $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
            Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
            daemon set "cilium" successfully rolled out

    .. group-tab:: Helm Upgrade

        If you installed Cilium via ``helm install``, then you can use
        ``helm upgrade`` to enable Policy Audit Mode:

        .. parsed-literal::

            $ helm upgrade cilium |CHART_RELEASE| \\
                --namespace $CILIUM_NAMESPACE \\
                --reuse-values \\
                --set policyAuditMode=true

Enable Policy Audit Mode (Specific Endpoint)
============================================

Cilium can enable Policy Audit Mode for a specific endpoint. This may be
helpful when enabling Policy Audit Mode for the entire daemon is too broad.
Enabling it per endpoint ensures that other endpoints managed by the same
daemon are not impacted.

This approach is meant to be temporary. **Restarting the Cilium pod will reset
Policy Audit Mode to match the daemon's configuration.**

Policy Audit Mode is enabled for a given endpoint by modifying the endpoint
configuration via the ``cilium`` tool on the endpoint's Kubernetes node. The
steps include:

#. Determine the endpoint id on which Policy Audit Mode will be enabled.
#. Identify the Cilium pod running on the same Kubernetes node corresponding
   to the endpoint.
#. Using the Cilium pod above, modify the endpoint configuration by setting
   ``PolicyAuditMode=Enabled``.

The following shell commands perform these steps:

.. code-block:: shell-session

    $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
    $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
    $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
    $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Enabled
    Endpoint 232 configuration updated successfully

We can check that Policy Audit Mode is enabled for this endpoint with:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
    Enabled

.. _observe_policy_verdicts:

Observe policy verdicts
=======================

In this example, we are tasked with applying a security policy for the
deathstar. First, from the Cilium pod we need to monitor the notifications for
policy verdicts using the Hubble CLI. We'll be monitoring for inbound traffic
towards the deathstar to identify it and determine whether to extend the
network policy to allow that traffic.

Apply a default-deny policy:

.. literalinclude:: ../../examples/minikube/sw_deny_policy.yaml

CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to
identify the sources and destinations to which the policy applies. The above
policy denies traffic sent to any pods with the label ``org=empire``. Due to
the Policy Audit Mode enabled above (either for the entire daemon, or for just
the ``deathstar`` endpoint), the traffic will not actually be denied but will
instead trigger policy verdict notifications.

To apply this policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_deny_policy.yaml
    ciliumnetworkpolicy.cilium.io/empire-default-deny created

With the above policy, we will enable a default-deny posture on ingress to
pods with the label ``org=empire`` and enable the policy verdict notifications
for those pods. The same principle applies on egress as well.

Now let's send some traffic from the tiefighter to the deathstar:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

We can check the policy verdict from the Cilium Pod:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict --last 1
    Feb 7 12:53:39.168: default/tiefighter:54134 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none AUDITED (TCP Flags: SYN)

In the above example, we can see that the Pod ``deathstar-6fb5694d48-5hmds``
has received traffic from the ``tiefighter`` Pod which doesn't match the
policy (``policy-verdict:none AUDITED``).

.. _create_network_policy:

Create the Network Policy
=========================

We can get more information about the flow with:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict -o json --last 1
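To turn a batch of those verdict events into a quick summary of which pods
talked to which destinations on which ports, you can post-process the JSON
locally. This is only a sketch: it assumes ``jq`` is installed where you run
``kubectl``, and the field paths (``.flow.source.pod_name`` and friends) are
assumptions that should be checked against the JSON your Hubble version
actually emits:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict -o json --last 20 \
      | jq -r '"\(.flow.source.pod_name) -> \(.flow.destination.pod_name):\(.flow.l4.TCP.destination_port)"' \
      | sort | uniq -c

Each line of the resulting summary is a candidate source/destination/port
combination to cover in the policy created next.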
Given the above information, we now know the labels of the source and
destination Pods, the traffic direction, and the destination port. In this
case, we can see clearly that the source (i.e. the tiefighter Pod) is an
empire aircraft (as it has the ``org=empire`` label), so once we've determined
that we expect this traffic to arrive at the deathstar, we can form a policy
to match the traffic:

.. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml

To apply this L3/L4 policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing requests again,

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

we can then observe that the traffic which was previously audited to be
dropped by the policy is reported as allowed:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict --last 1
    ...
    Feb 7 13:06:45.130: default/tiefighter:59824 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)

Now the policy verdict states that the traffic would be allowed:
``policy-verdict:L3-L4 ALLOWED``. Success!

Disable Policy Audit Mode (Entire Daemon)
=========================================

These steps should be repeated for each connection in the cluster to ensure
that the network policy allows all of the expected traffic. The final step
after deploying the policy is to disable Policy Audit Mode again:

.. tabs::

    .. group-tab:: Configure via kubectl

        .. code-block:: shell-session

            $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"false"}}'
            configmap/cilium-config patched
            $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
            daemonset.apps/cilium restarted
            $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
            Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
            daemon set "cilium" successfully rolled out

    .. group-tab:: Helm Upgrade

        .. parsed-literal::

            $ helm upgrade cilium |CHART_RELEASE| \\
                --namespace $CILIUM_NAMESPACE \\
                --reuse-values \\
                --set policyAuditMode=false

Disable Policy Audit Mode (Specific Endpoint)
=============================================

These steps are nearly identical to enabling Policy Audit Mode.

.. code-block:: shell-session

    $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
    $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
    $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
    $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Disabled
    Endpoint 232 configuration updated successfully

Alternatively, **restarting the Cilium pod** will set the endpoint Policy
Audit Mode to the daemon set configuration.

Verify Policy Audit Mode is Disabled
====================================

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
    Disabled

Now if we run the landing requests again, only the *tiefighter* pods with the
label ``org=empire`` should succeed:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

And we can observe that the traffic was allowed by the policy:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict --from-pod tiefighter --last 1
    Feb 7 13:34:26.112: default/tiefighter:37314 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)

This works as expected. Now the same request from an *xwing* Pod should fail:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl --connect-timeout 3 -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    command terminated with exit code 28

This curl request should time out after three seconds; we can observe the
policy verdict with:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
          hubble observe flows -t policy-verdict --from-pod xwing --last 1
    Feb 7 13:43:46.791: default/xwing:54842 (ID:22654) <> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none DENIED (TCP Flags: SYN)

We hope you enjoyed the tutorial. Feel free to play more with the setup,
follow the `gs_http` guide, and reach out to us on `Cilium Slack`_ with any
questions!

Clean-up
========

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
    $ kubectl delete cnp empire-default-deny rule1
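As a final note: if you are provisioning a new cluster and want to begin in
audit mode from the start, the same ``policyAuditMode`` Helm value shown above
can be set at install time rather than through ``helm upgrade``. A sketch,
assuming the same chart and namespace used earlier in this guide:

.. parsed-literal::

    $ helm install cilium |CHART_RELEASE| \\
        --namespace $CILIUM_NAMESPACE \\
        --set policyAuditMode=true

Remember to switch the value back to ``false`` once your policies have been
validated, since no policy is enforced while audit mode is enabled.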
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

******************
Securing Memcached
******************

This document serves as an introduction to using Cilium to enforce
memcached-aware security policies. It walks through a single-node Cilium
environment running on your machine. It is designed to take 15-30 minutes.

**NOTE:** memcached-aware policy support is still in beta. It is not yet ready
for production use. Additionally, the memcached-specific policy language is
highly likely to change in a future Cilium version.

`Memcached <https://memcached.org/>`_ is a high performance, distributed
memory object caching system. It's simple yet powerful, and is used by dynamic
web applications to alleviate database load. Memcached is designed to work
efficiently for a very large number of open connections. Thus, clients are
encouraged to cache their connections rather than incur the overhead of
reopening TCP connections every time they need to store or retrieve data.
Multiple clients can benefit from this distributed cache's performance.

There are two kinds of data sent in the memcache protocol: text lines and
unstructured (binary) data. We will demonstrate clients using both types of
protocols to communicate with a memcached server.

.. include:: gsg_requirements.rst

Step 2: Deploy the Demo Application
===================================

Now that we have Cilium deployed and ``kube-dns`` operating correctly we can
deploy our demo memcached application. Since our first `HTTP-aware Cilium demo
<https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_ was based
on Star Wars, we continue with the theme for the memcached demo as well.

Ever wonder how the Alliance Fleet manages the changing positions of their
ships? The Alliance Fleet uses memcached to store the coordinates of their
ships. The Alliance Fleet leverages the memcached-svc service implemented as a
memcached server. Each ship in the fleet constantly updates its coordinates
and has the ability to get the coordinates of other ships in the Alliance
Fleet.

In this simple example, the Alliance Fleet uses a memcached server for their
starfighters to store their own supergalactic coordinates and get those of
other starfighters.

In order to avoid collisions and protect against compromised starfighters,
memcached commands are limited to gets for any starfighter coordinates and
sets only to a key specific to the starfighter. Thus the following operations
are allowed:

- **A-wing**: can set coordinates to key "awing-coord" and get the key coordinates.
- **X-wing**: can set coordinates to key "xwing-coord" and get the key coordinates.
- **Alliance-Tracker**: can get any coordinates but not set any.

To keep the setup small, we will launch a small number of pods to represent a
larger environment:

- **memcached-server** : A Kubernetes service represented by a single pod running a memcached server (label app=memcd-server).
- **a-wing** memcached binary client : A pod representing an A-wing starfighter, which can update its coordinates and read them via the binary memcached protocol (label app=a-wing).
- **x-wing** memcached text-based client : A pod representing an X-wing starfighter, which can update its coordinates and read them via the text-based memcached protocol (label app=x-wing).
- **alliance-tracker** memcached binary client : A pod representing the Alliance Fleet Tracker, able to read the coordinates of all starfighters (label name=fleet-tracker).

Memcached clients access the *memcached-server* on TCP port 11211 and send
memcached protocol messages to it.

.. image:: images/cilium_memcd_gsg_topology.png

The file ``memcd-sw-app.yaml`` contains a Kubernetes Deployment for each of
the pods described above, as well as a Kubernetes Service *memcached-server*
for the Memcached server.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-sw-app.yaml
    deployment.apps/memcached-server created
    service/memcached-server created
    deployment.apps/a-wing created
    deployment.apps/x-wing created
    deployment.apps/alliance-tracker created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get svc,pods`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running`` at which
point the setup is ready.

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
    service/kubernetes         ClusterIP   10.96.0.1    <none>        443/TCP     31m
    service/memcached-server   ClusterIP   None         <none>        11211/TCP   14m

    NAME                                    READY   STATUS    RESTARTS   AGE
    pod/a-wing-67db8d5fcc-dpwl4             1/1     Running   0          14m
    pod/alliance-tracker-6b6447bd69-sz5hz   1/1     Running   0          14m
    pod/memcached-server-bdbfb87cd-8tdh7    1/1     Running   0          14m
    pod/x-wing-fd5dfb9d9-wrtwn              1/1     Running   0          14m

We suggest having a main terminal window to execute *kubectl* commands and two
additional terminal windows dedicated to accessing the **A-Wing** and
**Alliance-Tracker**, which use a python library to communicate with the
memcached server using the binary protocol.

In **all three** terminal windows, set some handy environment variables for
the demo with the following script:

.. parsed-literal::

    $ curl -s \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-env.sh > memcd-env.sh
    $ source memcd-env.sh

In the terminal window dedicated for the A-wing pod, exec in, use python to
import the binary memcached library and set the client connection to the
memcached server:

.. code-block:: shell-session

    $ kubectl exec -ti $AWING_POD -- sh
    # python
    Python 3.7.0 (default, Sep 5 2018, 03:25:31)
    [GCC 6.3.0 20170516] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import bmemcached
    >>> client = bmemcached.Client(("memcached-server:11211", ))

In the terminal window dedicated for the Alliance-Tracker, exec in, use python
to import the binary memcached library and set the client connection to the
memcached server:

.. code-block:: shell-session

    $ kubectl exec -ti $TRACKER_POD -- sh
    # python
    Python 3.7.0 (default, Sep 5 2018, 03:25:31)
    [GCC 6.3.0 20170516] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import bmemcached
    >>> client = bmemcached.Client(("memcached-server:11211", ))

Step 3: Test Basic Memcached Access
===================================

Let's show that each client is able to access the memcached server. Execute
the following to have the A-wing and X-wing starfighters update the Alliance
Fleet memcached-server with their respective supergalactic coordinates:

A-wing will access the memcached-server using the *binary protocol*. In your
terminal window for A-Wing, set A-wing's coordinates:

.. code-block:: python

    >>> client.set("awing-coord","4309.432,918.980",time=2400)
    True
    >>> client.get("awing-coord")
    '4309.432,918.980'
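The X-wing client, by contrast, speaks the text-based memcached protocol
directly over ``nc``, using the ``$SETXC``, ``$GETXC``, and ``$GETAC``
variables presumably set by the ``memcd-env.sh`` script sourced earlier.
Conceptually they are just raw memcached text-protocol commands; a rough
sketch of what such definitions look like (illustrative only, not the verbatim
script, and the exact escaping may differ):

.. code-block:: shell-session

    $ GETXC="get xwing-coord\r\nquit\r\n"
    $ GETAC="get awing-coord\r\nquit\r\n"
    $ SETXC="set xwing-coord 0 1200 16\r\n8893.34,234.3290\r\nquit\r\n"

In the ``set`` command, ``0`` is the flags field, ``1200`` is the TTL in
seconds, and ``16`` is the byte length of the value that follows.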
In your main terminal window, have the X-wing starfighter set its coordinates
using the text-based protocol to the memcached server.

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$SETXC\\" | nc memcached-server 11211"
    STORED
    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$GETXC\\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    8893.34,234.3290
    END

Check that the Alliance Fleet Tracker is able to get all starfighters'
coordinates in your terminal window for the Alliance-Tracker:

.. code-block:: python

    >>> client.get("awing-coord")
    '4309.432,918.980'
    >>> client.get("xwing-coord")
    '8893.34,234.3290'

Step 4: The Danger of a Compromised Memcached Client
====================================================

Imagine if a starfighter ship is captured. Should the starfighter be able to
set the coordinates of other ships, or get the coordinates of all other ships?
Or if the Alliance-Tracker is compromised, can it modify the coordinates of
any starfighter ship?

If every client has access to the Memcached API on port 11211, all have
over-privileged access until further locked down. With L4 port access to the
memcached server, all starfighters could write to any key/ship and read all
ship coordinates.

In your main terminal, execute:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$GETAC\\" | nc memcached-server 11211"
    VALUE awing-coord 0 16
    4309.432,918.980
    END

In your A-Wing terminal window, confirm the over-privileged access:

.. code-block:: python

    >>> client.get("xwing-coord")
    '8893.34,234.3290'
    >>> client.set("xwing-coord","0.0,0.0",time=2400)
    True
    >>> client.get("xwing-coord")
    '0.0,0.0'

From A-Wing, set the X-Wing coordinates back to their proper position:

.. code-block:: python

    >>> client.set("xwing-coord","8893.34,234.3290",time=2400)
    True

Thus, the Alliance Fleet Tracking System could be undermined if a single
starfighter ship is compromised.

Step 5: Securing Access to Memcached with Cilium
================================================

Cilium helps lock down Memcached servers to ensure clients have secure access
to them. Beyond just providing access to port 11211, Cilium can enforce
specific key value access by understanding both the text-based and the
unstructured (binary) memcached protocol.

We'll create a policy that limits the scope of what a starfighter can access
and write. Thus, only the intended memcached protocol calls to the
memcached-server can be made. In this example, we'll only allow A-Wing to get
and set the key "awing-coord", only allow X-Wing to get and set key
"xwing-coord", and allow Alliance-Tracker to only get coordinates.

.. image:: images/cilium_memcd_gsg_attack.png

Here is the *CiliumNetworkPolicy* rule that limits the access of starfighters
to their own key and allows Alliance Tracker to get any coordinate:

.. literalinclude:: ../../examples/kubernetes-memcached/memcd-sw-security-policy.yaml

A *CiliumNetworkPolicy* contains a list of rules that define allowed memcached
commands, and requests that do not match any rules are denied. The rules
explicitly match connections destined to the Memcached Service on TCP 11211.

The rules apply to inbound (i.e., "ingress") connections bound for
memcached-server pods (as indicated by ``app:memcached-server`` in the
"endpointSelector" section). The rules apply differently depending on the
client pod: ``app:a-wing``, ``app:x-wing``, or ``name:fleet-tracker`` as
indicated by the "fromEndpoints" section.

With the policy in place, A-wings can only get and set the key "awing-coord";
similarly the X-Wing can only get and set "xwing-coord". The Alliance Tracker
can only get coordinates - not set.

Apply this Memcached-aware network security policy using ``kubectl`` in your
main terminal window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-sw-security-policy.yaml

If we then try to perform the attacks from the *X-wing* pod from the main
terminal window, we'll see that they are denied:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$GETAC\\" | nc memcached-server 11211"
    CLIENT_ERROR access denied

From the A-Wing terminal window, we can confirm what happens if *A-wing* goes
outside the bounds of its allowed calls: the request is denied. You may need
to run the ``client.get`` command twice for the python call:

.. code-block:: python

    >>> client.get("awing-coord")
    '4309.432,918.980'
    >>> client.get("xwing-coord")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/site-packages/bmemcached/client/replicating.py", line 42, in get
        value, cas = server.get(key)
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 440, in get
        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)
    bmemcached.exceptions.MemcachedException: ("Code: 8 Message: b'access denied'", 8)

Similarly, the Alliance-Tracker cannot set any coordinates, which you can
attempt from the Alliance-Tracker terminal window:

.. code-block:: python

    >>> client.get("xwing-coord")
    '8893.34,234.3290'
    >>> client.set("awing-coord","0.0,0.0",time=1200)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/site-packages/bmemcached/client/replicating.py", line 112, in set
        returns.append(server.set(key, value, time, compress_level=compress_level))
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 604, in set
        return self._set_add_replace('set', key, value, time, compress_level=compress_level)
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 583, in _set_add_replace
        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)
    bmemcached.exceptions.MemcachedException: ("Code: 8 Message: b'access denied'", 8)

The policy is working as expected. With the CiliumNetworkPolicy in place, the
allowed Memcached calls are still allowed from the respective pods.

In the main terminal window, execute:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$GETXC\\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    8893.34,234.3290
    END
    $ SETXC="set xwing-coord 0 1200 16\\r\\n9854.34,926.9187\\r\\nquit\\r\\n"
    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$SETXC\\" | nc memcached-server 11211"
    STORED
    $ kubectl exec $XWING_POD -- sh -c "echo -en \\"$GETXC\\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    9854.34,926.9187
    END

In the A-Wing terminal window, execute:

.. code-block:: python

    >>> client.set("awing-coord","9852.542,892.1318",time=1200)
    True
    >>> client.get("awing-coord")
    '9852.542,892.1318'
    >>> exit() # exit

In the Alliance-Tracker terminal window, execute:

.. code-block:: python

    >>> client.get("awing-coord")
    '9852.542,892.1318'
    >>> client.get("xwing-coord")
    '9854.34,926.9187'
    >>> exit() # exit

Step 6: Clean Up
================

You have now installed Cilium, deployed a demo app, and tested L7
memcached-aware network security policies. To clean up, in your main terminal
window, run:

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-sw-app.yaml
    $ kubectl delete cnp secure-fleet

For some handy memcached references, see below:

* https://memcached.org/
* https://github.com/memcached/memcached/blob/master/doc/protocol.txt
* https://python-binary-memcached.readthedocs.io/en/latest/intro/
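If you script against a memcached cluster protected by an L7 policy like the
one above, note that the binary-protocol client surfaces denied operations as
exceptions rather than empty results. A minimal sketch using the same
``bmemcached`` library as the demo (the helper function and key names are
illustrative, not part of the demo scripts):

.. code-block:: python

    import bmemcached
    from bmemcached.exceptions import MemcachedException

    client = bmemcached.Client(("memcached-server:11211", ))

    def safe_get(key):
        # A denied lookup raises MemcachedException ("access denied"), as seen
        # in the tracebacks above; report it instead of crashing the caller.
        try:
            return client.get(key)
        except MemcachedException as exc:
            print("policy denied access to %r: %s" % (key, exc))
            return None

    print(safe_get("awing-coord"))   # allowed for the Alliance-Tracker
    print(safe_get("xwing-coord"))   # allowed for the Alliance-Tracker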
cilium
only not epub or latex or html WARNING You are looking at unreleased Cilium documentation Please use the official rendered version released here https docs cilium io Securing Memcached This document serves as an introduction to using Cilium to enforce memcached aware security policies It walks through a single node Cilium environment running on your machine It is designed to take 15 30 minutes NOTE memcached aware policy support is still in beta It is not yet ready for production use Additionally the memcached specific policy language is highly likely to change in a future Cilium version Memcached https memcached org is a high performance distributed memory object caching system It s simple yet powerful and used by dynamic web applications to alleviate database load Memcached is designed to work efficiently for a very large number of open connections Thus clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data Multiple clients can benefit from this distributed cache s performance benefits There are two kinds of data sent in the memcache protocol text lines and unstructured binary data We will demonstrate clients using both types of protocols to communicate with a memcached server include gsg requirements rst Step 2 Deploy the Demo Application Now that we have Cilium deployed and kube dns operating correctly we can deploy our demo memcached application Since our first HTTP aware Cilium demo https cilium io blog 2017 5 4 demo may the force be with you was based on Star Wars we continue with the theme for the memcached demo as well Ever wonder how the Alliance Fleet manages the changing positions of their ships The Alliance Fleet uses memcached to store the coordinates of their ships The Alliance Fleet leverages the memcached svc service implemented as a memcached server Each ship in the fleet constantly updates its coordinates and has the ability to get the coordinates of other ships in the Alliance Fleet In this simple example the Alliance Fleet uses a memcached server for their starfighters to store their own supergalatic coordinates and get those of other starfighters In order to avoid collisions and protect against compromised starfighters memcached commands are limited to gets for any starfighter coordinates and sets only to a key specific to the starfighter Thus the following operations are allowed A wing can set coordinates to key awing coord and get the key coordinates X wing can set coordinates to key xwing coord and get the key coordinates Alliance Tracker can get any coordinates but not set any To keep the setup small we will launch a small number of pods to represent a larger environment memcached server A Kubernetes service represented by a single pod running a memcached server label app memcd server a wing memcached binary client A pod representing an A wing starfighter which can update its coordinates and read it via the binary memcached protocol label app a wing x wing memcached text based client A pod representing an X wing starfighter which can update its coordinates and read it via the text based memcached protocol label app x wing alliance tracker memcached binary client A pod representing the Alliance Fleet Tracker able to read the coordinates of all starfighters label name fleet tracker Memcached clients access the memcached server on TCP port 11211 and send memcached protocol messages to it image images cilium memcd gsg topology png The file memcd sw app yaml contains a 
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

*****************************
Securing a Cassandra Database
*****************************

This document serves as an introduction to using Cilium to enforce Cassandra-aware
security policies. It is a detailed walk-through of getting a single-node Cilium
environment running on your machine. It is designed to take 15-30 minutes.

**NOTE:** Cassandra-aware policy support is still in beta phase. It is not yet ready
for production use. Additionally, the Cassandra-specific policy language is highly
likely to change in a future Cilium version.

.. include:: gsg_requirements.rst

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly we can deploy
our demo Cassandra application. Since our first `HTTP-aware Cilium Star Wars demo
<https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_ showed how the
Galactic Empire used HTTP-aware security policies to protect the Death Star from the
Rebel Alliance, this Cassandra demo is Star Wars-themed as well.

`Apache Cassandra <http://cassandra.apache.org>`_ is a popular NoSQL database focused
on delivering high-performance transactions (especially on writes) without sacrificing
availability or scale. Cassandra operates as a cluster of servers, and Cassandra
clients query these servers via the `native Cassandra protocol
<https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec>`_.
Cilium understands the Cassandra protocol, and thus is able to provide deep visibility
and control over which clients are able to access particular tables inside a Cassandra
cluster, and which actions (e.g., "select", "insert", "update", "delete") can be
performed on tables.

With Cassandra, each table belongs to a "keyspace", allowing multiple groups to use a
single cluster without conflicting. Cassandra queries specify the full table name
qualified by the keyspace using the syntax "<keyspace>.<table>".

In our simple example, the Empire uses a Cassandra cluster to store two different
types of information:

- **Employee Attendance Records** : Used to store daily attendance data
  (attendance.daily_records).
- **Deathstar Scrum Reports** : Daily scrum reports from the teams working on the
  Deathstar (deathstar.scrum_reports).

To keep the setup small, we will just launch a small number of pods to represent this
setup:

- **cass-server** : A single pod running the Cassandra service, representing a
  Cassandra cluster (label app=cass-server).
- **empire-hq** : A pod representing the Empire's Headquarters, which is the only pod
  that should be able to read all attendance data, or read/write the Deathstar scrum
  notes (label app=empire-hq).
- **empire-outpost** : A random outpost in the empire. It should be able to insert
  employee attendance records, but not read records for other empire facilities. It
  also should not have any access to the deathstar keyspace (label app=empire-outpost).

All pods other than *cass-server* are Cassandra clients, which need access to the
*cass-server* container on TCP port 9042 in order to send Cassandra protocol messages.

.. image:: images/cilium_cass_gsg_topology.png

The file ``cass-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods
described above, as well as a Kubernetes Service *cassandra-svc* for the Cassandra
cluster.

..
parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml
    deployment.apps/cass-server created
    service/cassandra-svc created
    deployment.apps/empire-hq created
    deployment.apps/empire-outpost created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get svc,pods`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running`` at which
point the setup is ready.

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/cassandra-svc   ClusterIP   None         <none>        9042/TCP   1m
    service/kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    15h

    NAME                                  READY     STATUS    RESTARTS   AGE
    pod/cass-server-5674d5b946-x8v4j      1/1       Running   0          1m
    pod/empire-hq-c494c664d-xmvdl         1/1       Running   0          1m
    pod/empire-outpost-68bf76858d-flczn   1/1       Running   0          1m

Step 3: Test Basic Cassandra Access
===================================

First, we'll create the keyspaces and tables mentioned above, and populate them with
some initial data:

.. parsed-literal::

    $ curl -s \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-populate-tables.sh | bash

Next, create two environment variables that refer to the *empire-hq* and
*empire-outpost* pods:

.. code-block:: shell-session

    $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}')
    $ OUTPOST_POD=$(kubectl get pods -l app=empire-outpost -o jsonpath='{.items[0].metadata.name}')

Now we will run the 'cqlsh' Cassandra client in the *empire-outpost* pod, telling it
to access the Cassandra cluster identified by the 'cassandra-svc' DNS name:

.. code-block:: shell-session

    $ kubectl exec -it $OUTPOST_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

Next, using the cqlsh prompt, we'll show that the outpost can add records to the
"daily_records" table in the "attendance" keyspace:

.. code-block:: shell-session

    cqlsh> INSERT INTO attendance.daily_records (creation, loc_id, present, empire_member_id) values (now(), 074AD3B9-A47D-4EBC-83D3-CAD75B1911CE, true, 6AD3139F-EBFC-4E0C-9F79-8F997BA01D90);

We have confirmed that outposts are able to report daily attendance records as
intended. We're off to a good start!

The Danger of a Compromised Cassandra Client
============================================

But what if a rebel spy gains access to any of the remote outposts that act as a
Cassandra client? Since every client has access to the Cassandra API on port 9042,
it can do some bad stuff. For starters, the outpost container can not only add
entries to the attendance.daily_records table, but it could read all entries as well.

To see this, we can run the following command:

..
code-block:: shell-session

    $ cqlsh> SELECT * FROM attendance.daily_records;

     loc_id                               | creation                             | empire_member_id                     | present
    --------------------------------------+--------------------------------------+--------------------------------------+---------
     a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True
     5b9a7990-657e-442d-a3f7-94484f06696e | c8493120-bf57-11e8-98e6-f1a9f45fc4d8 | e74a0300-94f3-4b3d-aee4-fea85eca5af7 |    True
     53ed94d0-ddac-4b14-8c2f-ba6f83a8218c | c641a150-bf57-11e8-98e6-f1a9f45fc4d8 | 104ddbb6-f2f7-4cd0-8683-cc18cccc1326 |    True
     074ad3b9-a47d-4ebc-83d3-cad75b1911ce | 9674ed40-bf59-11e8-98e6-f1a9f45fc4d8 | 6ad3139f-ebfc-4e0c-9f79-8f997ba01d90 |    True
     fe72cc39-dffb-45dc-8e5f-86c674a58951 | c5e79a70-bf57-11e8-98e6-f1a9f45fc4d8 | 6782689c-0488-4ecb-b582-a2ccd282405e |    True
     461f4176-eb4c-4bcc-a08a-46787ca01af3 | c6fefde0-bf57-11e8-98e6-f1a9f45fc4d8 | 01009199-3d6b-4041-9c43-b1ca9aef021c |    True
     64dbf608-6947-4a23-98e9-63339c413136 | c8096900-bf57-11e8-98e6-f1a9f45fc4d8 | 6ffe024e-beff-4370-a1b5-dcf6330ec82b |    True
     13cefcac-5652-4c69-a3c2-1484671f2467 | c53f4c80-bf57-11e8-98e6-f1a9f45fc4d8 | 55218adc-2f3d-4f84-a693-87a2c238bb26 |    True
     eabf5185-376b-4d4a-a5b5-99f912d98279 | c593fc30-bf57-11e8-98e6-f1a9f45fc4d8 | 5e22159b-f3a9-4f8a-9944-97375df570e9 |    True
     3c0ae2d1-c836-4aa4-8fe2-5db6cc1f92fc | c7af1400-bf57-11e8-98e6-f1a9f45fc4d8 | 0ccb3df7-78d0-4434-8a7f-4bfa8d714275 |    True
     31a292e0-2e28-4a7d-8c84-8d4cf0c57483 | c4e0d8d0-bf57-11e8-98e6-f1a9f45fc4d8 | 8fe7625c-f482-4eb6-b33e-271440777403 |    True

    (11 rows)

Uh oh! The rebels now have strategic information about empire troop strengths at each
location in the galaxy.

But even more nasty from a security perspective is that the outpost container can also
access information in any keyspace, including the deathstar keyspace. For example, run:

.. code-block:: shell-session

    $ cqlsh> SELECT * FROM deathstar.scrum_notes;

     empire_member_id                     | content                                                                                                          | creation
    --------------------------------------+------------------------------------------------------------------------------------------------------------------+--------------------------------------
     34e564c2-781b-477e-acd0-b357d67f94f2 | Designed protective shield for deathstar.  Could be based on nearby moon.  Feature punted to v2.  Not blocked.   | c3c8b210-bf57-11e8-98e6-f1a9f45fc4d8
     dfa974ea-88cd-4e9b-85e3-542b9d00e2df | I think the exhaust port could be vulnerable to a direct hit.  Hope no one finds out about it.  Not blocked.     | c37f4d00-bf57-11e8-98e6-f1a9f45fc4d8
     ee12306a-7b44-46a4-ad68-42e86f0f111e | Trying to figure out if we should paint it medium grey, light grey, or medium-light grey.  Not blocked.          | c32daa90-bf57-11e8-98e6-f1a9f45fc4d8

    (3 rows)

We see that any outpost can actually access the deathstar scrum notes, which mentions
a pretty serious issue with the exhaust port.

Securing Access to Cassandra with Cilium
========================================

Obviously, it would be much more secure to limit each pod's access to the Cassandra
server to be least privilege (i.e., only what is needed for the app to operate
correctly and nothing more).

We can do that with the following Cilium security policy. As with Cilium HTTP
policies, we can write policies that identify pods by labels, and then limit the
traffic in/out of this pod. In this case, we'll create a policy that identifies the
tables that each client should be able to access, the actions that are allowed on
those tables, and deny the rest.
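To give a sense of what such a rule looks like before we walk through the full policy,
here is an abbreviated, hand-written sketch of a Cassandra-aware ingress rule. Treat it
as illustrative only: the ``l7proto``/``l7`` fields follow Cilium's generic L7 rule
format used by the beta Cassandra support, and the complete, authoritative policy is
the ``cass-sw-security-policy.yaml`` file included below.

.. code-block:: yaml

    # Abbreviated sketch, not the full policy: allow the outpost to read the
    # "system" table cqlsh needs at startup and to insert attendance records,
    # and nothing else. The real policy also covers the other system and
    # system_schema tables and adds a separate rule for empire-hq.
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: secure-empire-cassandra
    specs:
      - endpointSelector:
          matchLabels:
            app: cass-server
        ingress:
        - fromEndpoints:
          - matchLabels:
              app: empire-outpost
          toPorts:
          - ports:
            - port: "9042"
              protocol: TCP
            rules:
              l7proto: cassandra
              l7:
              - query_action: "select"
                query_table: "system.local"
              - query_action: "insert"
                query_table: "attendance.daily_records"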
As an example, a policy could limit containers with label *app=empire-outpost* to only
be able to insert entries into the table "attendance.daily_records", but would block
any attempt by a compromised outpost to read all attendance information or access
other keyspaces.

.. image:: images/cilium_cass_gsg_attack.png

Here is the *CiliumNetworkPolicy* rule that limits access of pods with label
*app=empire-outpost* to only insert records into "attendance.daily_records":

.. literalinclude:: ../../examples/kubernetes-cassandra/cass-sw-security-policy.yaml

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning
that requests that do not match any rules are denied as invalid.

The rule explicitly matches Cassandra connections destined to TCP 9042 on cass-server
pods, and allows query actions like select/insert/update/delete only on a specified
set of tables. The above rule applies to inbound (i.e., "ingress") connections to
cass-server pods (as indicated by "app:cass-server" in the "endpointSelector" section).
The policy applies different rules depending on whether the client pod has labels
"app: empire-outpost" or "app: empire-hq", as indicated by the "fromEndpoints" section.

The policy limits the *empire-outpost* pod to performing "select" queries on the
"system" and "system_schema" keyspaces (required by cqlsh on startup) and "insert"
queries to the "attendance.daily_records" table.

The full policy adds another rule that allows all queries from the *empire-hq* pod.

Apply this Cassandra-aware network security policy using ``kubectl`` in a new window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-security-policy.yaml

If we then again try to perform the attacks from the *empire-outpost* pod, we'll see
that they are denied:

.. code-block:: shell-session

    $ cqlsh> SELECT * FROM attendance.daily_records;
    Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is because the policy only permits pods with labels app: empire-outpost to insert
into attendance.daily_records; it does not permit select on that table, or any action
on other tables (with the exception of the system.* and system_schema.* keyspaces).
It's worth noting that we don't simply drop the message (which could easily be
confused with a network error), but rather we respond with the Cassandra Unauthorized
error message (similar to how HTTP would return a 403 Unauthorized error).

Likewise, if the outpost pod ever tries to access a table in another keyspace, like
deathstar, this request will also be denied:

.. code-block:: shell-session

    $ cqlsh> SELECT * FROM deathstar.scrum_notes;
    Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is blocked as well, thanks to the Cilium network policy. Use another window to
confirm that the *empire-hq* pod still has full access to the Cassandra cluster:

.. code-block:: shell-session

    $ kubectl exec -it $HQ_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

The power of Cilium's identity-based security allows *empire-hq* to still have full
access to both tables:

..
code-block:: shell-session

    $ cqlsh> SELECT * FROM attendance.daily_records;

     loc_id                               | creation                             | empire_member_id                     | present
    --------------------------------------+--------------------------------------+--------------------------------------+---------
     a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True
    <snip>
    (12 rows)

Similarly, *empire-hq* can still access the deathstar scrum notes:

.. code-block:: shell-session

    $ cqlsh> SELECT * FROM deathstar.scrum_notes;
    <snip>
    (3 rows)

Cassandra-Aware Visibility (Bonus)
==================================

As a bonus, you can re-run the above queries with policy enforced and view how Cilium
provides Cassandra-aware visibility, including whether requests are forwarded or
denied. First, use "kubectl exec" to access the Cilium pod.

.. code-block:: shell-session

    $ CILIUM_POD=$(kubectl get pods -n kube-system -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
    $ kubectl exec -it -n kube-system $CILIUM_POD -- /bin/bash
    root@minikube:~#

Next, start Cilium monitor, and limit the output to only "l7" type messages using the
"-t" flag:

::

    root@minikube:~# cilium-dbg monitor -t l7
    Listening for events on 2 CPUs with 64x4096 of shared memory
    Press Ctrl-C to quit

In the other windows, re-run the above queries, and you will see that Cilium provides
full visibility at the level of each Cassandra request, indicating:

- The Kubernetes label-based identity of both the sending and receiving pod.
- The details of the Cassandra request, including the 'query_action' (e.g., 'select',
  'insert') and 'query_table' (e.g., 'system.local', 'attendance.daily_records').
- The 'verdict' indicating whether the request was allowed by policy ('Forwarded' or
  'Denied').

Example output is below. All requests are from *empire-outpost* to *cass-server*. The
first two requests are allowed, a 'select' into 'system.local' and an 'insert' into
'attendance.daily_records'.
The second two requests are denied, a 'select' into 'attendance.daily_records' and a select into 'deathstar.scrum_notes' : :: <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_table:system.local query_action:selec <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_action:insert query_table:attendance.daily_records <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_action:select query_table:attendance.daily_records <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_table:deathstar.scrum_notes query_action:select Clean Up ======== You have now installed Cilium, deployed a demo app, and tested L7 Cassandra-aware network security policies. To clean up, run: .. parsed-literal:: $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml $ kubectl delete cnp secure-empire-cassandra After this, you can re-run the tutorial from Step 1.
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. _gs_kafka: ************************ Securing a Kafka Cluster ************************ This document serves as an introduction to using Cilium to enforce Kafka-aware security policies. It is a detailed walk-through of getting a single-node Cilium environment running on your machine. It is designed to take 15-30 minutes. .. include:: gsg_requirements.rst Deploy the Demo Application =========================== Now that we have Cilium deployed and ``kube-dns`` operating correctly we can deploy our demo Kafka application. Since our first demo of Cilium + HTTP-aware security policies was Star Wars-themed we decided to do the same for Kafka. While the `HTTP-aware Cilium Star Wars demo <https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_ showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the Rebel Alliance, this Kafka demo shows how the lack of Kafka-aware security policies allowed the Rebels to steal the Death Star plans in the first place. Kafka is a powerful platform for passing datastreams between different components of an application. A cluster of "Kafka brokers" connect nodes that "produce" data into a data stream, or "consume" data from a datastream. Kafka refers to each datastream as a "topic". Because scalable and highly-available Kafka clusters are non-trivial to run, the same cluster of Kafka brokers often handles many different topics at once (read this `Introduction to Kafka <https://kafka.apache.org/intro>`_ for more background). In our simple example, the Empire uses a Kafka cluster to handle two different topics: - *empire-announce* : Used to broadcast announcements to sites spread across the galaxy - *deathstar-plans* : Used by a small group of sites coordinating on building the ultimate battlestation. To keep the setup small, we will just launch a small number of pods to represent this setup: - *kafka-broker* : A single pod running Kafka and Zookeeper representing the Kafka cluster (label app=kafka). - *empire-hq* : A pod representing the Empire's Headquarters, which is the only pod that should produce messages to *empire-announce* or *deathstar-plans* (label app=empire-hq). - *empire-backup* : A secure backup facility located in `Scarif <https://starwars.fandom.com/wiki/Scarif_vault>`_ , which is allowed to "consume" from the secret *deathstar-plans* topic (label app=empire-backup). - *empire-outpost-8888* : A random outpost in the empire. It needs to "consume" messages from the *empire-announce* topic (label app=empire-outpost). - *empire-outpost-9999* : Another random outpost in the empire that "consumes" messages from the *empire-announce* topic (label app=empire-outpost). All pods other than *kafka-broker* are Kafka clients, which need access to the *kafka-broker* container on TCP port 9092 in order to send Kafka protocol messages. .. image:: images/cilium_kafka_gsg_topology.png The file ``kafka-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods described above, as well as a Kubernetes Service for both Kafka and Zookeeper. .. 
parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-kafka/kafka-sw-app.yaml
    deployment "kafka-broker" created
    deployment "zookeeper" created
    service "zook" created
    service "kafka-service" created
    deployment "empire-hq" created
    deployment "empire-outpost-8888" created
    deployment "empire-outpost-9999" created
    deployment "empire-backup" created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get svc,pods`` will inform you about the progress of the operation. Each pod
will go through several states until it reaches ``Running`` at which point the setup
is ready.

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    kafka-service   ClusterIP   None            <none>        9092/TCP   2m
    kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    10m
    zook            ClusterIP   10.97.250.131   <none>        2181/TCP   2m

    NAME                                   READY     STATUS    RESTARTS   AGE
    empire-backup-6f4567d5fd-gcrvg         1/1       Running   0          2m
    empire-hq-59475b4b64-mrdww             1/1       Running   0          2m
    empire-outpost-8888-78dffd49fb-tnnhf   1/1       Running   0          2m
    empire-outpost-9999-7dd9fc5f5b-xp6jw   1/1       Running   0          2m
    kafka-broker-b874c78fd-jdwqf           1/1       Running   0          2m
    zookeeper-85f64b8cd4-nprck             1/1       Running   0          2m

Setup Client Terminals
======================

First, we will open a set of windows to represent the different Kafka clients
discussed above. For consistency, we recommend opening them in the pattern shown in
the image below, but this is optional.

.. image:: images/cilium_kafka_gsg_terminal_layout.png

In each window, use copy-paste to have each terminal provide a shell inside each pod.

empire-hq terminal:

.. code-block:: shell-session

    $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $HQ_POD -- sh -c "PS1=\"empire-hq $\" /bin/bash"

empire-backup terminal:

.. code-block:: shell-session

    $ BACKUP_POD=$(kubectl get pods -l app=empire-backup -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $BACKUP_POD -- sh -c "PS1=\"empire-backup $\" /bin/bash"

outpost-8888 terminal:

.. code-block:: shell-session

    $ OUTPOST_8888_POD=$(kubectl get pods -l outpostid=8888 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_8888_POD -- sh -c "PS1=\"outpost-8888 $\" /bin/bash"

outpost-9999 terminal:

.. code-block:: shell-session

    $ OUTPOST_9999_POD=$(kubectl get pods -l outpostid=9999 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_9999_POD -- sh -c "PS1=\"outpost-9999 $\" /bin/bash"

Test Basic Kafka Produce & Consume
==================================

First, let's start the consumer clients listening to their respective Kafka topics.
All of the consumer commands below will hang intentionally, waiting to print data they
consume from the Kafka topic:

In the *empire-backup* window, start listening on the top-secret *deathstar-plans*
topic:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic deathstar-plans

In the *outpost-8888* window, start listening to *empire-announce*:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic empire-announce

Do the same in the *outpost-9999* window:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic empire-announce

Now from the *empire-hq*, first produce a message to the *empire-announce* topic:

.. code-block:: shell-session

    $ echo "Happy 40th Birthday to General Tagge" | ./kafka-produce.sh --topic empire-announce

This message will be posted to the *empire-announce* topic, and will show up in both
the *outpost-8888* and *outpost-9999* windows, which consume that topic. It will not
show up in *empire-backup*.
*empire-hq* can also post a version of the top-secret deathstar plans to the *deathstar-plans* topic: .. code-block:: shell-session $ echo "deathstar reactor design v3" | ./kafka-produce.sh --topic deathstar-plans This message shows up in the *empire-backup* window, but not for the outposts. Congratulations, Kafka is working as expected :) The Danger of a Compromised Kafka Client ======================================== But what if a rebel spy gains access to any of the remote outposts that act as Kafka clients? Since every client has access to the Kafka broker on port 9092, it can do some bad stuff. For starters, the outpost container can actually switch roles from a consumer to a producer, sending "malicious" data to all other consumers on the topic. To prove this, kill the existing ``kafka-consume.sh`` command in the outpost-9999 window by typing control-C and instead run: .. code-block:: shell-session $ echo "Vader Booed at Empire Karaoke Party" | ./kafka-produce.sh --topic empire-announce Uh oh! Outpost-8888 and all of the other outposts in the empire have now received this fake announcement. But even more nasty from a security perspective is that the outpost container can access any topic on the kafka-broker. In the outpost-9999 container, run: .. code-block:: shell-session $ ./kafka-consume.sh --topic deathstar-plans "deathstar reactor design v3" We see that any outpost can actually access the secret deathstar plans. Now we know how the rebels got access to them! Securing Access to Kafka with Cilium ==================================== Obviously, it would be much more secure to limit each pod's access to the Kafka broker to be least privilege (i.e., only what is needed for the app to operate correctly and nothing more). We can do that with the following Cilium security policy. As with Cilium HTTP policies, we can write policies that identify pods by labels, and then limit the traffic in/out of this pod. In this case, we'll create a policy that identifies the exact traffic that should be allowed to reach the Kafka broker, and deny the rest. As an example, a policy could limit containers with label *app=empire-outpost* to only be able to consume topic *empire-announce*, but would block any attempt by a compromised container (e.g., empire-outpost-9999) from producing to *empire-announce* or consuming from *deathstar-plans*. .. image:: images/cilium_kafka_gsg_attack.png Here is the *CiliumNetworkPolicy* rule that limits access of pods with label *app=empire-outpost* to only consume on topic *empire-announce*: .. literalinclude:: ../../examples/policies/getting-started/kafka.yaml A *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning that requests that do not match any rules are denied as invalid. The above rule applies to inbound (i.e., "ingress") connections to kafka-broker pods (as indicated by "app: kafka" in the "endpointSelector" section). The rule will apply to connections from pods with label "app: empire-outpost" as indicated by the "fromEndpoints" section. The rule explicitly matches Kafka connections destined to TCP 9092, and allows consume/produce actions on various topics of interest. For example we are allowing *consume* from topic *empire-announce* in this case. The full policy adds two additional rules that permit the legitimate "produce" (topic *empire-announce* and topic *deathstar-plans*) from *empire-hq* and the legitimate consume (topic = "deathstar-plans") from *empire-backup*. 
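As an illustration of the second of those rules, here is a hand-written sketch of
roughly what the *empire-backup* entry looks like. It is abbreviated (a single entry
in the policy's rule list), so treat the ``kafka-sw-security-policy.yaml`` file
applied below as the authoritative version.

.. code-block:: yaml

    # Abbreviated sketch: allow empire-backup to consume only the
    # deathstar-plans topic on the Kafka broker port.
    - endpointSelector:
        matchLabels:
          app: kafka
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: empire-backup
        toPorts:
        - ports:
          - port: "9092"
            protocol: TCP
          rules:
            kafka:
            - role: "consume"
              topic: "deathstar-plans"

The ``role`` field is a convenience that stands in for the set of Kafka API keys a
consumer (or producer) needs, so the policy does not have to list individual API keys.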
The full policy can be reviewed by opening the URL in the command below in a browser.
Apply this Kafka-aware network security policy using ``kubectl`` in the main window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-kafka/kafka-sw-security-policy.yaml

If we then again try to produce a message from outpost-9999 to *empire-announce*, it
is denied. Type control-C and then run:

.. code-block:: shell-session

    $ echo "Vader Trips on His Own Cape" | ./kafka-produce.sh --topic empire-announce
    >>[2018-04-10 23:50:34,638] ERROR Error when sending message to topic empire-announce with key: null, value: 27 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [empire-announce]

This is because the policy does not allow messages with role = "produce" for topic
"empire-announce" from containers with label app = empire-outpost. It's worth noting
that we don't simply drop the message (which could easily be confused with a network
error), but rather we respond with the Kafka access denied error (similar to how HTTP
would return a 403 Unauthorized error).

Likewise, if the outpost container ever tries to consume from topic *deathstar-plans*,
it is denied, as role = consume is only allowed for topic *empire-announce*.

To test, from the outpost-9999 terminal, run:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic deathstar-plans
    [2018-04-10 23:51:12,956] WARN Error while fetching metadata with correlation id 2 : {deathstar-plans=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

This is blocked as well, thanks to the Cilium network policy. Imagine how different
things would have been if the empire had been using Cilium from the beginning!

Clean Up
========

You have now installed Cilium, deployed a demo app, and tested L7 Kafka-aware network
security policies. To clean up, run:

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-kafka/kafka-sw-app.yaml
    $ kubectl delete cnp secure-empire-kafka

After this, you can re-run the tutorial from Step 1.
cilium
only not epub or latex or html WARNING You are looking at unreleased Cilium documentation Please use the official rendered version released here https docs cilium io gs kafka Securing a Kafka Cluster This document serves as an introduction to using Cilium to enforce Kafka aware security policies It is a detailed walk through of getting a single node Cilium environment running on your machine It is designed to take 15 30 minutes include gsg requirements rst Deploy the Demo Application Now that we have Cilium deployed and kube dns operating correctly we can deploy our demo Kafka application Since our first demo of Cilium HTTP aware security policies was Star Wars themed we decided to do the same for Kafka While the HTTP aware Cilium Star Wars demo https cilium io blog 2017 5 4 demo may the force be with you showed how the Galactic Empire used HTTP aware security policies to protect the Death Star from the Rebel Alliance this Kafka demo shows how the lack of Kafka aware security policies allowed the Rebels to steal the Death Star plans in the first place Kafka is a powerful platform for passing datastreams between different components of an application A cluster of Kafka brokers connect nodes that produce data into a data stream or consume data from a datastream Kafka refers to each datastream as a topic Because scalable and highly available Kafka clusters are non trivial to run the same cluster of Kafka brokers often handles many different topics at once read this Introduction to Kafka https kafka apache org intro for more background In our simple example the Empire uses a Kafka cluster to handle two different topics empire announce Used to broadcast announcements to sites spread across the galaxy deathstar plans Used by a small group of sites coordinating on building the ultimate battlestation To keep the setup small we will just launch a small number of pods to represent this setup kafka broker A single pod running Kafka and Zookeeper representing the Kafka cluster label app kafka empire hq A pod representing the Empire s Headquarters which is the only pod that should produce messages to empire announce or deathstar plans label app empire hq empire backup A secure backup facility located in Scarif https starwars fandom com wiki Scarif vault which is allowed to consume from the secret deathstar plans topic label app empire backup empire outpost 8888 A random outpost in the empire It needs to consume messages from the empire announce topic label app empire outpost empire outpost 9999 Another random outpost in the empire that consumes messages from the empire announce topic label app empire outpost All pods other than kafka broker are Kafka clients which need access to the kafka broker container on TCP port 9092 in order to send Kafka protocol messages image images cilium kafka gsg topology png The file kafka sw app yaml contains a Kubernetes Deployment for each of the pods described above as well as a Kubernetes Service for both Kafka and Zookeeper parsed literal kubectl create f SCM WEB examples kubernetes kafka kafka sw app yaml deployment kafka broker created deployment zookeeper created service zook created service kafka service created deployment empire hq created deployment empire outpost 8888 created deployment empire outpost 9999 created deployment empire backup created Kubernetes will deploy the pods and service in the background Running kubectl get svc pods will inform you about the progress of the operation Each pod will go through several states until it reaches Running at which 
point the setup is ready:

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    kafka-service   ClusterIP   None            <none>        9092/TCP   2m
    kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    10m
    zook            ClusterIP   10.97.250.131   <none>        2181/TCP   2m

    NAME                                   READY     STATUS    RESTARTS   AGE
    empire-backup-6f4567d5fd-gcrvg         1/1       Running   0          2m
    empire-hq-59475b4b64-mrdww             1/1       Running   0          2m
    empire-outpost-8888-78dffd49fb-tnnhf   1/1       Running   0          2m
    empire-outpost-9999-7dd9fc5f5b-xp6jw   1/1       Running   0          2m
    kafka-broker-b874c78fd-jdwqf           1/1       Running   0          2m
    zookeeper-85f64b8cd4-nprck             1/1       Running   0          2m

Setup Client Terminals
======================

First we will open a set of windows to represent the different Kafka clients discussed above. For consistency, we recommend opening them in the pattern shown in the image below, but this is optional.

.. image:: images/cilium_kafka_gsg_terminal_layout.png

In each window, use copy-paste to have each terminal provide a shell inside each pod.

``empire-hq`` terminal:

.. code-block:: shell-session

    $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $HQ_POD -- sh -c "PS1=\"empire-hq $\" /bin/bash"

``empire-backup`` terminal:

.. code-block:: shell-session

    $ BACKUP_POD=$(kubectl get pods -l app=empire-backup -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $BACKUP_POD -- sh -c "PS1=\"empire-backup $\" /bin/bash"

``outpost-8888`` terminal:

.. code-block:: shell-session

    $ OUTPOST_8888_POD=$(kubectl get pods -l outpostid=8888 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_8888_POD -- sh -c "PS1=\"outpost-8888 $\" /bin/bash"

``outpost-9999`` terminal:

.. code-block:: shell-session

    $ OUTPOST_9999_POD=$(kubectl get pods -l outpostid=9999 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_9999_POD -- sh -c "PS1=\"outpost-9999 $\" /bin/bash"

Test Basic Kafka Produce & Consume
==================================

First, let's start the consumer clients listening to their respective Kafka topics. All of the consumer commands below will hang intentionally, waiting to print data they consume from the Kafka topic.

In the ``empire-backup`` window, start listening on the top-secret ``deathstar-plans`` topic:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic deathstar-plans

In the ``outpost-8888`` window, start listening to ``empire-announce``:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic empire-announce

Do the same in the ``outpost-9999`` window:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic empire-announce

Now, from ``empire-hq``, first produce a message to the ``empire-announce`` topic:

.. code-block:: shell-session

    $ echo "Happy 40th Birthday to General Tagge" | ./kafka-produce.sh --topic empire-announce

This message will be posted to the ``empire-announce`` topic, and shows up in both the ``outpost-8888`` and ``outpost-9999`` windows, which consume that topic. It will not show up in ``empire-backup``.

``empire-hq`` can also post a version of the top-secret deathstar plans to the ``deathstar-plans`` topic:

.. code-block:: shell-session

    $ echo "deathstar reactor design v3" | ./kafka-produce.sh --topic deathstar-plans

This message shows up in the ``empire-backup`` window, but not for the outposts.

Congratulations, Kafka is working as expected!

The Danger of a Compromised Kafka Client
========================================

But what if a rebel spy gains access to any of the remote outposts that act as Kafka clients? Since every client has access to the Kafka broker on port 9092, it can do some bad stuff. For starters, the outpost container can actually switch roles from a consumer to a producer, sending malicious data to all other consumers on the topic.

To prove this, kill the existing ``kafka-consume.sh`` command in the ``outpost-9999`` window by typing control-C, and instead run:

.. code-block:: shell-session

    $ echo "Vader Booed at Empire Karaoke Party" | ./kafka-produce.sh --topic empire-announce

Uh oh! Outpost 8888 and all of the other outposts in the empire have now received this fake announcement.

But even more nasty from a security perspective is that the outpost container can access any topic on the Kafka broker. In the ``outpost-9999`` container, run:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic deathstar-plans
    "deathstar reactor design v3"

We see that any outpost can actually access the secret deathstar plans. Now we know how the rebels got access to them!

Securing Access to Kafka with Cilium
====================================

Obviously, it would be much more secure to limit each pod's access to the Kafka broker to be least privilege (i.e. only what is needed for the app to operate correctly, and nothing more).

We can do that with the following Cilium security policy. As with Cilium HTTP policies, we can write policies that identify pods by labels, and then limit the traffic in/out of this pod. In this case, we'll create a policy that identifies the exact traffic that should be allowed to reach the Kafka broker, and deny the rest.

As an example, a policy could limit containers with label ``app=empire-outpost`` to only be able to consume topic ``empire-announce``, but would block any attempt by a compromised container (e.g. ``empire-outpost-9999``) from producing to ``empire-announce`` or consuming from ``deathstar-plans``.

.. image:: images/cilium_kafka_gsg_attack.png

Here is the CiliumNetworkPolicy rule that limits access of pods with label ``app=empire-outpost`` to only consume on topic ``empire-announce``:

.. literalinclude:: examples/policies/getting-started/kafka.yaml

A CiliumNetworkPolicy contains a list of rules that define allowed requests, meaning that requests that do not match any rules are denied as invalid.

The above rule applies to inbound (i.e. "ingress") connections to ``kafka-broker`` pods, as indicated by ``app: kafka`` in the ``endpointSelector`` section. The rule will apply to connections from pods with label ``app: empire-outpost``, as indicated by the ``fromEndpoints`` section. The rule explicitly matches Kafka connections destined to TCP 9092, and allows consume/produce actions on various topics of interest. For example, we are allowing *consume* from topic ``empire-announce`` in this case.

The full policy adds two additional rules that permit the legitimate "produce" (topic ``empire-announce`` and topic ``deathstar-plans``) from ``empire-hq``, and the legitimate "consume" (topic ``deathstar-plans``) from ``empire-backup``. The full policy can be reviewed by opening the URL in the command below in a browser.

Apply this Kafka-aware network security policy using ``kubectl`` in the main window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\ /examples/kubernetes-kafka/kafka-sw-security-policy.yaml

If we then again try to produce a message from ``outpost-9999`` to ``empire-announce``, it is denied. Type control-C and then run:

.. code-block:: shell-session

    $ echo "Vader Trips on His Own Cape" | ./kafka-produce.sh --topic empire-announce
    [2018-04-10 23:50:34,638] ERROR Error when sending message to topic empire-announce with key: null, value: 27 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [empire-announce]

This is because the policy does not allow messages with role "produce" for topic ``empire-announce`` from containers with label ``app=empire-outpost``. It's worth noting that we don't simply drop the message (which could easily be confused with a network error), but rather we respond with the Kafka "access denied" error, similar to how HTTP would return an error code of "403 unauthorized".

Likewise, if the outpost container ever tries to consume from topic ``deathstar-plans``, it is denied, as role "consume" is only allowed for topic ``empire-announce``. To test this, from the ``outpost-9999`` terminal, run:

.. code-block:: shell-session

    $ ./kafka-consume.sh --topic deathstar-plans
    [2018-04-10 23:51:12,956] WARN Error while fetching metadata with correlation id 2 : {deathstar-plans=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

This is blocked as well, thanks to the Cilium network policy. Imagine how different things would have been if the empire had been using Cilium from the beginning!

Clean Up
========

You have now installed Cilium, deployed a demo app, and tested L7 Kafka-aware network security policies. To clean up, run:

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\ /examples/kubernetes-kafka/kafka-sw-app.yaml
    $ kubectl delete cnp secure-empire-kafka

After this, you can re-run the tutorial from Step 1.
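For readers who do not have the referenced ``kafka.yaml`` file at hand, the sketch below illustrates roughly what the single ingress rule described above (outpost pods may only *consume* from ``empire-announce`` on TCP 9092) could look like. This is a minimal, non-authoritative sketch: the metadata name is hypothetical, and the YAML file included via ``literalinclude`` remains the source of truth.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      # Hypothetical name for illustration; the policy shipped with the demo may differ.
      name: secure-empire-kafka-sketch
    spec:
      endpointSelector:
        matchLabels:
          app: kafka                  # the rule is enforced at the Kafka broker pods
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: empire-outpost       # applies to connections coming from outpost pods
        toPorts:
        - ports:
          - port: "9092"
            protocol: TCP
          rules:
            kafka:
            - role: "consume"
              topic: "empire-announce"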
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io Threat Model ============ This section presents a threat model for Cilium. This threat model allows interested parties to understand: - security-specific implications of Cilium's architecture - controls that are in place to secure data flowing through Cilium's various components - recommended controls for running Cilium in a production environment Scope and Prerequisites ----------------------- This threat model considers the possible attacks that could affect an up-to-date version of Cilium running in a production environment; it will be refreshed when there are significant changes to Cilium's architecture or security posture. This model does not consider supply-chain attacks, such as attacks where a malicious contributor is able to intentionally inject vulnerable code into Cilium. For users who are concerned about supply-chain attacks, Cilium's `security audit`_ assessed Cilium's supply chain controls against `the SLSA framework`_. In order to understand the following threat model, readers will need familiarity with basic Kubernetes concepts, as well as a high-level understanding of Cilium's :ref:`architecture and components<component_overview>`. .. _security audit: https://github.com/cilium/cilium.io/blob/main/Security-Reports/CiliumSecurityAudit2022.pdf .. _the SLSA framework: https://slsa.dev/ Methodology ----------- This threat model considers eight different types of threat actors, placed at different parts of a typical deployment stack. We will primarily use Kubernetes as an example but the threat model remains accurate if deployed with other orchestration systems, or when running Cilium outside of Kubernetes. The attackers will have different levels of initial privileges, giving us a broad overview of the security guarantees that Cilium can provide depending on the nature of the threat and the extent of a previous compromise. For each threat actor, this guide uses the `the STRIDE methodology`_ to assess likely attacks. Where one attack type in the STRIDE set can lead to others (for example, tampering leading to denial of service), we have described the attack path under the most impactful attack type. For the potential attacks that we identify, we recommend controls that can be used to reduce the risk of the identified attacks compromising a cluster. Applying the recommended controls is strongly advised in order to run Cilium securely in production. .. _the STRIDE methodology: https://en.wikipedia.org/wiki/STRIDE_(security) Reference Architecture ---------------------- For ease of understanding, consider a single Kubernetes cluster running Cilium, as illustrated below: .. image:: images/cilium_threat_model_reference_architecture.png The Threat Surface ~~~~~~~~~~~~~~~~~~ In the above scenario, the aim of Cilium's security controls is to ensure that all the components of the Cilium platform are operating correctly, to the extent possible given the abilities of the threat actor that Cilium is faced with. 
The key components that need to be protected are: - the Cilium agent running on a node, either as a Kubernetes pod, a host process, or as an entire virtual machine - Cilium state (either stored via CRDs or via an external key-value store like etcd) - eBPF programs loaded by Cilium into the kernel - network packets managed by Cilium - observability data collected by Cilium and stored by Hubble The Threat Model ---------------- For each type of attacker, we consider the plausible types of attacks available to them, how Cilium can be used to protect against these attacks, as well as the security controls that Cilium provides. For attacks which might arise as a consequence of the high level of privileges required by Cilium, we also suggest mitigations that users should apply to secure their environments. .. _kubernetes-workload-attacker: Kubernetes Workload Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For the first scenario, consider an attacker who has been able to gain access to a Kubernetes pod, and is now able to run arbitrary code inside a container. This could occur, for example, if a vulnerable service is exposed externally to a network. In this case, let us also assume that the compromised pod does not have any elevated privileges (in Kubernetes or on the host) or direct access to host files. .. image:: images/cilium_threat_model_workload.png In this scenario, there is no potential for compromise of the Cilium stack; in fact, Cilium provides several features that would allow users to limit the scope of such an attack: .. rst-class:: wrapped-table +-----------------+---------------------+--------------------------------+ | Threat surface | Identified STRIDE | Cilium security benefits | | | threats | | +=================+=====================+================================+ | Cilium agent | Potential denial of | Cilium can enforce | | | service if the | `bandwidth limitations`_ | | | compromised | on pods to limit the network | | | | resource utilization. | | | Kubernetes workload | | | | does not have | | | | defined resource | | | | limits. | | +-----------------+---------------------+--------------------------------+ | Cilium | None | | | configuration | | | +-----------------+---------------------+--------------------------------+ | Cilium eBPF | None | | | programs | | | +-----------------+---------------------+--------------------------------+ | Network data | None | - Cilium's network policy can | | | | be used to provide | | | | least-privilege isolation | | | | between Kubernetes | | | | workloads, and between | | | | Kubernetes workloads and | | | | "external" endpoints running | | | | outside the Kubernetes | | | | cluster, or running on the | | | | Kubernetes worker nodes. | | | | Users should ideally define | | | | specific allow rules that | | | | only permit expected | | | | communication between | | | | services. | | | | - Cilium's network | | | | connectivity will prevent an | | | | attacker from observing the | | | | traffic intended for other | | | | workloads, or sending | | | | traffic that "spoofs" the | | | | identity of another pod, | | | | even if transparent | | | | encryption is not in use. | | | | Pods cannot send traffic | | | | that "spoofs" other pods due | | | | to limits on the use of | | | | source IPs and limits on | | | | sending tunneled traffic. 
| +-----------------+---------------------+--------------------------------+ | Observability | None | Cilium's Hubble flow-event | | data | | observability can be used to | | | | provide reliable audit of | | | | the attacker's L3/L4 and L7 | | | | network connectivity. | +-----------------+---------------------+--------------------------------+ .. _bandwidth limitations: https://docs.cilium.io/en/stable/network/kubernetes/bandwidth-manager/ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - Kubernetes workloads should have `defined resource limits`_. This will help in ensuring that Cilium is not starved of resources due to a misbehaving deployment in a cluster. - Cilium can be given prioritized access to system resources either via Kubernetes, cgroups, or other controls. - Runtime security solutions such as `Tetragon`_ should be deployed to ensure that container compromises can be detected as they occur. .. _defined resource limits: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ .. _Tetragon: https://github.com/cilium/tetragon .. _limited-privilege-host-attacker: Limited-privilege Host Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this scenario, the attacker is someone with the ability to run arbitrary code with direct access to the host PID or network namespace (or both), but without "root" privileges that would allow them to disable Cilium components or undermine the eBPF and other kernel state Cilium relies on. This level of access could exist for a variety of reasons, including: - Pods or other containers running in the host PID or network namespace, but not with "root" privileges. This includes ``hostNetwork: true`` and ``hostPID: true`` containers. - Non-"root" SSH or other console access to a node. - A containerized workload that has "escaped" the container namespace but as a non-privileged user. .. image:: images/cilium_threat_model_non_privileged.png In this case, an attacker would be able to bypass some of Cilium's network controls, as described below: .. rst-class:: wrapped-table +-----------------+-------------------------+----------------------------+ | **Threat | **Identified STRIDE | **Cilium security | | surface** | threats** | benefits** | +=================+=========================+============================+ | Cilium agent | - If the non-privileged | | | | attacker is able to | | | | access the container | | | | runtime and Cilium is | | | | running as a | | | | container, the | | | | attacker will be able | | | | to tamper with the | | | | Cilium agent running | | | | on the node. | | | | - Denial of service is | | | | also possible via | | | | spawning workloads | | | | directly on the host. | | +-----------------+-------------------------+----------------------------+ | Cilium | Same as for the Cilium | | | configuration | agent. | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | +-----------------+-------------------------+----------------------------+ | Cilium eBPF | Same as for the Cilium | | | programs | agent. 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | +-----------------+-------------------------+----------------------------+ | Network data | Elevation of | Cilium's network | | | privilege: traffic | connectivity will prevent | | | sent by the attacker | an attacker from observing | | | will no longer be | the traffic intended for | | | subject to Kubernetes | other workloads, or | | | or | sending traffic that | | | container-networked | spoofs the identity of | | | Cilium network | another pod, even if | | | policies. | transparent encryption is | | | :ref:`Host-networked | not in use. | | | Cilium | | | | policies | | | | <host_firewall>` | | | | will continue to | | | | apply. Other traffic | | | | within the cluster | | | | remains unaffected. | | +-----------------+-------------------------+----------------------------+ | Observability | None | Cilium's Hubble flow-event | | data | | observability can be used | | | | to provide reliable audit | | | | of the attacker's L3/L4 | | | | and L7 network | | | | connectivity. Traffic sent | | | | by the attacker will be | | | | attributed to the worker | | | | node, and not to a | | | | specific Kubernetes | | | | workload. | +-----------------+-------------------------+----------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ In addition to the recommended controls against the :ref:`kubernetes-workload-attacker`: - Container images should be regularly patched to reduce the chance of compromise. - Minimal container images should be used where possible. - Host-level privileges should be avoided where possible. - Ensure that the container users do not have access to the underlying container runtime. .. _root-equivalent-host-attacker: Root-equivalent Host Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A "root" privilege host attacker has full privileges to do everything on the local host. This access could exist for several reasons, including: - Root SSH or other console access to the Kubernetes worker node. - A containerized workload that has escaped the container namespace as a privileged user. - Pods running with ``privileged: true`` or other significant capabilities like ``CAP_SYS_ADMIN`` or ``CAP_BPF``. .. image:: images/cilium_threat_model_root.png .. rst-class:: wrapped-table +-------------------+--------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +===================+==================================================+ | Cilium agent | In this situation, all potential attacks covered | | | by STRIDE are possible. Of note: | | | | | | - The attacker would be able to disable eBPF on | | | the node, disabling Cilium's network and | | | runtime visibility and enforcement. All | | | further operations by the attacker will be | | | unlimited and unaudited. | | | - The attacker would be able to observe network | | | connectivity across all workloads on the | | | host. | | | - The attacker can spoof traffic from the node | | | such that it appears to come from pods | | | with any identity. | | | - If the physical network allows ARP poisoning, | | | or if any other attack allows a | | | compromised node to "attract" traffic | | | destined to other nodes, the attacker can | | | potentially intercept all traffic in the | | | cluster, even if this traffic is encrypted | | | using IPsec, since we use a cluster-wide | | | pre-shared key. 
| | | - The attacker can also use Cilium's | | | credentials to :ref:`attack the Kubernetes | | | API server <kubernetes-api-server-attacker>`, | | | as well as Cilium's :ref:`etcd key-value | | | store <kv-store-attacker>` (if in use). | | | - If the compromised node is running the | | | ``cilium-operator`` pod, the attacker | | | would be able to carry out denial of | | | service attacks against other nodes using | | | the ``cilium-operator`` service account | | | credentials found on the node. | +-------------------+ | | Cilium | | | configuration | | +-------------------+ | | Cilium eBPF | | | programs | | +-------------------+ | | Network data | | +-------------------+ | | Observability | | | data | | +-------------------+--------------------------------------------------+ This attack scenario emphasizes the importance of securing Kubernetes nodes, minimizing the permissions available to container workloads, and monitoring for suspicious activity on the node, container, and API server levels. Recommended Controls ^^^^^^^^^^^^^^^^^^^^ In addition to the controls against a :ref:`limited-privilege-host-attacker`: - Workloads with privileged access should be reviewed; privileged access should only be provided to deployments if essential. - Network policies should be configured to limit connectivity to workloads with privileged access. - Kubernetes audit logging should be enabled, with audit logs being sent to a centralized external location for automated review. - Detections should be configured to alert on suspicious activity. - ``cilium-operator`` pods should not be scheduled on nodes that run regular workloads, and should instead be configured to run on control plane nodes. .. _mitm-attacker: Man-in-the-middle Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~ In this scenario, our attacker has access to the underlying network between Kubernetes worker nodes, but not the Kubernetes worker nodes themselves. This attacker may inspect, modify, or inject malicious network traffic. .. image:: images/cilium_threat_model_mitm.png The threat matrix for such an attacker is as follows: .. rst-class:: wrapped-table +------------------+---------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +==================+===================================================+ | Cilium agent | None | +------------------+---------------------------------------------------+ | Cilium | None | | configuration | | +------------------+---------------------------------------------------+ | Cilium eBPF | None | | programs | | +------------------+---------------------------------------------------+ | Network data | - Without transparent encryption, an attacker | | | could inspect traffic between workloads in both | | | overlay and native routing modes. | | | - An attacker with knowledge of pod network | | | configuration (including pod IP addresses and | | | ports) could inject traffic into a cluster by | | | forging packets. | | | - Denial of service could occur depending on the | | | behavior of the attacker. | +------------------+---------------------------------------------------+ | Observability | - TLS is required for all connectivity between | | data | Cilium components, as well as for exporting | | | data to other destinations, removing the | | | scope for spoofing or tampering. | | | - Without transparent encryption, the attacker | | | could re-create the observability data as | | | available on the network level. 
| | | - Information leakage could occur via an attacker | | | scraping Hubble Prometheus metrics. These | | | metrics are disabled by default, and | | | can contain sensitive information on network | | | flows. | | | - Denial of service could occur depending on the | | | behavior of the attacker. | +------------------+---------------------------------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - :ref:`gsg_encryption` should be configured to ensure the confidentiality of communication between workloads. - TLS should be configured for communication between the Prometheus metrics endpoints and the Prometheus server. - Network policies should be configured such that only the Prometheus server is allowed to scrape :ref:`Hubble metrics <metrics>` in particular. .. _network-attacker: Network Attacker ~~~~~~~~~~~~~~~~ In our threat model, a generic network attacker has access to the same underlying IP network as Kubernetes worker nodes, but is not inline between the nodes. The assumption is that this attacker is still able to send IP layer traffic that reaches a Kubernetes worker node. This is a weaker variant of the man-in-the-middle attack described above, as the attacker can only inject traffic to worker nodes, but not see the replies. .. image:: images/cilium_threat_model_network_attacker.png For such an attacker, the threat matrix is as follows: .. rst-class:: wrapped-table +------------------+---------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +==================+===================================================+ | Cilium agent | None | +------------------+---------------------------------------------------+ | Cilium | None | | configuration | | +------------------+---------------------------------------------------+ | Cilium eBPF | None | | programs | | +------------------+---------------------------------------------------+ | Network data | - An attacker with knowledge of pod network | | | configuration (including pod IP addresses and | | | ports) could inject traffic into a cluster by | | | forging packets. | | | - Denial of service could occur depending on the | | | behavior of the attacker. | +------------------+---------------------------------------------------+ | Observability | - Denial of service could occur depending on the | | data | behavior of the attacker. | | | - Information leakage could occur via an attacker | | | scraping Cilium or Hubble Prometheus metrics, | | | depending on the specific metrics enabled. | +------------------+---------------------------------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - :ref:`gsg_encryption` should be configured to ensure the confidentiality of communication between workloads. .. _kubernetes-api-server-attacker: Kubernetes API Server Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This type of attack could be carried out by any user or code with network access to the Kubernetes API server and credentials that allow Kubernetes API requests. Such permissions would allow the user to read or manipulate the API server state (for example by changing CRDs). This section is intended to cover any attack that might be exposed via Kubernetes API server access, regardless of whether the access is full or limited. .. image:: images/cilium_threat_model_api_server_attacker.png For such an attacker, our threat matrix is as follows: .. 
rst-class:: wrapped-table +------------------+---------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +==================+===================================================+ | Cilium agent | - A Kubernetes API user with ``kubectl exec`` | | | access to the pod running Cilium effectively | | | becomes a :ref:`root-equivalent host | | | attacker <root-equivalent-host-attacker>`, | | | since Cilium runs as a privileged pod. | | | - An attacker with permissions to configure | | | workload settings effectively becomes a | | | :ref:`kubernetes-workload-attacker`. | +------------------+---------------------------------------------------+ | Cilium | The ability to modify the ``Cilium*`` | | configuration | CustomResourceDefinitions, as well as any | | | CustomResource from Cilium, in the cluster could | | | have the following effects: | | | | | | - The ability to create or modify CiliumIdentity | | | and CiliumEndpoint or CiliumEndpointSlice | | | resources would allow an attacker to tamper | | | with the identities of pods. | | | - The ability to delete Kubernetes or Cilium | | | NetworkPolicies would remove policy | | | enforcement. | | | - Creating a large number of CiliumIdentity | | | resources could result in denial of service. | | | - Workloads external to the cluster could be | | | added to the network. | | | - Traffic routing settings between workloads | | | could be modified | | | | | | The cumulative effect of such actions could | | | result in the escalation of a single-node | | | compromise into a multi-node compromise. | +------------------+---------------------------------------------------+ | Cilium eBPF | An attacker with ``kubectl exec`` access to the | | programs | Cilium agent pod will be able to modify eBPF | | | programs. | +------------------+---------------------------------------------------+ | Network data | Privileged Kubernetes API server access (``exec`` | | | access to Cilium pods or access to view | | | Kubernetes secrets) could allow an attacker to | | | access the pre-shared key used for IPsec. When | | | used by a :ref:`man-in-the-middle | | | attacker <mitm-attacker>`, this | | | could undermine the confidentiality and integrity | | | of workload communication. | | | |br| |br| | | | Depending on the attacker's level of access, the | | | ability to spoof identities or tamper with policy | | | enforcement could also allow them to view network | | | data. | +------------------+---------------------------------------------------+ | Observability | Users with permissions to configure workload | | data | settings could cause denial of service. | +------------------+---------------------------------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - `Kubernetes RBAC`_ should be configured to only grant necessary permissions to users and service accounts. Access to resources in the ``kube-system`` and ``cilium`` namespaces in particular should be highly limited. - Kubernetes audit logs should be used to automatically review requests made to the API server, and detections should be configured to alert on suspicious activity. .. _Kubernetes RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ .. _kv-store-attacker: Cilium Key-value Store Attacker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Cilium can use :ref:`an external key-value store <k8s_install_etcd>` such as etcd to store state. 
In this scenario, we consider a user with network access to the Cilium etcd endpoints and credentials to access those etcd endpoints. The credentials to the etcd endpoints are stored as Kubernetes secrets; any attacker would first have to compromise these secrets before gaining access to the key-value store. .. image:: images/cilium_threat_model_etcd_attacker.png .. rst-class:: wrapped-table +------------------+---------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +==================+===================================================+ | Cilium agent | None | +------------------+---------------------------------------------------+ | Cilium | The ability to create or modify Identities or | | configuration | Endpoints in etcd would allow an attacker to | | | "give" any pod any identity. The ability to spoof | | | identities in this manner might be used to | | | escalate a single node compromise to a multi-node | | | compromise, for example by spoofing identities to | | | undermine ingress segmentation rules that would | | | be applied on remote nodes. | +------------------+---------------------------------------------------+ | Cilium eBPF | None | | programs | | +------------------+---------------------------------------------------+ | Network data | An attacker would be able to modify the routing | | | of traffic within a cluster, and as a consequence | | | gain the privileges of a :ref:`mitm-attacker`. | | | | +------------------+---------------------------------------------------+ | Observability | None | | data | | +------------------+---------------------------------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - The ``etcd`` instance deployed to store Cilium configuration should be independent of the instance that is typically deployed as part of configuring a Kubernetes cluster. This separation reduces the risk of a Cilium ``etcd`` compromise leading to further cluster-wide impact. - Kubernetes RBAC controls should be applied to restrict access to Kubernetes secrets. - Kubernetes audit logs should be used to detect access to secret data and alert if such access is suspicious. Hubble Data Attacker ~~~~~~~~~~~~~~~~~~~~ This is an attacker with network reachability to Kubernetes worker nodes, or other systems that store or expose Hubble data, with the goal of gaining access to potentially sensitive Hubble flow or process data. .. image:: images/cilium_threat_model_hubble_attacker.png .. 
rst-class:: wrapped-table +------------------+---------------------------------------------------+ | **Threat | **Identified STRIDE threats** | | surface** | | +==================+===================================================+ | Cilium pods | None | +------------------+---------------------------------------------------+ | Cilium | None | | configuration | | +------------------+---------------------------------------------------+ | Cilium eBPF | None | | programs | | +------------------+---------------------------------------------------+ | Network data | None | +------------------+---------------------------------------------------+ | Observability | None, assuming correct configuration of the | | data | following: | | | | | | - Network policy to limit access to | | | ``hubble-relay`` or ``hubble-ui`` services | | | - Limited access to ``cilium``, | | | ``hubble-relay``, or ``hubble-ui`` pods | | | - TLS for external data export | | | - Security controls at the destination of any | | | exported data | +------------------+---------------------------------------------------+ Recommended Controls ^^^^^^^^^^^^^^^^^^^^ - Network policies should limit access to the ``hubble-relay`` and ``hubble-ui`` services - Kubernetes RBAC should be used to limit access to any ``cilium-*`` or ``hubble-`*`` pods - TLS should be configured for access to the Hubble Relay API and Hubble UI - TLS should be correctly configured for any data export - The destination data stores for exported data should be secured (such as by applying encryption at rest and cloud provider specific RBAC controls, for example) Overall Recommendations ----------------------- To summarize the recommended controls to be used when configuring a production Kubernetes cluster with Cilium: #. Ensure that Kubernetes roles are scoped correctly to the requirements of your users, and that service account permissions for pods are tightly scoped to the needs of the workloads. In particular, access to sensitive namespaces, ``exec`` actions, and Kubernetes secrets should all be highly controlled. #. Use resource limits for workloads where possible to reduce the chance of denial of service attacks. #. Ensure that workload privileges and capabilities are only granted when essential to the functionality of the workload, and ensure that specific controls to limit and monitor the behavior of the workload are in place. #. Use :ref:`network policies <network_policy>` to ensure that network traffic in Kubernetes is segregated. #. Use :ref:`gsg_encryption` in Cilium to ensure that communication between workloads is secured. #. Enable Kubernetes audit logging, forward the audit logs to a centralized monitoring platform, and define alerting for suspicious activity. #. Enable TLS for access to any externally-facing services, such as Hubble Relay and Hubble UI. #. Use `Tetragon`_ as a runtime security solution to rapidly detect unexpected behavior within your Kubernetes cluster. If you have questions, suggestions, or would like to help improve Cilium's security posture, reach out to [email protected]. .. |br| raw:: html <br>
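As a concrete illustration of the recommendation above to use network policy to restrict access to Hubble components, the sketch below shows roughly what such a rule could look like. This is a minimal, non-authoritative example: the namespace, label selectors, and port are assumptions made for illustration and must be adapted to match the actual Hubble deployment.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: restrict-hubble-relay-ingress   # hypothetical name
      namespace: kube-system                # assumes Hubble Relay is deployed here
    spec:
      endpointSelector:
        matchLabels:
          k8s-app: hubble-relay             # assumed label on the Hubble Relay pods
      ingress:
      - fromEndpoints:
        - matchLabels:
            k8s-app: hubble-ui              # assumed label of the only permitted client
        toPorts:
        - ports:
          - port: "4245"                    # assumed Hubble Relay API port
            protocol: TCP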
![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/state&empty) # State Persistence (8%) kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a Volume for Storage](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/) kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a PersistentVolume for Storage](https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/) ## Define volumes ### Create busybox pod with two containers, each one will have the image busybox and will run the 'sleep 3600' command. Make both containers mount an emptyDir at '/etc/foo'. Connect to the second busybox, write the first column of '/etc/passwd' file to '/etc/foo/passwd'. Connect to the first busybox and write '/etc/foo/passwd' file to standard output. Delete pod. <details><summary>show</summary> <p> *This question is probably a better fit for the 'Multi-container-pods' section but I'm keeping it here as it will help you get acquainted with state* Easiest way to do this is to create a template pod with: ```bash kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'sleep 3600' > pod.yaml vi pod.yaml ``` Copy paste the container definition and type the lines that have a comment in the end: ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: busybox name: busybox spec: dnsPolicy: ClusterFirst restartPolicy: Never containers: - args: - /bin/sh - -c - sleep 3600 image: busybox imagePullPolicy: IfNotPresent name: busybox resources: {} volumeMounts: # - name: myvolume # mountPath: /etc/foo # - args: - /bin/sh - -c - sleep 3600 image: busybox name: busybox2 # don't forget to change the name during copy paste, must be different from the first container's name! volumeMounts: # - name: myvolume # mountPath: /etc/foo # volumes: # - name: myvolume # emptyDir: {} # ``` In case you forget to add ```bash -- /bin/sh -c 'sleep 3600'``` in template pod create command, you can include command field in config file ```YAML spec: containers: - image: busybox name: busybox command: ["/bin/sh", "-c", "sleep 3600"] ``` Connect to the second container: ```bash kubectl exec -it busybox -c busybox2 -- /bin/sh cat /etc/passwd | cut -f 1 -d ':' > /etc/foo/passwd # instead of cut command you can use awk -F ":" '{print $1}' cat /etc/foo/passwd # confirm that stuff has been written successfully exit ``` Connect to the first container: ```bash kubectl exec -it busybox -c busybox -- /bin/sh mount | grep foo # confirm the mounting cat /etc/foo/passwd exit kubectl delete po busybox ``` </p> </details> ### Create a PersistentVolume of 10Gi, called 'myvolume'. Make it have accessMode of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. Save it on pv.yaml, add it to the cluster. 
Show the PersistentVolumes that exist on the cluster <details><summary>show</summary> <p> ```bash vi pv.yaml ``` ```YAML kind: PersistentVolume apiVersion: v1 metadata: name: myvolume spec: storageClassName: normal capacity: storage: 10Gi accessModes: - ReadWriteOnce - ReadWriteMany hostPath: path: /etc/foo ``` Show the PersistentVolumes: ```bash kubectl create -f pv.yaml # will have status 'Available' kubectl get pv ``` </p> </details> ### Create a PersistentVolumeClaim for this storage class, called 'mypvc', a request of 4Gi and an accessMode of ReadWriteOnce, with the storageClassName of normal, and save it on pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster <details><summary>show</summary> <p> ```bash vi pvc.yaml ``` ```YAML kind: PersistentVolumeClaim apiVersion: v1 metadata: name: mypvc spec: storageClassName: normal accessModes: - ReadWriteOnce resources: requests: storage: 4Gi ``` Create it on the cluster: ```bash kubectl create -f pvc.yaml ``` Show the PersistentVolumeClaims and PersistentVolumes: ```bash kubectl get pvc # will show as 'Bound' kubectl get pv # will show as 'Bound' as well ``` </p> </details> ### Create a busybox pod with command 'sleep 3600', save it on pod.yaml. Mount the PersistentVolumeClaim to '/etc/foo'. Connect to the 'busybox' pod, and copy the '/etc/passwd' file to '/etc/foo/passwd' <details><summary>show</summary> <p> Create a skeleton pod: ```bash kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'sleep 3600' > pod.yaml vi pod.yaml ``` Add the lines that finish with a comment: ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: busybox name: busybox spec: containers: - args: - /bin/sh - -c - sleep 3600 image: busybox imagePullPolicy: IfNotPresent name: busybox resources: {} volumeMounts: # - name: myvolume # mountPath: /etc/foo # dnsPolicy: ClusterFirst restartPolicy: Never volumes: # - name: myvolume # persistentVolumeClaim: # claimName: mypvc # status: {} ``` Create the pod: ```bash kubectl create -f pod.yaml ``` Connect to the pod and copy '/etc/passwd' to '/etc/foo/passwd': ```bash kubectl exec busybox -it -- cp /etc/passwd /etc/foo/passwd ``` </p> </details> ### Create a second pod which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete pods to cleanup. Note: If you can't see the file from the second pod, can you figure out why? What would you do to fix that? <details><summary>show</summary> <p> Create the second pod, called busybox2: ```bash vim pod.yaml # change 'metadata.name: busybox' to 'metadata.name: busybox2' kubectl create -f pod.yaml kubectl exec busybox2 -- ls /etc/foo # will show 'passwd' # cleanup kubectl delete po busybox busybox2 kubectl delete pvc mypvc kubectl delete pv myvolume ``` If the file doesn't show on the second pod but it shows on the first, it has most likely been scheduled on a different node. ```bash # check which nodes the pods are on kubectl get po busybox -o wide kubectl get po busybox2 -o wide ``` If they are on different nodes, you won't see the file, because we used the `hostPath` volume type. If you need to access the same files in a multi-node cluster, you need a volume type that is independent of a specific node. 
There are lots of different volume types per cloud provider [(see here)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes); a general solution could be to use NFS (see the sketch at the end of this section).

</p>
</details>

### Create a busybox pod with 'sleep 3600' as arguments. Copy '/etc/passwd' from the pod to your local folder

<details><summary>show</summary>
<p>

```bash
kubectl run busybox --image=busybox --restart=Never -- sleep 3600
kubectl cp busybox:/etc/passwd ./passwd # kubectl cp command
# the previous command might report an error; you can ignore it as long as the file was copied
cat passwd
```

</p>
</details>
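As a follow-up to the NFS suggestion above, here is a minimal sketch of an NFS-backed PersistentVolume and matching claim. It assumes an NFS server reachable at `nfs-server.example.com` exporting `/srv/share` (both hypothetical); the storageClassName, access mode and sizes mirror the earlier exercises.

```YAML
# nfs-pv.yaml -- a node-independent alternative to hostPath (sketch, hypothetical server/export)
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfsvolume
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany                    # NFS can be mounted by pods on several nodes at once
  nfs:
    server: nfs-server.example.com   # hypothetical NFS server
    path: /srv/share                 # hypothetical exported directory
---
# nfs-pvc.yaml -- claims the volume above; pods reference it via persistentVolumeClaim.claimName
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfspvc
spec:
  storageClassName: normal
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
```

Pods mount it exactly like the `mypvc` claim earlier, but because the data lives on the NFS server rather than on one node's filesystem, a second pod sees the file even when it is scheduled on a different node (the nodes typically need an NFS client package installed).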
![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/pod_design&empty) # Pod design (20%) [Labels And Annotations](#labels-and-annotations) [Deployments](#deployments) [Jobs](#jobs) [Cron Jobs](#cron-jobs) ## Labels and Annotations kubernetes.io > Documentation > Concepts > Overview > Working with Kubernetes Objects > [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) ### Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1 <details><summary>show</summary> <p> ```bash kubectl run nginx1 --image=nginx --restart=Never --labels=app=v1 kubectl run nginx2 --image=nginx --restart=Never --labels=app=v1 kubectl run nginx3 --image=nginx --restart=Never --labels=app=v1 # or for i in `seq 1 3`; do kubectl run nginx$i --image=nginx -l app=v1 ; done ``` </p> </details> ### Show all labels of the pods <details><summary>show</summary> <p> ```bash kubectl get po --show-labels ``` </p> </details> ### Change the labels of pod 'nginx2' to be app=v2 <details><summary>show</summary> <p> ```bash kubectl label po nginx2 app=v2 --overwrite ``` </p> </details> ### Get the label 'app' for the pods (show a column with APP labels) <details><summary>show</summary> <p> ```bash kubectl get po -L app # or kubectl get po --label-columns=app ``` </p> </details> ### Get only the 'app=v2' pods <details><summary>show</summary> <p> ```bash kubectl get po -l app=v2 # or kubectl get po -l 'app in (v2)' # or kubectl get po --selector=app=v2 ``` </p> </details> ### Add a new label tier=web to all pods having 'app=v2' or 'app=v1' labels <details><summary>show</summary> <p> ```bash kubectl label po -l "app in(v1,v2)" tier=web ``` </p> </details> ### Add an annotation 'owner: marketing' to all pods having 'app=v2' label <details><summary>show</summary> <p> ```bash kubectl annotate po -l "app=v2" owner=marketing ``` </p> </details> ### Remove the 'app' label from the pods we created before <details><summary>show</summary> <p> ```bash kubectl label po nginx1 nginx2 nginx3 app- # or kubectl label po nginx{1..3} app- # or kubectl label po -l app app- ``` </p> </details> ### Annotate pods nginx1, nginx2, nginx3 with "description='my description'" value <details><summary>show</summary> <p> ```bash kubectl annotate po nginx1 nginx2 nginx3 description='my description' #or kubectl annotate po nginx{1..3} description='my description' ``` </p> </details> ### Check the annotations for pod nginx1 <details><summary>show</summary> <p> ```bash kubectl annotate pod nginx1 --list # or kubectl describe po nginx1 | grep -i 'annotations' # or kubectl get po nginx1 -o custom-columns=Name:metadata.name,ANNOTATIONS:metadata.annotations.description ``` As an alternative to using `| grep` you can use jsonPath like `kubectl get po nginx1 -o jsonpath='{.metadata.annotations}{"\n"}'` </p> </details> ### Remove the annotations for these three pods <details><summary>show</summary> <p> ```bash kubectl annotate po nginx{1..3} description- owner- ``` </p> </details> ### Remove these pods to have a clean state in your cluster <details><summary>show</summary> <p> ```bash kubectl delete po nginx{1..3} ``` </p> </details> ## Pod Placement ### Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100' <details><summary>show</summary> <p> Add the label to a node: ```bash kubectl label nodes <your-node-name> accelerator=nvidia-tesla-p100 kubectl get nodes --show-labels ``` We can use the 'nodeSelector' property on the Pod YAML: 
```YAML apiVersion: v1 kind: Pod metadata: name: cuda-test spec: containers: - name: cuda-test image: "k8s.gcr.io/cuda-vector-add:v0.1" nodeSelector: # add this accelerator: nvidia-tesla-p100 # the selection label ``` You can easily find out where in the YAML it should be placed by: ```bash kubectl explain po.spec ``` OR: Use node affinity (https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/#schedule-a-pod-using-required-node-affinity) ```YAML apiVersion: v1 kind: Pod metadata: name: affinity-pod spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: accelerator operator: In values: - nvidia-tesla-p100 containers: ... ``` </p> </details> ### Taint a node with key `tier` and value `frontend` with the effect `NoSchedule`. Then, create a pod that tolerates this taint. <details><summary>show</summary> <p> Taint a node: ```bash kubectl taint node node1 tier=frontend:NoSchedule # key=value:Effect kubectl describe node node1 # view the taints on a node ``` And to tolerate the taint: ```yaml apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: nginx image: nginx tolerations: - key: "tier" operator: "Equal" value: "frontend" effect: "NoSchedule" ``` </p> </details> ### Create a pod that will be placed on node `controlplane`. Use nodeSelector and tolerations. <details><summary>show</summary> <p> ```bash vi pod.yaml ``` ```yaml apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: nginx image: nginx nodeSelector: kubernetes.io/hostname: controlplane tolerations: - key: "node-role.kubernetes.io/control-plane" operator: "Exists" effect: "NoSchedule" ``` ```bash kubectl create -f pod.yaml ``` </p> </details> ## Deployments kubernetes.io > Documentation > Concepts > Workloads > Workload Resources > [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment) ### Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment) <details><summary>show</summary> <p> ```bash kubectl create deployment nginx --image=nginx:1.18.0 --dry-run=client -o yaml > deploy.yaml vi deploy.yaml # change the replicas field from 1 to 2 # add this section to the container spec and save the deploy.yaml file # ports: # - containerPort: 80 kubectl apply -f deploy.yaml ``` or, do something like: ```bash kubectl create deployment nginx --image=nginx:1.18.0 --dry-run=client -o yaml | sed 's/replicas: 1/replicas: 2/g' | sed 's/image: nginx:1.18.0/image: nginx:1.18.0\n ports:\n - containerPort: 80/g' | kubectl apply -f - ``` or, ```bash kubectl create deploy nginx --image=nginx:1.18.0 --replicas=2 --port=80 ``` </p> </details> ### View the YAML of this deployment <details><summary>show</summary> <p> ```bash kubectl get deploy nginx -o yaml ``` </p> </details> ### View the YAML of the replica set that was created by this deployment <details><summary>show</summary> <p> ```bash kubectl describe deploy nginx # you'll see the name of the replica set on the Events section and in the 'NewReplicaSet' property # OR you can find rs directly by: kubectl get rs -l run=nginx # if you created deployment by 'run' command kubectl get rs -l app=nginx # if you created deployment by 'create' command # you could also just do kubectl get rs kubectl get rs nginx-7bf7478b77 -o yaml ``` </p> </details> ### Get the YAML for one of the pods 
<details><summary>show</summary> <p> ```bash kubectl get po # get all the pods # OR you can find pods directly by: kubectl get po -l run=nginx # if you created deployment by 'run' command kubectl get po -l app=nginx # if you created deployment by 'create' command kubectl get po nginx-7bf7478b77-gjzp8 -o yaml ``` </p> </details> ### Check how the deployment rollout is going <details><summary>show</summary> <p> ```bash kubectl rollout status deploy nginx ``` </p> </details> ### Update the nginx image to nginx:1.19.8 <details><summary>show</summary> <p> ```bash kubectl set image deploy nginx nginx=nginx:1.19.8 # alternatively... kubectl edit deploy nginx # change the .spec.template.spec.containers[0].image ``` The syntax of the 'kubectl set image' command is `kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N [options]` </p> </details> ### Check the rollout history and confirm that the replicas are OK <details><summary>show</summary> <p> ```bash kubectl rollout history deploy nginx kubectl get deploy nginx kubectl get rs # check that a new replica set has been created kubectl get po ``` </p> </details> ### Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0) <details><summary>show</summary> <p> ```bash kubectl rollout undo deploy nginx # wait a bit kubectl get po # select one 'Running' Pod kubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.18.0 ``` </p> </details> ### Do an on purpose update of the deployment with a wrong image nginx:1.91 <details><summary>show</summary> <p> ```bash kubectl set image deploy nginx nginx=nginx:1.91 # or kubectl edit deploy nginx # change the image to nginx:1.91 # vim tip: type (without quotes) '/image' and Enter, to navigate quickly ``` </p> </details> ### Verify that something's wrong with the rollout <details><summary>show</summary> <p> ```bash kubectl rollout status deploy nginx # or kubectl get po # you'll see 'ErrImagePull' or 'ImagePullBackOff' ``` </p> </details> ### Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8 <details><summary>show</summary> <p> ```bash kubectl rollout undo deploy nginx --to-revision=2 kubectl describe deploy nginx | grep Image: kubectl rollout status deploy nginx # Everything should be OK ``` </p> </details> ### Check the details of the fourth revision (number 4) <details><summary>show</summary> <p> ```bash kubectl rollout history deploy nginx --revision=4 # You'll also see the wrong image displayed here ``` </p> </details> ### Scale the deployment to 5 replicas <details><summary>show</summary> <p> ```bash kubectl scale deploy nginx --replicas=5 kubectl get po kubectl describe deploy nginx ``` </p> </details> ### Autoscale the deployment, pods between 5 and 10, targetting CPU utilization at 80% <details><summary>show</summary> <p> ```bash kubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80 # view the horizontalpodautoscalers.autoscaling for nginx kubectl get hpa nginx ``` </p> </details> ### Pause the rollout of the deployment <details><summary>show</summary> <p> ```bash kubectl rollout pause deploy nginx ``` </p> </details> ### Update the image to nginx:1.19.9 and check that there's nothing going on, since we paused the rollout <details><summary>show</summary> <p> ```bash kubectl set image deploy nginx nginx=nginx:1.19.9 # or kubectl edit deploy nginx # change the image to nginx:1.19.9 kubectl rollout history deploy nginx # no new revision ``` </p> 
</details> ### Resume the rollout and check that the nginx:1.19.9 image has been applied <details><summary>show</summary> <p> ```bash kubectl rollout resume deploy nginx kubectl rollout history deploy nginx kubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision ``` </p> </details> ### Delete the deployment and the horizontal pod autoscaler you created <details><summary>show</summary> <p> ```bash kubectl delete deploy nginx kubectl delete hpa nginx #Or kubectl delete deploy/nginx hpa/nginx ``` </p> </details> ### Implement canary deployment by running two instances of nginx marked as version=v1 and version=v2 so that the load is balanced at 75%-25% ratio <details><summary>show</summary> <p> Deploy 3 replicas of v1: ``` apiVersion: apps/v1 kind: Deployment metadata: name: my-app-v1 labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app version: v1 template: metadata: labels: app: my-app version: v1 spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: workdir mountPath: /usr/share/nginx/html initContainers: - name: install image: busybox:1.28 command: - /bin/sh - -c - "echo version-1 > /work-dir/index.html" volumeMounts: - name: workdir mountPath: "/work-dir" volumes: - name: workdir emptyDir: {} ``` Create the service: ``` apiVersion: v1 kind: Service metadata: name: my-app-svc labels: app: my-app spec: type: ClusterIP ports: - name: http port: 80 targetPort: 80 selector: app: my-app ``` Test if the deployment was successful from within a Pod: ``` # run a wget to the Service my-app-svc kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox --command -- wget -qO- my-app-svc version-1 ``` Deploy 1 replica of v2: ``` apiVersion: apps/v1 kind: Deployment metadata: name: my-app-v2 labels: app: my-app spec: replicas: 1 selector: matchLabels: app: my-app version: v2 template: metadata: labels: app: my-app version: v2 spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: workdir mountPath: /usr/share/nginx/html initContainers: - name: install image: busybox:1.28 command: - /bin/sh - -c - "echo version-2 > /work-dir/index.html" volumeMounts: - name: workdir mountPath: "/work-dir" volumes: - name: workdir emptyDir: {} ``` Observe that calling the ip exposed by the service the requests are load balanced across the two versions: ``` # run a busyBox pod that will make a wget call to the service my-app-svc and print out the version of the pod it reached. 
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- /bin/sh -c 'while sleep 1; do wget -qO- my-app-svc; done' version-1 version-1 version-1 version-2 version-2 version-1 ``` If the v2 is stable, scale it up to 4 replicas and shoutdown the v1: ``` kubectl scale --replicas=4 deploy my-app-v2 kubectl delete deploy my-app-v1 while sleep 0.1; do curl $(kubectl get svc my-app-svc -o jsonpath="{.spec.clusterIP}"); done version-2 version-2 version-2 version-2 version-2 version-2 ``` </p> </details> ## Jobs ### Create a job named pi with image perl:5.34 that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'" <details><summary>show</summary> <p> ```bash kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)' ``` </p> </details> ### Wait till it's done, get the output <details><summary>show</summary> <p> ```bash kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big) kubectl get po # get the pod name kubectl logs pi-**** # get the pi numbers kubectl delete job pi ``` OR ```bash kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big) kubectl logs job/pi kubectl delete job pi ``` OR ```bash kubectl wait --for=condition=complete --timeout=300s job pi kubectl logs job/pi kubectl delete job pi ``` </p> </details> ### Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world' <details><summary>show</summary> <p> ```bash kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world' ``` </p> </details> ### Follow the logs for the pod (you'll wait for 30 seconds) <details><summary>show</summary> <p> ```bash kubectl get po # find the job pod kubectl logs busybox-ptx58 -f # follow the logs ``` </p> </details> ### See the status of the job, describe it and see the logs <details><summary>show</summary> <p> ```bash kubectl get jobs kubectl describe jobs busybox kubectl logs job/busybox ``` </p> </details> ### Delete the job <details><summary>show</summary> <p> ```bash kubectl delete job busybox ``` </p> </details> ### Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute <details><summary>show</summary> <p> ```bash kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10;done' > job.yaml vi job.yaml ``` Add job.spec.activeDeadlineSeconds=30 ```bash apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null labels: run: busybox name: busybox spec: activeDeadlineSeconds: 30 # add this line template: metadata: creationTimestamp: null labels: run: busybox spec: containers: - args: - /bin/sh - -c - while true; do echo hello; sleep 10;done image: busybox name: busybox resources: {} restartPolicy: OnFailure status: {} ``` </p> </details> ### Create the same job, make it run 5 times, one after the other. 
Verify its status and delete it <details><summary>show</summary> <p> ```bash kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml vi job.yaml ``` Add job.spec.completions=5 ```YAML apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null labels: run: busybox name: busybox spec: completions: 5 # add this line template: metadata: creationTimestamp: null labels: run: busybox spec: containers: - args: - /bin/sh - -c - echo hello;sleep 30;echo world image: busybox name: busybox resources: {} restartPolicy: OnFailure status: {} ``` ```bash kubectl create -f job.yaml ``` Verify that it has been completed: ```bash kubectl get job busybox -w # will take two and a half minutes kubectl delete jobs busybox ``` </p> </details> ### Create the same job, but make it run 5 parallel times <details><summary>show</summary> <p> ```bash vi job.yaml ``` Add job.spec.parallelism=5 ```YAML apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null labels: run: busybox name: busybox spec: parallelism: 5 # add this line template: metadata: creationTimestamp: null labels: run: busybox spec: containers: - args: - /bin/sh - -c - echo hello;sleep 30;echo world image: busybox name: busybox resources: {} restartPolicy: OnFailure status: {} ``` ```bash kubectl create -f job.yaml kubectl get jobs ``` It will take some time for the parallel jobs to finish (>= 30 seconds) ```bash kubectl delete job busybox ``` </p> </details> ## Cron jobs kubernetes.io > Documentation > Tasks > Run Jobs > [Running Automated Tasks with a CronJob](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/) ### Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output <details><summary>show</summary> <p> ```bash kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' ``` </p> </details> ### See its logs and delete it <details><summary>show</summary> <p> ```bash kubectl get po # copy the ID of the pod whose container was just created kubectl logs <busybox-***> # you will see the date and message kubectl delete cj busybox # cj stands for cronjob ``` </p> </details> ### Create the same cron job again, and watch the status. Once it ran, check which job ran by the created cron job. Check the log, and delete the cron job <details><summary>show</summary> <p> ```bash kubectl get cj kubectl get jobs --watch kubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job kubectl logs busybox-1529745840-m867r # Bear in mind that Kubernetes will run a new job/pod for each new cron job kubectl delete cj busybox ``` </p> </details> ### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time). 
<details><summary>show</summary> <p> ```bash kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml vi time-limited-job.yaml ``` Add cronjob.spec.startingDeadlineSeconds=17 ```bash apiVersion: batch/v1 kind: CronJob metadata: creationTimestamp: null name: time-limited-job spec: startingDeadlineSeconds: 17 # add this line jobTemplate: metadata: creationTimestamp: null name: time-limited-job spec: template: metadata: creationTimestamp: null spec: containers: - args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster image: busybox name: time-limited-job resources: {} restartPolicy: Never schedule: '* * * * *' status: {} ``` </p> </details> ### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution. <details><summary>show</summary> <p> ```bash kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml vi time-limited-job.yaml ``` Add cronjob.spec.jobTemplate.spec.activeDeadlineSeconds=12 ```bash apiVersion: batch/v1 kind: CronJob metadata: creationTimestamp: null name: time-limited-job spec: jobTemplate: metadata: creationTimestamp: null name: time-limited-job spec: activeDeadlineSeconds: 12 # add this line template: metadata: creationTimestamp: null spec: containers: - args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster image: busybox name: time-limited-job resources: {} restartPolicy: Never schedule: '* * * * *' status: {} ``` </p> </details> ### Create a job from cronjob. <details><summary>show</summary> <p> ```bash kubectl create job --from=cronjob/sample-cron-job sample-job ``` </p> </details>
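If you want to verify the job that was triggered manually from the cron job, a quick check could look like the following (a sketch, assuming the CronJob is named `sample-cron-job` as in the exercise above):

```bash
kubectl create job --from=cronjob/sample-cron-job sample-job
kubectl get job sample-job                                            # created immediately, outside the CronJob's schedule
kubectl wait --for=condition=complete --timeout=120s job/sample-job   # wait for the run to finish
kubectl logs job/sample-job                                           # output of the manually triggered run
kubectl delete job sample-job                                         # cleanup
```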
![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/services&empty)

# Services and Networking (13%)

### Create a pod with image nginx called nginx and expose its port 80

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never --port=80 --expose
# observe that a pod as well as a service are created
```

</p>
</details>

### Confirm that ClusterIP has been created. Also check endpoints

<details><summary>show</summary>
<p>

```bash
kubectl get svc nginx # services
kubectl get ep # endpoints
```

</p>
</details>

### Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget

<details><summary>show</summary>
<p>

```bash
kubectl get svc nginx # get the ClusterIP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never -- wget -O- [PUT THE SERVICE'S CLUSTER IP HERE]:80
exit
```

</p>
or
<p>

```bash
IP=$(kubectl get svc nginx --template='{{.spec.clusterIP}}') # get the ClusterIP (something like 10.108.93.130)
kubectl run busybox --rm --image=busybox -it --restart=Never --env="IP=$IP" -- wget -O- $IP:80 --timeout 2
# Tip: --timeout is optional, but it helps to get the answer more quickly when the connection fails (seconds vs minutes)
```

</p>
</details>

### Convert the ClusterIP to NodePort for the same service and find the NodePort port. Hit service using Node's IP. Delete the service and the pod at the end.

<details><summary>show</summary>
<p>

```bash
kubectl edit svc nginx
```

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-06-25T07:55:16Z
  name: nginx
  namespace: default
  resourceVersion: "93442"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 191e3dac-784d-11e8-86b1-00155d9f663c
spec:
  clusterIP: 10.97.242.220
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort # change ClusterIP to NodePort
status:
  loadBalancer: {}
```

or

```bash
kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'
```

```bash
kubectl get svc
```

```
# result:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        1d
nginx        NodePort    10.107.253.138   <none>        80:31931/TCP   3m
```

```bash
wget -O- NODE_IP:31931 # if you're using Kubernetes with Docker for Windows/Mac, try 127.0.0.1
# if you're using minikube, try `minikube ip`, then use the node IP such as 192.168.99.117
```

```bash
kubectl delete svc nginx # Deletes the service
kubectl delete pod nginx # Deletes the pod
```

</p>
</details>

### Create a deployment called foo using image 'dgkanatsios/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)

<details><summary>show</summary>
<p>

```bash
kubectl create deploy foo --image=dgkanatsios/simpleapp --port=8080 --replicas=3
kubectl label deployment foo --overwrite app=foo # this is optional, since kubectl create deploy foo creates the label app=foo by default
```

</p>
</details>

### Get the pod IPs.
Create a temp busybox pod and try hitting them on port 8080 <details><summary>show</summary> <p> ```bash kubectl get pods -l app=foo -o wide # 'wide' will show pod IPs kubectl run busybox --image=busybox --restart=Never -it --rm -- sh wget -O- <POD_IP>:8080 # do not try with pod name, will not work # try hitting all IPs generated after running 1st command to confirm that hostname is different exit # or kubectl get po -o wide -l app=foo | awk '{print $6}' | grep -v IP | xargs -L1 -I '{}' kubectl run --rm -ti tmp --restart=Never --image=busybox -- wget -O- http://\{\}:8080 # or kubectl get po -l app=foo -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' | xargs -L1 -I '{}' kubectl run --rm -ti tmp --restart=Never --image=busybox -- wget -O- http://\{\}:8080 ``` </p> </details> ### Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints <details><summary>show</summary> <p> ```bash kubectl expose deploy foo --port=6262 --target-port=8080 kubectl get service foo # you will see ClusterIP as well as port 6262 kubectl get endpoints foo # you will see the IPs of the three replica pods, listening on port 8080 ``` </p> </details> ### Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster <details><summary>show</summary> <p> ```bash kubectl get svc # get the foo service ClusterIP kubectl run busybox --image=busybox -it --rm --restart=Never -- sh wget -O- foo:6262 # DNS works! run it many times, you'll see different pods responding wget -O- <SERVICE_CLUSTER_IP>:6262 # ClusterIP works as well # you can also kubectl logs on deployment pods to see the container logs kubectl delete svc foo kubectl delete deploy foo ``` </p> </details> ### Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the deployment and apply it kubernetes.io > Documentation > Concepts > Services, Load Balancing, and Networking > [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) > Note that network policies may not be enforced by default, depending on your k8s implementation. E.g. Azure AKS by default won't have policy enforcement, the cluster must be created with an explicit support for `netpol` https://docs.microsoft.com/en-us/azure/aks/use-network-policies#overview-of-network-policy <details><summary>show</summary> <p> ```bash kubectl create deployment nginx --image=nginx --replicas=2 kubectl expose deployment nginx --port=80 kubectl describe svc nginx # see the 'app=nginx' selector for the pods # or kubectl get svc nginx -o yaml vi policy.yaml ``` ```YAML kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: access-nginx # pick a name spec: podSelector: matchLabels: app: nginx # selector for the pods ingress: # allow ingress traffic - from: - podSelector: # from pods matchLabels: # with this label access: granted ``` ```bash # Create the NetworkPolicy kubectl create -f policy.yaml # Check if the Network Policy has been created correctly # make sure that your cluster's network provider supports Network Policy (https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/#before-you-begin) kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://nginx:80 --timeout 2 # This should not work. --timeout is optional here. 
But it helps to get answer more quickly (in seconds vs minutes) kubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=granted -- wget -O- http://nginx:80 --timeout 2 # This should be fine ``` </p> </details>
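As a reference for the exercises above, the Service that `kubectl expose deploy foo --port=6262 --target-port=8080` generates is roughly equivalent to the manifest below (a sketch; the selector assumes the default `app=foo` label mentioned earlier):

```YAML
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: ClusterIP          # default when no --type is given
  selector:
    app: foo               # must match the labels on the deployment's pods
  ports:
  - port: 6262             # port the service listens on
    targetPort: 8080       # containerPort the traffic is forwarded to
```

Writing the service out this way can be useful when you want to keep it in version control next to a NetworkPolicy, as in the last exercise.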
ckad excersises
# Define, build and modify container images - Note: The topic is part of the new CKAD syllabus. Here are a few examples of using **podman** to manage the life cycle of container images. The use of **docker** had been the industry standard for many years, but now large companies like [Red Hat](https://www.redhat.com/en/blog/say-hello-buildah-podman-and-skopeo) are moving to a new suite of open source tools: podman, skopeo and buildah. Also Kubernetes has moved in this [direction](https://kubernetes.io/blog/2022/02/17/dockershim-faq/). In particular, `podman` is meant to be the replacement of the `docker` command: so it makes sense to get familiar with it, although they are quite interchangeable considering that they use the same syntax. ## Podman basics ### Create a Dockerfile to deploy an Apache HTTP Server which hosts a custom main page <details><summary>show</summary> <p> ```Dockerfile FROM docker.io/httpd:2.4 RUN echo "Hello, World!" > /usr/local/apache2/htdocs/index.html ``` </p> </details> ### Build and see how many layers the image consists of <details><summary>show</summary> <p> ```bash :~$ podman build -t simpleapp . STEP 1/2: FROM httpd:2.4 STEP 2/2: RUN echo "Hello, World!" > /usr/local/apache2/htdocs/index.html COMMIT simpleapp --> ef4b14a72d0 Successfully tagged localhost/simpleapp:latest ef4b14a72d02ae0577eb0632d084c057777725c279e12ccf5b0c6e4ff5fd598b :~$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/simpleapp latest ef4b14a72d02 8 seconds ago 148 MB docker.io/library/httpd 2.4 98f93cd0ec3b 7 days ago 148 MB :~$ podman image tree localhost/simpleapp:latest Image ID: ef4b14a72d02 Tags: [localhost/simpleapp:latest] Size: 147.8MB Image Layers ├── ID: ad6562704f37 Size: 83.9MB ├── ID: c234616e1912 Size: 3.072kB ├── ID: c23a797b2d04 Size: 2.721MB ├── ID: ede2e092faf0 Size: 61.11MB ├── ID: 971c2cdf3872 Size: 3.584kB Top Layer of: [docker.io/library/httpd:2.4] └── ID: 61644e82ef1f Size: 6.144kB Top Layer of: [localhost/simpleapp:latest] ``` </p> </details> ### Run the image locally, inspect its status and logs, finally test that it responds as expected <details><summary>show</summary> <p> ```bash :~$ podman run -d --name test -p 8080:80 localhost/simpleapp 2f3d7d613ea6ba19703811d30704d4025123c7302ff6fa295affc9bd30e532f8 :~$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2f3d7d613ea6 localhost/simpleapp:latest httpd-foreground 5 seconds ago Up 6 seconds ago 0.0.0.0:8080->80/tcp test :~$ podman logs test AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message [Sat Jun 04 16:15:38.071377 2022] [mpm_event:notice] [pid 1:tid 139756978220352] AH00489: Apache/2.4.53 (Unix) configured -- resuming normal operations [Sat Jun 04 16:15:38.073570 2022] [core:notice] [pid 1:tid 139756978220352] AH00094: Command line: 'httpd -D FOREGROUND' :~$ curl 0.0.0.0:8080 Hello, World! ``` </p> </details> ### Run a command inside the pod to print out the index.html file <details><summary>show</summary> <p> ```bash :~$ podman exec -it test cat /usr/local/apache2/htdocs/index.html Hello, World! 
```

</p>
</details>

### Tag the image with the IP and port of a private local registry and then push the image to this registry

<details><summary>show</summary>
<p>

> Note: Some small distributions of Kubernetes (such as [microk8s](https://microk8s.io/docs/registry-built-in)) have a built-in registry you can use for this exercise. If this is not your case, you'll have to set it up on your own.

```bash
:~$ podman tag localhost/simpleapp $registry_ip:5000/simpleapp

:~$ podman push $registry_ip:5000/simpleapp
```

</p>
</details>

### Verify that the registry contains the pushed image and that you can pull it

<details><summary>show</summary>
<p>

```bash
:~$ curl http://$registry_ip:5000/v2/_catalog
{"repositories":["simpleapp"]}

# remove the image already present
:~$ podman rmi $registry_ip:5000/simpleapp

:~$ podman pull $registry_ip:5000/simpleapp
Trying to pull 10.152.183.13:5000/simpleapp:latest...
Getting image source signatures
Copying blob 643ea8c2c185 skipped: already exists
Copying blob 972107ece720 skipped: already exists
Copying blob 9857eeea6120 skipped: already exists
Copying blob 93859aa62dbd skipped: already exists
Copying blob 8e47efbf2b7e skipped: already exists
Copying blob 42e0f5a91e40 skipped: already exists
Copying config ef4b14a72d done
Writing manifest to image destination
Storing signatures
ef4b14a72d02ae0577eb0632d084c057777725c279e12ccf5b0c6e4ff5fd598b
```

</p>
</details>

### Run a pod with the image pushed to the registry

<details><summary>show</summary>
<p>

```bash
:~$ kubectl run simpleapp --image=$registry_ip:5000/simpleapp --port=80

:~$ curl $(kubectl get pods simpleapp -o jsonpath='{.status.podIP}')
Hello, World!
```

</p>
</details>

### Log into a remote registry server and then read the credentials from the default file

<details><summary>show</summary>
<p>

> Note: The two most used container registry servers with a free plan are [DockerHub](https://hub.docker.com/) and [Quay.io](https://quay.io/).

```bash
:~$ podman login --username $YOUR_USER --password $YOUR_PWD docker.io

:~$ cat ${XDG_RUNTIME_DIR}/containers/auth.json
{
	"auths": {
		"docker.io": {
			"auth": "Z2l1bGl0JLSGtvbkxCcX1xb617251xh0x3zaUd4QW45Q3JuV3RDOTc="
		}
	}
}
```

</p>
</details>

### Create a secret both from existing login credentials and from the CLI

<details><summary>show</summary>
<p>

```bash
:~$ kubectl create secret generic mysecret --from-file=.dockerconfigjson=${XDG_RUNTIME_DIR}/containers/auth.json --type=kubernetes.io/dockerconfigjson
secret/mysecret created

:~$ kubectl create secret docker-registry mysecret2 --docker-server=https://index.docker.io/v1/ --docker-username=$YOUR_USR --docker-password=$YOUR_PWD
secret/mysecret2 created
```

</p>
</details>

### Create the manifest for a Pod that uses one of the two secrets just created to pull an image hosted on the corresponding private remote registry

<details><summary>show</summary>
<p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: $YOUR_PRIVATE_IMAGE
  imagePullSecrets:
  - name: mysecret
```

</p>
</details>

### Clean up all the images and containers

<details><summary>show</summary>
<p>

```bash
:~$ podman rm --all --force
:~$ podman rmi --all
:~$ kubectl delete pod simpleapp
```

</p>
</details>
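As an optional follow-up (not one of the original exercises), the sketch below assumes the `localhost/simpleapp` image and the containers built earlier in this chapter. It shows how an image can be archived to a tarball with `podman save` before cleanup and restored later with `podman load`, plus a quick check that the cleanup really removed everything.

```bash
# Optional: archive the image before deleting it (assumes localhost/simpleapp from the exercises above)
podman save -o simpleapp.tar localhost/simpleapp

# After running the cleanup commands, confirm nothing is left
podman ps -a     # expect no containers
podman images    # expect no images

# The archived image can be restored later if needed
podman load -i simpleapp.tar
```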
![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/configuration&empty)

# Configuration (18%)

[ConfigMaps](#configmaps)

[SecurityContext](#securitycontext)

[Resource requests and limits](#resource-requests-and-limits)

[Secrets](#secrets)

[Service Accounts](#serviceaccounts)

Tips: to save typing in the exercises below, export these helpers: `export ns="-n secret-ops"` and `export do="--dry-run=client -oyaml"`

## ConfigMaps

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)

### Create a configmap named config with values foo=lala,foo2=lolo

<details><summary>show</summary>
<p>

```bash
kubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo
```

</p>
</details>

### Display its values

<details><summary>show</summary>
<p>

```bash
kubectl get cm config -o yaml
# or
kubectl describe cm config
```

</p>
</details>

### Create and display a configmap from a file

Create the file with

```bash
echo -e "foo3=lili\nfoo4=lele" > config.txt
```

<details><summary>show</summary>
<p>

```bash
kubectl create cm configmap2 --from-file=config.txt
kubectl get cm configmap2 -o yaml
```

</p>
</details>

### Create and display a configmap from a .env file

Create the file with the command

```bash
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
```

<details><summary>show</summary>
<p>

```bash
kubectl create cm configmap3 --from-env-file=config.env
kubectl get cm configmap3 -o yaml
```

</p>
</details>

### Create and display a configmap from a file, giving the key 'special'

Create the file with

```bash
echo -e "var3=val3\nvar4=val4" > config4.txt
```

<details><summary>show</summary>
<p>

```bash
kubectl create cm configmap4 --from-file=special=config4.txt
kubectl describe cm configmap4
kubectl get cm configmap4 -o yaml
```

</p>
</details>

### Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'

<details><summary>show</summary>
<p>

```bash
kubectl create cm options --from-literal=var5=val5
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env:
    - name: option # name of the env variable
      valueFrom:
        configMapKeyRef:
          name: options # name of config map
          key: var5 # name of the entity in config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- env | grep option # will show 'option=val5'
```

</p>
</details>

### Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'.
Load this configMap as env variables into a new nginx pod <details><summary>show</summary> <p> ```bash kubectl create configmap anotherone --from-literal=var6=val6 --from-literal=var7=val7 kubectl run --restart=Never nginx --image=nginx -o yaml --dry-run=client > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} envFrom: # different than previous one, that was 'env' - configMapRef: # different from the previous one, was 'configMapKeyRef' name: anotherone # the name of the config map dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` ```bash kubectl create -f pod.yaml kubectl exec -it nginx -- env ``` </p> </details> ### Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory. <details><summary>show</summary> <p> ```bash kubectl create configmap cmvolume --from-literal=var8=val8 --from-literal=var9=val9 kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: volumes: # add a volumes list - name: myvolume # just a name, you'll reference this in the pods configMap: name: cmvolume # name of your configmap containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} volumeMounts: # your volume mounts are listed here - name: myvolume # the name that you specified in pod.spec.volumes.name mountPath: /etc/lala # the path inside your container dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` ```bash kubectl create -f pod.yaml kubectl exec -it nginx -- /bin/sh cd /etc/lala ls # will show var8 var9 cat var8 # will show val8 ``` </p> </details> ## SecurityContext kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) ### Create the YAML for an nginx pod that runs with the user ID 101. 
No need to create the pod <details><summary>show</summary> <p> ```bash kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: securityContext: # insert this line runAsUser: 101 # UID for the user containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` </p> </details> ### Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added to its single container <details><summary>show</summary> <p> ```bash kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx securityContext: # insert this line capabilities: # and this add: ["NET_ADMIN", "SYS_TIME"] # this as well resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` </p> </details> ## Resource requests and limits kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) ### Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi <details><summary>show</summary> <p> ```bash kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: containers: - image: nginx name: nginx resources: requests: memory: "256Mi" cpu: "100m" limits: memory: "512Mi" cpu: "200m" dnsPolicy: ClusterFirst restartPolicy: Always status: {} ``` </p> </details> ## Limit Ranges kubernetes.io > Documentation > Concepts > Policies > Limit Ranges (https://kubernetes.io/docs/concepts/policy/limit-range/) ### Create a namespace named limitrange with a LimitRange that limits pod memory to a max of 500Mi and min of 100Mi <details><summary>show</summary> <p> ```bash kubectl create ns limitrange ``` vi 1.yaml ```YAML apiVersion: v1 kind: LimitRange metadata: name: ns-memory-limit namespace: limitrange spec: limits: - max: # max and min define the limit range memory: "500Mi" min: memory: "100Mi" type: Container ``` ```bash kubectl apply -f 1.yaml ``` </p> </details> ### Describe the namespace limitrange <details><summary>show</summary> <p> ```bash kubectl describe limitrange ns-memory-limit -n limitrange ``` </p> </details> ### Create an nginx pod that requests 250Mi of memory in the limitrange namespace <details><summary>show</summary> <p> vi 2.yaml ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx namespace: limitrange spec: containers: - image: nginx name: nginx resources: requests: memory: "250Mi" limits: memory: "500Mi" # limit has to be specified and be <= limitrange dnsPolicy: ClusterFirst restartPolicy: Always status: {} ``` ```bash kubectl apply -f 2.yaml ``` </p> </details> ## Resource Quotas kubernetes.io > Documentation > Concepts > Policies > Resource Quotas (https://kubernetes.io/docs/concepts/policy/resource-quotas/) ### Create ResourceQuota in namespace `one` with hard requests `cpu=1`, `memory=1Gi` and hard limits `cpu=2`, `memory=2Gi`. 
<details><summary>show</summary>
<p>

Create the namespace:

```bash
kubectl create ns one
```

Create the ResourceQuota:

```bash
vi rq-one.yaml
```

```YAML
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-rq
  namespace: one
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```

```bash
kubectl apply -f rq-one.yaml
```

or

```bash
kubectl create quota my-rq --namespace=one --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi
```

</p>
</details>

### Attempt to create a pod with resource requests `cpu=2`, `memory=3Gi` and limits `cpu=3`, `memory=4Gi` in namespace `one`

<details><summary>show</summary>
<p>

```bash
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: one
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "3Gi"
        cpu: "2"
      limits:
        memory: "4Gi"
        cpu: "3"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
kubectl create -f pod.yaml
```

Expected error message:

```bash
Error from server (Forbidden): error when creating "pod.yaml": pods "nginx" is forbidden: exceeded quota: my-rq, requested: limits.cpu=3,limits.memory=4Gi,requests.cpu=2,requests.memory=3Gi, used: limits.cpu=0,limits.memory=0,requests.cpu=0,requests.memory=0, limited: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi
```

</p>
</details>

### Create a pod with resource requests `cpu=0.5`, `memory=1Gi` and limits `cpu=1`, `memory=2Gi` in namespace `one`

<details><summary>show</summary>
<p>

```bash
vi pod2.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: one
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "1Gi"
        cpu: "0.5"
      limits:
        memory: "2Gi"
        cpu: "1"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
kubectl create -f pod2.yaml
```

Show the ResourceQuota usage in namespace `one`

```bash
kubectl get resourcequota -n one
```

```
NAME    AGE   REQUEST                                          LIMIT
my-rq   10m   requests.cpu: 500m/1, requests.memory: 1Gi/1Gi   limits.cpu: 1/2, limits.memory: 2Gi/2Gi
```

</p>
</details>

## Secrets

kubernetes.io > Documentation > Concepts > Configuration > [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)

kubernetes.io > Documentation > Tasks > Inject Data Into Applications > [Distribute Credentials Securely Using Secrets](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/)

### Create a secret called mysecret with the values password=mypass

<details><summary>show</summary>
<p>

```bash
kubectl create secret generic mysecret --from-literal=password=mypass
```

</p>
</details>

### Create a secret called mysecret2 that gets key/value from a file

Create a file called username with the value admin:

```bash
echo -n admin > username
```

<details><summary>show</summary>
<p>

```bash
kubectl create secret generic mysecret2 --from-file=username
```

</p>
</details>

### Get the value of mysecret2

<details><summary>show</summary>
<p>

```bash
kubectl get secret mysecret2 -o yaml
echo -n YWRtaW4= | base64 -d # on MAC it is -D, which decodes the value and shows 'admin'
```

Alternative using `-o jsonpath`:

```bash
kubectl get secret mysecret2 -o jsonpath='{.data.username}' | base64 -d  # on MAC it is -D
```

Alternative using `--template`:

```bash
kubectl get secret mysecret2 --template '{{.data.username}}' | base64 -d  # on MAC it is -D
```

Alternative using `jq`:

```bash
kubectl get secret mysecret2 -o json | jq -r .data.username | base64
-d # on MAC it is -D ``` </p> </details> ### Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo <details><summary>show</summary> <p> ```bash kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: volumes: # specify the volumes - name: foo # this name will be used for reference inside the container secret: # we want a secret secretName: mysecret2 # name of the secret - this must already exist on pod creation containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} volumeMounts: # our volume mounts - name: foo # name on pod.spec.volumes mountPath: /etc/foo #our mount path dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` ```bash kubectl create -f pod.yaml kubectl exec -it nginx -- /bin/bash ls /etc/foo # shows username cat /etc/foo/username # shows admin ``` </p> </details> ### Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME' <details><summary>show</summary> <p> ```bash kubectl delete po nginx kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} env: # our env variables - name: USERNAME # asked name valueFrom: secretKeyRef: # secret reference name: mysecret2 # our secret's name key: username # the key of the data in the secret dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` ```bash kubectl create -f pod.yaml kubectl exec -it nginx -- env | grep USERNAME | cut -d '=' -f 2 # will show 'admin' ``` </p> </details> ### Create a Secret named 'ext-service-secret' in the namespace 'secret-ops'. Then, provide the key-value pair API_KEY=LmLHbYhsgWZwNifiqaRorH8T as literal. <details><summary>show</summary> <p> ```bash export ns="-n secret-ops" export do="--dry-run=client -oyaml" k create secret generic ext-service-secret --from-literal=API_KEY=LmLHbYhsgWZwNifiqaRorH8T $ns $do > sc.yaml k apply -f sc.yaml ``` </p> </details> ### Consuming the Secret. Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops' and consume the Secret as an environment variable. Then, open an interactive shell to the Pod, and print all environment variables. <details><summary>show</summary> <p> ```bash export ns="-n secret-ops" export do="--dry-run=client -oyaml" k run consumer --image=nginx $ns $do > nginx.yaml vi nginx.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: consumer name: consumer namespace: secret-ops spec: containers: - image: nginx name: consumer resources: {} env: - name: API_KEY valueFrom: secretKeyRef: name: ext-service-secret key: API_KEY dnsPolicy: ClusterFirst restartPolicy: Always status: {} ``` ```bash k exec -it $ns consumer -- /bin/sh #env ``` </p> </details> ### Create a Secret named 'my-secret' of type 'kubernetes.io/ssh-auth' in the namespace 'secret-ops'. Define a single key named 'ssh-privatekey', and point it to the file 'id_rsa' in this directory. <details><summary>show</summary> <p> ```bash #Tips, export to variable export do="--dry-run=client -oyaml" export ns="-n secret-ops" #if id_rsa file didn't exist. 
ssh-keygen k create secret generic my-secret $ns --type="kubernetes.io/ssh-auth" --from-file=ssh-privatekey=id_rsa $do > sc.yaml k apply -f sc.yaml ``` </p> </details> ### Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as Volume. Mount the Secret as Volume to the path /var/app with read-only access. Open an interactive shell to the Pod, and render the contents of the file. <details><summary>show</summary> <p> ```bash #Tips, export to variable export ns="-n secret-ops" export do="--dry-run=client -oyaml" k run consumer --image=nginx $ns $do > nginx.yaml vi nginx.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: consumer name: consumer namespace: secret-ops spec: containers: - image: nginx name: consumer resources: {} volumeMounts: - name: foo mountPath: "/var/app" readOnly: true volumes: - name: foo secret: secretName: my-secret optional: true dnsPolicy: ClusterFirst restartPolicy: Always status: {} ``` ```bash k exec -it $ns consumer -- /bin/sh # cat /var/app/ssh-privatekey # exit ``` </p> </details> ## ServiceAccounts kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) ### See all the service accounts of the cluster in all namespaces <details><summary>show</summary> <p> ```bash kubectl get sa --all-namespaces ``` Alternatively ```bash kubectl get sa -A ``` </p> </details> ### Create a new serviceaccount called 'myuser' <details><summary>show</summary> <p> ```bash kubectl create sa myuser ``` Alternatively: ```bash # let's get a template easily kubectl get sa default -o yaml > sa.yaml vim sa.yaml ``` ```YAML apiVersion: v1 kind: ServiceAccount metadata: name: myuser ``` ```bash kubectl create -f sa.yaml ``` </p> </details> ### Create an nginx pod that uses 'myuser' as a service account <details><summary>show</summary> <p> ```bash kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml vi pod.yaml ``` ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: serviceAccountName: myuser # we use pod.spec.serviceAccountName containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` or ```YAML apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx name: nginx spec: serviceAccount: myuser # we use pod.spec.serviceAccount containers: - image: nginx imagePullPolicy: IfNotPresent name: nginx resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} ``` ```bash kubectl create -f pod.yaml kubectl describe pod nginx # will see that a new secret called myuser-token-***** has been mounted ``` </p> </details> ### Generate an API token for the service account 'myuser' <details><summary>show</summary> <p> ```bash kubectl create token myuser ``` </p> </details>
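As a small, optional follow-up (not part of the original exercise set), the sketch below assumes the `myuser` ServiceAccount created above. It requests a token with an explicit lifetime via `--duration` and decodes the JWT payload for a quick look at what the token contains.

```bash
# Request a short-lived token for the 'myuser' ServiceAccount (assumed to exist from the exercise above)
kubectl create token myuser --duration=10m

# A ServiceAccount token is a JWT: its payload is the second dot-separated field.
# base64 may warn about missing padding when decoding; the output is still readable.
kubectl create token myuser --duration=10m | cut -d. -f2 | base64 -d; echo
```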
![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/observability&empty)

# Observability (18%)

## Liveness, readiness and startup probes

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)

### Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe: # our probe
      exec: # add this line
        command: # command definition
        - ls # ls command
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl describe pod nginx | grep -i liveness # run this to see that liveness probe works
kubectl delete -f pod.yaml
```

</p>
</details>

### Modify the pod.yaml file so that the liveness probe starts kicking in after 5 seconds and the interval between probes is 5 seconds. Run it, check the probe, delete it.

<details><summary>show</summary>
<p>

```bash
kubectl explain pod.spec.containers.livenessProbe # get the exact names
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    livenessProbe:
      initialDelaySeconds: 5 # add this line
      periodSeconds: 5 # add this line as well
      exec:
        command:
        - ls
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl describe po nginx | grep -i liveness
kubectl delete -f pod.yaml
```

</p>
</details>

### Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it.

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --dry-run=client -o yaml --restart=Never --port=80 > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    ports:
    - containerPort: 80 # Note: readiness probes run on the container during its whole lifecycle. Since nginx exposes 80, containerPort: 80 is not required for readiness to work.
    readinessProbe: # declare the readiness probe
      httpGet: # add this line
        path: / #
        port: 80 #
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl describe pod nginx | grep -i readiness # to see the pod readiness details
kubectl delete -f pod.yaml
```

</p>
</details>

### Lots of pods are running in the `qa`, `alan`, `test` and `production` namespaces. All of these pods are configured with a liveness probe. Please list all pods whose liveness probes have failed, in the format of `<namespace>/<pod name>` per line.
<details><summary>show</summary> <p> A typical liveness probe failure event ``` LAST SEEN TYPE REASON OBJECT MESSAGE 22m Warning Unhealthy pod/liveness-exec Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory ``` collect failed pods namespace by namespace ```sh kubectl get events -o json | jq -r '.items[] | select(.message | contains("Liveness probe failed")).involvedObject | .namespace + "/" + .name' ``` </p> </details> ## Logging ### Create a busybox pod that runs `i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done`. Check its logs <details><summary>show</summary> <p> ```bash kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done' kubectl logs busybox -f # follow the logs ``` </p> </details> ## Debugging ### Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod <details><summary>show</summary> <p> ```bash kubectl run busybox --restart=Never --image=busybox -- /bin/sh -c 'ls /notexist' # show that there's an error kubectl logs busybox kubectl describe po busybox kubectl delete po busybox ``` </p> </details> ### Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period <details><summary>show</summary> <p> ```bash kubectl run busybox --restart=Never --image=busybox -- notexist kubectl logs busybox # will bring nothing! container never started kubectl describe po busybox # in the events section, you'll see the error # also... kubectl get events | grep -i error # you'll see the error here as well kubectl delete po busybox --force --grace-period=0 ``` </p> </details> ### Get CPU/memory utilization for nodes ([metrics-server](https://github.com/kubernetes-incubator/metrics-server) must be running) <details><summary>show</summary> <p> ```bash kubectl top nodes ``` </p> </details>
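A related sketch (not one of the original exercises): `kubectl top` also works at the pod and container level, which is often what you need when chasing a noisy workload. The pod name `busybox` below is just a placeholder.

```bash
# Pod metrics across all namespaces, sorted by memory consumption
kubectl top pods -A --sort-by=memory

# Per-container breakdown for a single pod (replace 'busybox' with a real pod name)
kubectl top pod busybox --containers
```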
ckad excersises
https gaforgithub azurewebsites net api repo CKAD exercises observability empty Observability 18 Liveness readiness and startup probes kubernetes io Documentation Tasks Configure Pods and Containers Configure Liveness Readiness and Startup Probes https kubernetes io docs tasks configure pod container configure liveness readiness startup probes Create an nginx pod with a liveness probe that just runs the command ls Save its YAML in pod yaml Run it check its probe status delete it details summary show summary p bash kubectl run nginx image nginx restart Never dry run client o yaml pod yaml vi pod yaml YAML apiVersion v1 kind Pod metadata creationTimestamp null labels run nginx name nginx spec containers image nginx imagePullPolicy IfNotPresent name nginx resources livenessProbe our probe exec add this line command command definition ls ls command dnsPolicy ClusterFirst restartPolicy Never status bash kubectl create f pod yaml kubectl describe pod nginx grep i liveness run this to see that liveness probe works kubectl delete f pod yaml p details Modify the pod yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds Run it check the probe delete it details summary show summary p bash kubectl explain pod spec containers livenessProbe get the exact names YAML apiVersion v1 kind Pod metadata creationTimestamp null labels run nginx name nginx spec containers image nginx imagePullPolicy IfNotPresent name nginx resources livenessProbe initialDelaySeconds 5 add this line periodSeconds 5 add this line as well exec command ls dnsPolicy ClusterFirst restartPolicy Never status bash kubectl create f pod yaml kubectl describe po nginx grep i liveness kubectl delete f pod yaml p details Create an nginx pod that includes port 80 with an HTTP readinessProbe on path on port 80 Again run it check the readinessProbe delete it details summary show summary p bash kubectl run nginx image nginx dry run client o yaml restart Never port 80 pod yaml vi pod yaml YAML apiVersion v1 kind Pod metadata creationTimestamp null labels run nginx name nginx spec containers image nginx imagePullPolicy IfNotPresent name nginx resources ports containerPort 80 Note Readiness probes runs on the container during its whole lifecycle Since nginx exposes 80 containerPort 80 is not required for readiness to work readinessProbe declare the readiness probe httpGet add this line path port 80 dnsPolicy ClusterFirst restartPolicy Never status bash kubectl create f pod yaml kubectl describe pod nginx grep i readiness to see the pod readiness details kubectl delete f pod yaml p details Lots of pods are running in qa alan test production namespaces All of these pods are configured with liveness probe Please list all pods whose liveness probe are failed in the format of namespace pod name per line details summary show summary p A typical liveness probe failure event LAST SEEN TYPE REASON OBJECT MESSAGE 22m Warning Unhealthy pod liveness exec Liveness probe failed cat can t open tmp healthy No such file or directory collect failed pods namespace by namespace sh kubectl get events o json jq r items select message contains Liveness probe failed involvedObject namespace name p details Logging Create a busybox pod that runs i 0 while true do echo i date i i 1 sleep 1 done Check its logs details summary show summary p bash kubectl run busybox image busybox restart Never bin sh c i 0 while true do echo i date i i 1 sleep 1 done kubectl logs busybox f follow the logs p details Debugging 
## Debugging

### Create a busybox pod that runs `ls /notexist`. Determine if there's an error (of course there is), see it. In the end, delete the pod.

<details><summary>show</summary>
<p>

```bash
kubectl run busybox --restart=Never --image=busybox -- /bin/sh -c 'ls /notexist'
# show that there's an error
kubectl logs busybox
kubectl describe po busybox
kubectl delete po busybox
```

</p>
</details>

### Create a busybox pod that runs `notexist`. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period.

<details><summary>show</summary>
<p>

```bash
kubectl run busybox --restart=Never --image=busybox -- notexist
kubectl logs busybox # will bring nothing! The container never started
kubectl describe po busybox # in the events section, you'll see the error
# also...
kubectl get events | grep -i error # you'll see the error here as well
kubectl delete po busybox --force --grace-period=0
```

</p>
</details>

### Get CPU/memory utilization for nodes ([metrics-server](https://github.com/kubernetes-incubator/metrics-server) must be running)

<details><summary>show</summary>
<p>

```bash
kubectl top nodes
```

</p>
</details>
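Node metrics are only half the picture: when metrics-server is running it also serves pod metrics. The commands below are a sketch beyond the original exercise:

```bash
kubectl top pods -A            # CPU/memory usage of all pods in all namespaces
kubectl top pods --containers  # break the usage down per container
```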
# Managing Kubernetes with Helm
- Note: Helm is part of the new CKAD syllabus. Here are a few examples of using Helm to manage Kubernetes.

## Helm in K8s

### Creating a basic Helm chart

<details><summary>show</summary>
<p>

```bash
helm create chart-test ## this creates a basic chart skeleton named chart-test
```

</p>
</details>

### Running a Helm chart

<details><summary>show</summary>
<p>

```bash
helm install -f myvalues.yaml myredis ./redis
```

</p>
</details>

### Find pending Helm deployments on all namespaces

<details><summary>show</summary>
<p>

```bash
helm list --pending -A
```

</p>
</details>

### Uninstall a Helm release

<details><summary>show</summary>
<p>

```bash
helm uninstall -n namespace release_name
```

</p>
</details>

### Upgrading a Helm chart

<details><summary>show</summary>
<p>

```bash
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
```

</p>
</details>

### Using Helm repo

<details><summary>show</summary>
<p>

Add, list, remove, update and index chart repos:

```bash
helm repo add [NAME] [URL] [flags]

helm repo list / helm repo ls

helm repo remove [REPO1] [flags]

helm repo update / helm repo up
helm repo update [REPO1] [flags]

helm repo index [DIR] [flags]
```

</p>
</details>

### Download a Helm chart from a repository

<details><summary>show</summary>
<p>

```bash
helm pull [chart URL | repo/chartname] [...] [flags] ## this downloads the chart, it does not install it
helm pull --untar [repo/chartname] # untar the chart after downloading it
```

</p>
</details>

### Add the Bitnami repo at https://charts.bitnami.com/bitnami to Helm

<details><summary>show</summary>
<p>

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```

</p>
</details>

### Write the contents of the values.yaml file of the `bitnami/node` chart to standard output

<details><summary>show</summary>
<p>

```bash
helm show values bitnami/node
```

</p>
</details>

### Install the `bitnami/node` chart setting the number of replicas to 5

<details><summary>show</summary>
<p>

To achieve this, we need two key pieces of information:

- The name of the attribute in values.yaml which controls the replica count
- A simple way to set the value of this attribute during installation

To identify the name of the attribute in the values.yaml file, we could get all the values, as in the previous task, and then grep for attributes matching the pattern `replica`:

```bash
helm show values bitnami/node | grep -i replica
```

which returns

```bash
## @param replicaCount Specify the number of replicas for the application
replicaCount: 1
```

We can use the `--set` argument during installation to override attribute values. Hence, to set the replica count to 5, we need to run

```bash
helm install mynode bitnami/node --set replicaCount=5
```

</p>
</details>
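As an alternative to `--set`, the same override can live in a values file passed with `-f`. This is only a sketch, not part of the original exercise; the file name `replicas.yaml` and release name `mynode` are arbitrary:

```bash
cat <<EOF > replicas.yaml
replicaCount: 5
EOF
helm install mynode bitnami/node -f replicas.yaml
helm get values mynode   # shows the user-supplied values the release was installed with
```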