AWS Secrets Manager for EKS Env Variables

With an EKS cluster created I followed the AWS instructions for integrating secrets manager so I could pull secrets and use them as environment variables in pods.

The secrets were mounted fine and I could cat them out when exec’ing into the container. However, nothing I did would allow the secrets to be used as environment variables.

The instructions from AWS were:

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml

I spent the best part of 2 days trying many combinations of settings but

kubectl get secrets -n default

did not show any secrets created in k8s. Eventually I stumbled upon an option I could pass into the helm chart:

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --set syncSecret.enabled=true
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml

Notice the config option passed into the helm chart:

--set syncSecret.enabled=true

Uninstalling with helm and reinstalling with this option instantly solved the problem.

So if you can see the secrets inside the container in the mounted volume but you can’t set them as environment variables, check that the driver was installed with this value so that the k8s Secret is actually created. Without it, the secretObjects > secretName section (as below) is never actioned: the files are still mounted, but no k8s Secret is synced, so there is nothing for an env var to reference.

  secretObjects:
  - secretName: awscredentials
    type: Opaque
    data:
    - objectName: accesskey
      key: accesskey
    - objectName: secretkey
      key: secretkey
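To show the whole chain from Secrets Manager object to mounted file to synced k8s Secret to environment variable, here is a complete SecretProviderClass and Pod pairing. This is an added example, not the manifests from the original post; the Secrets Manager secret name, the IRSA service account, and the container image are assumptions.

```yaml
# Added example, not from the original post. Placeholders: Secrets Manager secret
# "prod/app/aws-credentials" (JSON with keys accesskey/secretkey) and IRSA SA "app-sa".
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-credentials
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/app/aws-credentials"
        objectType: "secretsmanager"
        jmesPath:                       # split one JSON secret into two mounted files
          - {path: accesskey, objectAlias: accesskey}   # alias = file name = objectName below
          - {path: secretkey, objectAlias: secretkey}
  secretObjects:                        # ignored unless the driver runs with syncSecret.enabled=true
  - secretName: awscredentials          # k8s Secret created in the pod's namespace
    type: Opaque
    data:
    - objectName: accesskey
      key: accesskey
    - objectName: secretkey
      key: secretkey
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-sa
  containers:
  - name: app
    image: public.ecr.aws/docker/library/busybox:1.36
    # Exits non-zero if the env var is empty, i.e. the Secret was never synced
    command: ["sh", "-c", "[ -n \"$AWS_ACCESS_KEY_ID\" ] && sleep 3600"]
    env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom: {secretKeyRef: {name: awscredentials, key: accesskey}}
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom: {secretKeyRef: {name: awscredentials, key: secretkey}}
    volumeMounts:                       # the CSI volume must still be mounted: the synced
    - name: secrets                     # Secret exists only while a pod has it attached
      mountPath: /mnt/secrets
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-credentials
```

The driver creates the `awscredentials` Secret only when a pod mounting the volume is scheduled, so run `kubectl get secret awscredentials` after the pod exists, not right after applying the SecretProviderClass.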
