by Sagayaraj D | last updated on August 3, 2022

Advanced Spinnaker users want to keep the Kubernetes account information in HashiCorp Vault, so that it can be updated independently of Spinnaker while keeping the account information confidential.

The feature of storing accounts in an external configuration store is available from Spinnaker 1.15.x onward, and can be implemented for the Clouddriver, Igor, and Echo services. While Kubernetes accounts in the Clouddriver service are refreshed automatically, the other cloud-provider accounts, as well as Igor and Echo accounts, are honored by Spinnaker only at the time the respective Pod/service is started.

Keeping the Kubernetes accounts and kubeconfig file(s) in HashiCorp Vault is preferred over GitHub or AWS S3 because of the shielding capability that Vault offers.

Scope

This document describes the steps for configuring Spinnaker’s Clouddriver with HashiCorp Vault as the external configuration store, under the following environment:
– Spinnaker 1.17.6 (the same steps should also work on newer versions of Spinnaker)
– HashiCorp Vault as the external configuration store
– Halyard service’s deployment profile is default (i.e. the $HOME/.hal/default directory)

Since the accounts are of the Kubernetes type, they are dynamically refreshed – so you also have the liberty to update the accounts at run-time.

How does it work?

Spinnaker loads cloud-provider accounts using the Clouddriver service configuration. By default, all of the Spinnaker services’ accounts are retrieved from the halconfig file. Configuring Spinnaker’s built-in ‘Spring Cloud Config server’ with Vault as the external store makes the services look for their corresponding configuration at the Vault endpoint.

The Vault store houses the Clouddriver account information (the content of clouddriver-local.json, not yml) in a secret path (clouddriver, i.e. spinnaker/clouddriver) beneath a backend engine (spinnaker). The kubeconfig files of the accounts are stored separately under a different path in the same backend engine. These kubeconfig files are pulled to the Clouddriver Pod by a periodic/cron script running in a sidecar container within the Clouddriver Pod.

You can change the Vault backend, secret path and other values as per your need.
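
For reference, the Vault layout used throughout this document looks as follows (illustrative – every path here is a default of ours and can be renamed):

spinnaker/clouddriver           #content of clouddriver-local.json (the accounts)
spinnaker/k8configs/<account>   #field 'kubeconfig' holds that account's kubeconfig file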

Procedure Outline

  1. Vault administrator creates a Token for Spinnaker
  2. Store Clouddriver-local.json containing accounts into Vault
  3. Store the Accounts’ kubeconfig files in Vault
  4. Update Clouddriver Pod with Sidecar container to pull Kubeconfig files
  5. Configure Spring Cloud Config Server in Spinnaker
  6. Verify the new accounts are reflected from Vault store

Detailed Procedure

1. Vault administrator creates a Token for Spinnaker

The Vault administrator creates a token that has privileges to read, write, and update secrets in a given backend – spinnaker.

Steps: Open the Vault URL > Click ‘Enable a Secrets Engine’ > Select the ‘Generic’ engine (kv type) > Type the Path as ‘spinnaker’. We refer to this ‘spinnaker’ path as the secrets engine/backend.
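
If you prefer the CLI, the following is a minimal sketch of the same setup, assuming KV version 2 (matching the kvVersion: 2 used in Step 5); the policy name spinnaker-policy is our own choice:

#Enable a KV v2 secrets engine at the path 'spinnaker'
vault secrets enable -path=spinnaker -version=2 kv

#Grant read/write on the backend (KV v2 prefixes data paths with data/)
vault policy write spinnaker-policy - <<'EOF'
path "spinnaker/data/*"     { capabilities = ["create", "read", "update", "delete", "list"] }
path "spinnaker/metadata/*" { capabilities = ["read", "list"] }
EOF

#Create the token that Spinnaker and the sidecar script will use
vault token create -policy=spinnaker-policy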

If your Vault is the Enterprise edition, you will also create a namespace – let’s call it ‘vaultns-spinnaker’.

2. Store Clouddriver-local.json containing accounts into Vault

The Kubernetes accounts from the halconfig are extracted to a clouddriver-local.yml file, which is then converted to JSON format using a suitable Internet site or the ‘yq’ command-line tool. The JSON content is then stored under the ‘spinnaker’ backend at the path ‘clouddriver’.

Example: clouddriver-local.yml, extracted from the halconfig file

saga@sagamc:~$ cat clouddriver-local.yml
kubernetes:
  enabled: true
  accounts:
  - name: my-account
    requiredGroupMembership: []
    providerVersion: V2
    permissions: {}
    dockerRegistries: []
    configureImagePullSecrets: true
    cacheThreads: 1
    namespaces: []
    omitNamespaces: []
    kinds: []
    omitKinds: []
    customResources: []
    cachingPolicies: []
    kubeconfigFile: /tmp/k8configs/my-account.yml
    #kubeconfigFile: /home/spinnaker/.hal/default/staging/dependencies/2054179555-config
    liveManifestCalls: true
    oAuthScopes: []
    onlySpinnakerManaged: true

IMPORTANT: Review the line ‘kubeconfigFile: /tmp/k8configs/my-account.yml’. The kubeconfigFile element must be the absolute path of the kubeconfig file on the Clouddriver Pod, and the file must exist for the account to be recognized and used by Clouddriver. (Downloading this file onto the Clouddriver Pod is done by the cron script in Step 4; make sure the path in the main container stays in sync with the account’s kubeconfigFile entry in clouddriver-local.yml/.json.)

Converting clouddriver-local.yml to JSON:

yq read clouddriver-local.yml -j -P > clouddriver-local.json

Once the clouddriver-local.json content is available, we store it in Vault with the command:

vault kv put spinnaker/clouddriver @clouddriver-local.json

In the above command, the specification is as follows:
– Vault backend: spinnaker
– Path: clouddriver
– Value: content of the clouddriver-local.json file
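
To confirm the write, you can optionally read the secret back:

vault kv get spinnaker/clouddriver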

3. Store the Accounts’ kubeconfig files in Vault

The Kubernetes accounts’ kubeconfig files are stored in Vault independently of Spinnaker’s ‘Spring Cloud Config server’. That is, the Cloud Config server cannot retrieve the kubeconfig files, so storing the files and retrieving them to the Clouddriver Pod is implemented by the user.

We use the following specification to store the kubeconfig files.
– Vault backend: spinnaker
– Path: k8configs/<account>
– Key/Field: kubeconfig
– Value: Content of Kubeconfig file

Every kubeconfig file is stored in Vault with the command:

vault kv put spinnaker/k8configs/my-account kubeconfig=@my-account.yml
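
If you have many accounts, a small loop can store them all – a sketch assuming each kubeconfig file is named <account>.yml in the current directory:

for f in *.yml; do
  vault kv put spinnaker/k8configs/"${f%.yml}" kubeconfig=@"$f"
done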

4. Update Clouddriver Pod with Sidecar container to pull Kubeconfig files

Once the kubeconfig files are stored in Vault, we need to make sure those files are downloaded to the Clouddriver Pod, so that Clouddriver can read them locally. To achieve this, we leverage the sidecar container feature of the Pod object. By sharing a directory volume between the main and sidecar containers, we run a cron script on the sidecar to download the kubeconfig files into the shared directory volume, so that the kubeconfig files are also available in the main container.

The cron script is created as a ConfigMap and then mounted as a volume (file) in the sidecar container. Once the script is available in the container, it is executed periodically to download any new kubeconfig files from the Vault path.

The script mainly does the following:
* Checks for any new account

vault kv list spinnaker/k8configs

* Downloads the new account’s kubeconfig file

vault kv get -field=kubeconfig spinnaker/k8configs/my-account > my-account.yml

The following script goes into the ConfigMap:

#!/bin/bash
#This script is executed in the Clouddriver Pod's sidecar container to download the kubeconfig files
#Make sure the vault CLI is available on the container and accessible by the script
#Ensure VAULT_ADDR and VAULT_TOKEN are set in the shell environment
#Set these variables to your choice
export VK8S_PATH='spinnaker/k8configs' #Prefix V for Vault
export LK8S_PATH='/tmp/k8configs' #Prefix L for Local
export R_FILE=/tmp/remote.txt
export L_FILE=/tmp/local.txt
export NEWKEYS=/tmp/newkeys.txt
#Make sure the vault CLI is available
if ! command -v vault > /dev/null 2>&1; then
  echo "Vault CLI is not available, exiting..."
  exit 1
fi
#Create the local config path if it does not exist
[ ! -d "$LK8S_PATH" ] && { mkdir -p "$LK8S_PATH"; echo "Created k8configs directory"; }
#Set-1: account keys present in Vault
vault kv list $VK8S_PATH | egrep -iv 'Keys|----' > $R_FILE
#Set-2: kubeconfig files already downloaded locally
ls -1 $LK8S_PATH | sed 's/\.yml$//' > $L_FILE
#Keys that are new in Vault (present remotely, absent locally)
comm -23 $R_FILE $L_FILE > $NEWKEYS
if [ ! -s $NEWKEYS ]; then
  echo "There are no new kubeconfigs to download"
  exit 0
fi
#Download each new kubeconfig file from Vault
for i in `cat $NEWKEYS`; do
  echo -en "Fetching kubeconfig file of Account : $i"
  vault kv get -field=kubeconfig $VK8S_PATH/$i > $LK8S_PATH/$i.yml
  rc=$?
  if [ $rc -eq 0 ]; then
    echo " [success]"
    echo "  File: $LK8S_PATH/$i.yml"
  else
    echo " [failed with return code $rc]"
  fi
done
echo "Completed kubeconfig download sync"
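
One way to create this ConfigMap from the script file is with kubectl, assuming the script is saved locally as k8configs-sync.sh; the ConfigMap name vault-k8s and the namespace spinnaker match the Deployment snippet further below:

kubectl -n spinnaker create configmap vault-k8s --from-file=k8configs-sync.sh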

The Vault address and its token are stored as a Kubernetes Secret:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: vaultsec
  namespace: spinnaker
data:
  #Replace the values here: vaultaddr is the Vault address (e.g. http://vault-host:8200) and vaulttoken is the token, both base64-encoded
  vaultaddr: abcovLzE3Mi40Mi40Mi4xMTE6ODIwMA==
  vaulttoken: abca054QXhQbW01T3hDUzVNaU1RbWl1Qmc=
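
To produce the base64-encoded values for your own environment (the address and token below are illustrative):

echo -n 'http://172.42.42.111:8200' | base64
echo -n '<your-vault-token>' | base64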

Configuring the sidecar container with the script:

kind: Deployment
metadata:
  labels:
spec:
  template:
    metadata:
    spec:
      containers:
      - name: clouddriver
        volumeMounts:
        - mountPath: /tmp/k8configs
          name: vvault
      - name: vault-c
        image: alpine:latest #Use an image that has the vault CLI, per the script's prerequisites
        command: ["/bin/sh", "-c"]
        args:
        - |
          while true; do sh /tmp/k8configs-sync.sh; sleep 30m; done
        env:
        - name: VAULT_ADDR
          valueFrom:
            secretKeyRef:
              key: vaultaddr
              name: vaultsec
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: vaulttoken
              name: vaultsec
        volumeMounts:
        - mountPath: /tmp/k8configs
          name: vvault
        - mountPath: /tmp/k8configs-sync.sh
          name: vcm-vault
          subPath: k8configs-sync.sh
      volumes:
      - emptyDir: {}
        name: vvault
      - name: vcm-vault
        configMap:
          defaultMode: 420
          name: vault-k8s

If you look carefully at the above sidecar configuration, we are configuring two things:

  1. A shared directory volume of type ‘emptyDir’ is created and mounted to both containers at the path ‘/tmp/k8configs’. The ‘emptyDir’ type creates an empty directory on the host machine before the containers are started, which is then shared between the co-located containers as long as the volume is referenced.
  2. The ConfigMap is mounted to the sidecar container as a script at the path /tmp/k8configs-sync.sh, which gets executed every 30 minutes to download any new kubeconfig files.
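
To confirm the sidecar is syncing, you can tail its logs – a sketch assuming the standard Halyard-generated Deployment name spin-clouddriver in the spinnaker namespace:

kubectl -n spinnaker logs deploy/spin-clouddriver -c vault-c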

5. Configure Spring Cloud Config Server in Spinnaker

Once Vault is equipped with the Clouddriver accounts and kubeconfig files, Spinnaker is ready to turn on the Cloud Config server, so that the Kubernetes accounts can be read from Vault.

The content of the $HOME/.hal/default/profiles/spinnakerconfig.yml file is as below.

spring:
  profiles:
    include: vault
  cloud:
    config:
      server:
        vault:
          host: 172.42.42.111 #Or hostname
          port: 8200 #Or 443 if https URL
          scheme: http #Or https if https URL
          backend: spinnaker
          kvVersion: 2
          token: xxxxxxxxx
          namespace: vaultns-spinnaker #If Enterprise Vault, include this entry
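
After saving the profile, redeploy Spinnaker so the services pick up the new configuration:

hal deploy apply
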
6. Verify the new accounts are reflected from Vault store

Once the Config server is activated with Vault as the external configuration store, we can log in to Spinnaker and check the URL http:///credentials to see the available accounts. Additionally, you can go to any dummy Kubernetes pipeline, add a ‘Deploy (Manifest)’ stage, and check whether the accounts from Vault’s spinnaker/clouddriver path are listed. You can also add/modify accounts in the spinnaker/clouddriver value and see whether they get reflected in Spinnaker using the steps just mentioned.
Perform a deployment to the target Kubernetes account/cluster to make sure the kubeconfig files are downloaded to the Clouddriver Pod successfully and that the Pod can connect to the target cluster.
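
A quick check from inside the cluster – again assuming the standard Halyard-generated Deployment name spin-clouddriver – lists the kubeconfig files that have been synced into the shared volume:

kubectl -n spinnaker exec deploy/spin-clouddriver -c clouddriver -- ls -l /tmp/k8configs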
