apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: pipeline
description: Pipeline that deploys to GKE
serialPipeline:
  stages:
  - targetId: target1
    profiles: [dev]
    strategy:
      standard:
        predeploy:
          actions: [predeploy-action]
  - targetId: target2
    profiles: [staging]

apiVersion: skaffold/v4beta5
kind: Config
customActions:
- name: predeploy-action
  executionMode:
    kubernetesCluster:
      jobManifestPath: predeploy-job.yaml
  containers:
  - name: predeploy-job-container
    image: worker-image
profiles:
- name: dev
  manifests:
    rawYaml:
    - dev-deployment.yaml
- name: staging
  manifests:
    rawYaml:
    - staging-deployment.yaml
Above are my delivery pipeline definition and the skaffold.yaml that defines the pre-deploy action it references.
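For reference, this is roughly how I register the pipeline and cut a release; the region, project, and release names below are placeholders:

```shell
# Register the delivery pipeline and targets
# (clouddeploy.yaml holds the DeliveryPipeline definition above).
gcloud deploy apply --file=clouddeploy.yaml \
  --region=us-central1 --project=my-project

# Create a release; Cloud Deploy renders skaffold.yaml,
# including the custom predeploy action.
gcloud deploy releases create release-001 \
  --delivery-pipeline=pipeline \
  --region=us-central1 --project=my-project
```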
apiVersion: batch/v1
kind: Job
metadata:
  name: predeploy-job
spec:
  template:
    metadata:
      labels:
        app: predeploy-job
    spec:
      restartPolicy: Never
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: cloud.google.com/gke-spot
                operator: In
                values:
                - "true"
      nodeSelector:
        key: value
      tolerations:
      - key: "key"
        operator: "Exists"
        effect: "NoSchedule"
      initContainers:
      - name: init-container
        image: ubuntu
      containers:
      - name: predeploy-job-container
        image: worker-image
        env:
        - name: env1
          value: value1
        volumeMounts:
        - name: local-ssd-storage
          mountPath: /local-ssd
          readOnly: true
      volumes:
      - name: local-ssd-storage
        hostPath:
          path: /path
This is the Job manifest that I would like to run on my GKE cluster before every deployment; it is customized to take advantage of volume provisioning, tolerations, node affinity, and so on.
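For what it's worth, the manifest itself validates and runs fine when applied by hand (assuming kubectl is pointed at the target cluster):

```shell
# Validate the manifest client-side without touching the cluster.
kubectl apply --dry-run=client -f predeploy-job.yaml

# Run the Job once manually and confirm scheduling honors
# the node affinity and tolerations.
kubectl apply -f predeploy-job.yaml
kubectl get pods -l app=predeploy-job -o wide
kubectl logs job/predeploy-job -c predeploy-job-container
```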
Expected behavior:
- The Job manifest is recognized as part of the predeploy action and is scheduled on my GKE cluster
Actual behavior:
- The Job manifest doesn't get applied at all; instead, a bare container without any of this configuration is started, which is not what we want
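This is how I've been inspecting the predeploy phase (the rollout and release names here are placeholders); the job runs only ever show the bare predeploy-job-container:

```shell
# List the job runs for a rollout; the predeploy job run shows up here.
gcloud deploy job-runs list \
  --delivery-pipeline=pipeline --release=release-001 \
  --rollout=release-001-to-target1-0001 \
  --region=us-central1 --project=my-project
```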
I'm wondering if any of you have faced a similar issue. Are there any workarounds, or am I doing something wrong?
Thank you!