Using a PV in an OpenShift 3 cron job

I have been able to successfully create a cron job for my OpenShift 3 project. The project is a lift and shift from an existing Linux web server, and part of the existing application requires several cron tasks to run. The one I am currently looking at is a daily update to the application's database. As part of the execution of the cron job I want to write to a log file. There is already a PV/PVC defined for the main application, and I was intending to use that to hold the logs for my cron job, but it seems the cron job is not being given access to the PV.
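For reference, the existing claim can be confirmed with the following command (data-pv is the claim name referenced in the manifest below):

oc get pvc data-pv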

I am using the following inProgress.yml for the definition of the cron job

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: "*/5 * * * *"       
  concurrencyPolicy: "Replace"  
  startingDeadlineSeconds: 200  
  suspend: false                
  successfulJobsHistoryLimit: 3 
  failedJobsHistoryLimit: 1     
  jobTemplate:                  
    spec:
      template:
        metadata:
          labels:               
            parent: "cronjobInProgress"
        spec:
          containers:
          - name: in-progress
            image: <image name>
            command: ["php",  "inProgress.php"]
          restartPolicy: OnFailure 
          volumeMounts:
            - mountPath: /data-pv
              name: log-vol
      volumes:
        - name: log-vol
          persistentVolumeClaim:
            claimName: data-pv

I am using the following command to create the cron job

oc create -f inProgress.yml

When the job runs, the following warnings are produced:

PHP Warning: fopen(/data-pv/logs/2022-04-27-app.log): failed to open stream: No such file or directory in /opt/app-root/src/errorHandler.php on line 75
WARNING: [2] mkdir(): Permission denied, line 80 in file /opt/app-root/src/errorLogger.php
WARNING: [2] fopen(/data-pv/logs/2022-04-27-inprogress.log): failed to open stream: No such file or directory, line 60 in file /opt/app-root/src/errorLogger.php

Looking at the yml for the pod that is executed, there is no mention of data-pv; it appears as though the secret volumeMount, which has been added by OpenShift, is removing any further volumeMounts.
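For reference, the pod spec shown below can be dumped with something like the following (the generated pod name will vary):

oc get pods
oc get pod <generated-pod-name> -o yaml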

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: '2022-04-27T13:25:04Z'
  generateName: in-progress-1651065900-
...
    volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-n9jsw
        readOnly: true
...
  volumes:
    - name: default-token-n9jsw
      secret:
        defaultMode: 420
        secretName: default-token-n9jsw

How can I access the PV from within the cron job?



Solution 1:[1]

Your manifest is incorrect. The volumes block needs to be part of spec.jobTemplate.spec.template.spec; that is, it needs to be indented at the same level as spec.jobTemplate.spec.template.spec.containers. In its current position it is invisible to OpenShift.

Similarly, volumeMounts and restartPolicy are arguments to the container block, and need to be indented accordingly.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Replace
  startingDeadlineSeconds: 200
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: cronjobInProgress
        spec:
          containers:
            - name: in-progress
              image: <image name>
              command:
                - php
                - inProgress.php
              restartPolicy: OnFailure
              volumeMounts:
                - mountPath: /data-pv
                  name: log-vol
          volumes:
            - name: log-vol
              persistentVolumeClaim:
                claimName: data-pv
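One way to confirm that the volume definitions are now picked up, assuming the cron job is recreated from the corrected manifest:

oc delete cronjob in-progress
oc create -f inProgress.yml
oc get cronjob in-progress -o yaml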

Solution 2:[2]

Thanks for the informative response, larsks.

OpenShift displayed the following when I copied your manifest suggestions:

$ oc create -f InProgress.yml
The CronJob "in-progress" is invalid: spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"

Your answer was very helpful, and I was able to resolve this problem by moving restartPolicy: OnFailure up to the pod spec; the final manifest is below.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: in-progress
spec:
  schedule: "*/5 * * * *"       
  concurrencyPolicy: "Replace"  
  startingDeadlineSeconds: 200  
  suspend: false                
  successfulJobsHistoryLimit: 3 
  failedJobsHistoryLimit: 1     
  jobTemplate:                  
    spec:
      template:
        metadata:
          labels:               
            parent: "cronjobInProgress"
        spec:
          restartPolicy: OnFailure 
          containers:
          - name: in-progress
            image: <image name>
            command: ["php",  "updateToInProgress.php"]
            volumeMounts:
              - mountPath: /data-pv
                name: log-vol
          volumes:
            - name: log-vol
              persistentVolumeClaim:
                claimName: data-pv
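As a quick check after the next run, the pods created by the job carry the parent: cronjobInProgress label from the manifest, and oc describe shows whether log-vol is mounted at /data-pv (the exact pod name will vary):

oc get pods -l parent=cronjobInProgress
oc describe pod <pod-name>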
      

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: larsks
Solution 2: CraigW