How to solve the FailedScheduling error caused by a k8s webhook
I have a project that needs to intercept pods after scheduling and add some data to them, so I use the following webhook configuration to intercept the binding:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutationpod
...
...
rules:
- operations: ["CREATE"]
  apiGroups: [""]
  apiVersions: ["v1"]
  resources: ["pods/binding"]
Then I modify the pod in the webhook handler:
func (p *PodMutate) Handle(ctx context.Context, req admission.Request) admission.Response {
	// Decode the Binding object submitted to the pods/binding subresource.
	binding := &corev1.Binding{}
	if err := p.decoder.Decode(req, binding); err != nil {
		log.Error(err)
		return admission.Allowed("")
	}
	// Fetch the pod that is being bound.
	pod := &corev1.Pod{}
	if err := p.Client.Get(ctx, types.NamespacedName{Namespace: binding.Namespace, Name: binding.Name}, pod); err != nil {
		log.Error(err)
		return admission.Allowed("")
	}
	// modify pod code (see the illustrative sketch after this handler)
	...
	...
	if err := p.Client.Update(ctx, pod); err != nil {
		log.Error(err)
	}
	return admission.Allowed("")
}
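The elided "modify pod code" step only writes some scheduling-related data onto the pod object, along the lines of this sketch (the helper name and annotation key are placeholders for illustration, not my real ones):

// Illustrative sketch of the kind of change made in "modify pod code":
// record the node chosen by the scheduler as an annotation on the pod.
func annotateBoundNode(pod *corev1.Pod, binding *corev1.Binding) {
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	// "example.com/bound-node" is a placeholder annotation key.
	pod.Annotations["example.com/bound-node"] = binding.Target.Name
}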
During testing I found that some pods get a FailedScheduling event:
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling  41s  default-scheduler  AssumePod failed: pod 138cef2c-029b-407f-86c2-e1231eca9233 is in the cache, so can't be assumed
  Normal   Scheduled         41s  default-scheduler  Successfully assigned default/nginx-deployment-f8c77cd9d-m8zvd to <nodeName>
  Normal   Pulled            40s  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created           40s  kubelet            Created container nginx
  Normal   Started           40s  kubelet            Started container nginx
I'm confused as to why intercepting pods/binding causes another scheduler to try to schedule the pod. How can I change my setup to avoid this error?
I really appreciate any help with this.
Note: I run multiple schedulers for high availability, but they use the leader-election mechanism (--leader-elect=true).
I used the official examples for reference.