Pod fails to start after eu-central service outage — help needed!


Hello,

After the eu-central service went down and then came back online today, my pod — which had been running smoothly for a week — is now failing to start and marked as "unhealthy."

In the pod details under the Events tab, the following message keeps repeating:

FailedScheduling, Last Occur: 1h3m
First Seen: 1h3m
Count: 0
0/31 nodes are available:
- 1 node(s) have insufficient CPU
- 1 node(s) had untolerated taint {database.run.claw.cloud/node: }
- 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }
- 2 node(s) had untolerated taint {monitor.run.claw.cloud/node: }
- 3 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }
- 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }
- 4 node(s) have too many pods
- 4 node(s) had untolerated taint {node.cilium.io/agent-not-ready: }
- 6 node(s) had untolerated taint {devbox.sealos.io/node: }
- 7 node(s) had volume node affinity conflict.
Preemption: 0/31 nodes are available:
- 27 nodes: Preemption is not helpful for scheduling
- 4 nodes: No preemption victims found for incoming pod.
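(A note on reading the event above: the per-reason counts can overlap, because a single node may fail several scheduling predicates at once, so they need not sum to the node total. A minimal sketch tallying the message, in Python, just for illustration:)

```python
import re

# FailedScheduling message pasted from the pod's Events tab (verbatim)
event = """0/31 nodes are available:
- 1 node(s) have insufficient CPU
- 1 node(s) had untolerated taint {database.run.claw.cloud/node: }
- 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }
- 2 node(s) had untolerated taint {monitor.run.claw.cloud/node: }
- 3 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }
- 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }
- 4 node(s) have too many pods
- 4 node(s) had untolerated taint {node.cilium.io/agent-not-ready: }
- 6 node(s) had untolerated taint {devbox.sealos.io/node: }
- 7 node(s) had volume node affinity conflict."""

# Total node count from the "0/31 nodes are available" header
total_nodes = int(re.match(r"0/(\d+)", event).group(1))

# Per-reason counts from each "- N node(s) ..." line
per_reason = [int(n) for n in re.findall(r"- (\d+) node\(s\)", event)]

print(f"{total_nodes} nodes, reason counts sum to {sum(per_reason)}")
# The sum (32) exceeds the node total (31) because one node can fail
# more than one predicate.
```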

This has been happening for over 5 hours now.

How can I get my pod back up and running? It still holds important data that hasn't been backed up, and I really need help recovering the service as soon as possible.

Any assistance or suggestions would be greatly appreciated!

Thank you!

1 Answer

My Docker containers in the Germany region were shut down too, and I can't open the Launchpad now. Maybe their EU machines are broken.