Issue Report: NjgkQq7-er
Workspace Region: ap-northeast-1
Issue
On April 11, 2026, I noticed that one of my services (vaultwarden) was returning a "no healthy upstream" error when accessed. After opening the App Launchpad, I saw that the pod for this service was stuck in Terminating status. I attempted to restart it without success, so I ran the following command in the terminal:
kubectl delete pod vaultwarden-0 --grace-period=0 --force
I then tried to start the service again via the App Launchpad. In the Pod Details, I found the following event:
Reason: FailedScheduling
First Seen: 10m1s
Last Occurred: 10m1s
Count: 0
0/53 nodes are available: 1 node(s) had untolerated taint {database.run.claw.cloud/node: }, 2 node(s) had untolerated taint {monitor.run.claw.cloud/node: }, 29 node(s) had volume node affinity conflict, 3 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) had untolerated taint {node.cilium.io/agent-not-ready: }, 7 Too many pods, 8 node(s) were unschedulable. preemption: 0/53 nodes are available: 46 Preemption is not helpful for scheduling, 7 No preemption victims found for incoming pod..
I am not very familiar with Kubernetes. Because the affected service is my self-hosted password manager, I may have acted rashly out of urgency... if that is indeed the case, I sincerely apologize.
Request
- What should I do next to resolve this issue?
- Can support staff help me restore the service, or assist me in retrieving the data from my persistent storage?