In today's project... this time with @javierprovecho (Thanks bro):
Is it possible to run worker instances (at home) in a managed Kubernetes Service?
TL;DR.
You can not!
Kubernetes imposes the following fundamental requirements on any networking implementation:
- pods on a node can communicate with all pods on all nodes without NAT
Source: https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model
The original idea was to use a free managed Kubernetes control plane (AKS, GKE, OVH) with your on-prem (at home) Raspberry Pis, Intel NUCs, or even your laptop.
This way, you can run workloads without worrying about maintaining the most critical component of a Kubernetes cluster: the control plane.
We tried with @OVHcloud_ES; I had worked with it before, and I knew the control plane is entirely free, so you can use it without any worker nodes.
Then I created some CertificateSigningRequests with the specific subject the Kubelet requires:
CN: system:node:<node-name>
O: system:nodes
Excellent tutorial from @lucjuggery: https://medium.com/better-programming/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe
Just swap the tutorial's user (dave) for the subject above.
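https://medium">
Concretely, the key and CSR can be generated with openssl. This is a minimal sketch; the node name `home-worker-1` is a placeholder, not a value from our setup:

```shell
# Hypothetical node name; substitute your own.
NODE_NAME="home-worker-1"

# Generate a private key and a CSR whose subject matches what the
# Kubelet needs: O=system:nodes, CN=system:node:<node-name>
openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -out kubelet.csr \
  -subj "/O=system:nodes/CN=system:node:${NODE_NAME}"

# Inspect the subject to confirm it matches the spec above
openssl req -in kubelet.csr -noout -subject
```

The CSR is then submitted to the cluster and approved (e.g. with `kubectl certificate approve`), following the tutorial linked above.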
With the signed cert, the private key, and the admin kubeconfig, everything was ready to spin up the Kubelet.
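The launch looked roughly like this; the file paths are assumptions for illustration, and the exact flag set depends on your Kubelet version:

```shell
# Sketch of starting the Kubelet with the signed client cert and key
# obtained above. Paths are placeholders, not OVH-specific values.
kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --tls-cert-file=/var/lib/kubelet/kubelet.crt \
  --tls-private-key-file=/var/lib/kubelet/kubelet.key
```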
We found a weird behaviour: the node appears in the API server (kubectl get nodes), but it disappears after a couple of seconds.
Hit: https://github.com/kubernetes/kubernetes/issues/86094
Solution:
--feature-gates="CSIMigration=false"
Finally, the node becomes "ready", but the CNI pods didn't work correctly. We saw some requests flowing from the OVH Kubernetes control plane to the CNI pods on my instance (using the internal IP).
We tried opening the kubelet port on my home router to the public.
Setting the node-ip as a kubelet argument didn't work:
--node-ip="your-public-home-ip"
This should have enabled communication between OVH and us.
kubelet's journal:
Error updating node status, will retry: error getting node "XXX.42.YYY.217": nodes "XXX.42.YYY.217" not found