Networking
1 - Adding entries to Pod /etc/hosts with HostAliases
Adding entries to a Pod's /etc/hosts file provides Pod-level overrides of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.
Modifying the file by means other than HostAliases is not recommended, because the kubelet manages the file and can overwrite it during Pod creation or restart.
Default hosts file content
Start an Nginx Pod which is assigned a Pod IP:
kubectl run nginx --image nginx
pod/nginx created
Examine the Pod's IP address:
kubectl get pods --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 13s 10.200.0.4 worker0
The hosts file content would look like this:
kubectl exec nginx -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.200.0.4 nginx
By default, the hosts file only includes IPv4 and IPv6 boilerplate like localhost and its own hostname.
Adding additional entries with hostAliases
In addition to the default boilerplate, you can add additional entries to the hosts file. For example: to resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    command:
    - cat
    args:
    - "/etc/hosts"
You can start a Pod with that configuration by running:
kubectl apply -f https://k8s.io/examples/service/networking/hostaliases-pod.yaml
pod/hostaliases-pod created
Examine a Pod's details to see its IPv4 address and its status:
kubectl get pod --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
hostaliases-pod 0/1 Completed 0 6s 10.200.0.5 worker0
The hosts file content looks like this:
kubectl logs hostaliases-pod
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.200.0.5 hostaliases-pod
# Entries added by HostAliases.
127.0.0.1 foo.local bar.local
10.1.2.3 foo.remote bar.remote
with the additional entries specified at the bottom.
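If you want to go a step further and confirm that the aliases resolve inside a running container, a variant of the Pod above that stays running lets you exec into it and test name resolution. This is a sketch, not part of the original example: the Pod name hostaliases-check and the sleep command are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-check
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
  containers:
  - name: sleeper
    image: busybox:1.28
    # Keep the container alive so kubectl exec can reach it.
    command: ["sleep", "3600"]

kubectl exec hostaliases-check -- ping -c 1 foo.local

Because ping resolves the name through the container's /etc/hosts, a successful reply confirms the alias is in effect.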
Why does the kubelet manage the hosts file?
The kubelet manages the hosts file for each container of the Pod to prevent the container runtime from modifying the file after the containers have already been started.
Historically, Kubernetes always used Docker Engine as its container runtime, and Docker Engine would then modify the /etc/hosts file after each container had started.
Current Kubernetes can use a variety of container runtimes; even so, the kubelet manages the hosts file within each container so that the outcome is as intended regardless of which container runtime you use.
Avoid making manual changes to the hosts file inside a container.
If you make manual changes to the hosts file, those changes are lost when the container exits.
2 - Validate IPv4/IPv6 dual-stack
This document describes how to validate IPv4/IPv6 dual-stack enabled Kubernetes clusters.
Before you begin
- Provider support for dual-stack networking (your cloud provider or other infrastructure must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
- A network plugin that supports dual-stack (such as Calico, Cilium or Kubenet)
- Dual-stack enabled cluster
To check the version, enter kubectl version.
Validate addressing
Validate node addressing
Each dual-stack Node should have a single IPv4 block and a single IPv6 block allocated. Validate that IPv4/IPv6 Pod address ranges are configured by running the following command. Replace the sample node name with a valid dual-stack Node from your cluster. In this example, the Node's name is k8s-linuxpool1-34450317-0:
kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}'
10.244.1.0/24
a00:100::/24
There should be one IPv4 block and one IPv6 block allocated.
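To run the same check across every node at once, a custom-columns query can help (a sketch; the column naming is arbitrary and output formatting may vary by kubectl version):

kubectl get nodes -o custom-columns='NAME:.metadata.name,PODCIDRS:.spec.podCIDRs[*]'

Each dual-stack node should show both an IPv4 and an IPv6 CIDR in the PODCIDRS column.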
Validate that the node has an IPv4 and an IPv6 interface detected. Replace the node name with a valid node from the cluster. In this example the node name is k8s-linuxpool1-34450317-0:
kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s\n" .type .address}}{{end}}'
Hostname: k8s-linuxpool1-34450317-0
InternalIP: 10.240.0.5
InternalIP: 2001:1234:5678:9abc::5
Validate Pod addressing
Validate that a Pod has an IPv4 and IPv6 address assigned. Replace the Pod name with a valid Pod in your cluster. In this example the Pod name is pod01:
kubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf "%s\n" .ip}}{{end}}'
10.244.1.4
a00:100::4
You can also validate Pod IPs using the Downward API via the status.podIPs fieldPath. The following snippet demonstrates how you can expose the Pod IPs via an environment variable called MY_POD_IPS within a container.
env:
- name: MY_POD_IPS
  valueFrom:
    fieldRef:
      fieldPath: status.podIPs
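The snippet above is not a complete manifest on its own. For context, a minimal Pod that uses it might look like this (the Pod name dual-stack-env-demo is illustrative, not part of the original example):

apiVersion: v1
kind: Pod
metadata:
  name: dual-stack-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    # Print the injected variable and exit.
    command: ["sh", "-c", "echo $MY_POD_IPS"]
    env:
    - name: MY_POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs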
The following command prints the value of the MY_POD_IPS environment variable from within a container. The value is a comma-separated list that corresponds to the Pod's IPv4 and IPv6 addresses.
kubectl exec -it pod01 -- set | grep MY_POD_IPS
MY_POD_IPS=10.244.1.4,a00:100::4
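If a process inside the container needs the two addresses separately, it can split the variable on the comma. For example, in a POSIX shell (a sketch):

kubectl exec pod01 -- sh -c 'IFS=,; for ip in $MY_POD_IPS; do echo "$ip"; done'
10.244.1.4
a00:100::4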
The Pod's IP addresses are also written to /etc/hosts within a container. The following command runs cat on /etc/hosts in a dual-stack Pod. From the output you can verify both the IPv4 and the IPv6 address of the Pod.
kubectl exec -it pod01 -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.4 pod01
a00:100::4 pod01
Validate Services
Create the following Service that does not explicitly define .spec.ipFamilyPolicy. Kubernetes will assign a cluster IP for the Service from the first configured service-cluster-ip-range and set the .spec.ipFamilyPolicy to SingleStack.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: MyApp
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
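If you saved the manifest above to a file, you can create the Service before inspecting it; the filename my-service.yaml here is an assumption:

kubectl apply -f my-service.yaml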
Use kubectl to view the YAML for the Service.
kubectl get svc my-service -o yaml
The Service has .spec.ipFamilyPolicy set to SingleStack and .spec.clusterIP set to an IPv4 address from the first range configured via the --service-cluster-ip-range flag on kube-controller-manager.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: 10.0.217.164
  clusterIPs:
  - 10.0.217.164
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: MyApp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Create the following Service that explicitly defines IPv6 as the first array element in .spec.ipFamilies. Kubernetes will assign a cluster IP for the Service from the configured IPv6 service-cluster-ip-range and set the .spec.ipFamilyPolicy to SingleStack.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: MyApp
spec:
  ipFamilies:
  - IPv6
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
Use kubectl to view the YAML for the Service.
kubectl get svc my-service -o yaml
The Service has .spec.ipFamilyPolicy set to SingleStack and .spec.clusterIP set to an IPv6 address from the IPv6 range configured via the --service-cluster-ip-range flag on kube-controller-manager.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: MyApp
  name: my-service
spec:
  clusterIP: fd00::5118
  clusterIPs:
  - fd00::5118
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: MyApp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Create the following Service that explicitly defines PreferDualStack in .spec.ipFamilyPolicy. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select the .spec.clusterIP from the list of .spec.clusterIPs based on the address family of the first element in the .spec.ipFamilies array.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
The kubectl get svc command will only show the primary IP in the CLUSTER-IP field.
kubectl get svc -l app=MyApp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service ClusterIP 10.0.216.242 <none> 80/TCP 5s
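To see every assigned cluster IP rather than only the primary one, you can query the clusterIPs field directly (a jsonpath sketch):

kubectl get svc my-service -o jsonpath='{.spec.clusterIPs}'

For the Service above this prints both the IPv4 and IPv6 addresses, for example ["10.0.216.242","fd00::af55"].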
Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using kubectl describe. You may then validate access to the service via the IPs and ports.
kubectl describe svc -l app=MyApp
Name: my-service
Namespace: default
Labels: app=MyApp
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP Family Policy: PreferDualStack
IP Families: IPv4,IPv6
IP: 10.0.216.242
IPs: 10.0.216.242,fd00::af55
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
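The describe output above shows Endpoints: <none>, so this Service has no backends yet. Once Pods matching app: MyApp exist and listen on the target port, one way to exercise the Service from inside the cluster is a throwaway client Pod (a sketch; the Pod name test-client is illustrative):

kubectl run -it --rm test-client --image=busybox:1.28 --restart=Never -- wget -qO- http://10.0.216.242:80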
Create a dual-stack load balanced Service
If the cloud provider supports the provisioning of IPv6-enabled external load balancers, create the following Service with PreferDualStack in .spec.ipFamilyPolicy, IPv6 as the first element of the .spec.ipFamilies array, and the type field set to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
Check the Service:
kubectl get svc -l app=MyApp
Validate that the Service receives a CLUSTER-IP address from the IPv6 address block along with an EXTERNAL-IP. You may then validate access to the service via the IP and port.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer fd00::7ebc 2603:1030:805::5 80:30790/TCP 35s
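From a machine with IPv6 connectivity to the load balancer, you can then test the external address, for example with curl (a sketch; -g stops curl from treating the brackets as a glob pattern and -6 forces IPv6):

curl -g -6 'http://[2603:1030:805::5]:80/'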