Kubernetes: Unable to ping a Service Name from Inside a Pod

Cause: kube-proxy's default proxy mode is iptables, which forwards traffic by rewriting IP address + port (DNAT), so it only handles TCP and UDP and cannot deliver ICMP to a Service's ClusterIP. Fix: switch the Service proxy mode to ipvs.
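Only ICMP is affected in iptables mode; TCP/UDP to the Service still works. Before switching modes you can confirm this from inside a Pod (the Pod and Service names here are the examples used later in this post, and the Service is assumed to expose port 80):

```shell
# ping fails in iptables mode, but a TCP request to the Service port still works
kubectl exec -it busybox-deploy-5888dbcf86-6fxdw -- wget -qO- -T 3 http://my-service/
```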

Enable kernel support

$ modprobe br_netfilter    # the net.bridge.* keys below require this module
$ cat >> /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

$ sysctl --system          # `sysctl -p` alone only reads /etc/sysctl.conf, not /etc/sysctl.d/
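To confirm the settings are active, query them back (each should print 1; the `net.bridge.*` keys only exist while the `br_netfilter` module is loaded):

```shell
# Each of these should print 1
sysctl -n net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```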

Enable IPVS support

Ubuntu

# Ubuntu already loads the IPVS kernel modules by default
$ lsmod | grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  9
ip_vs                 151552  15 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_defrag_ipv6         20480  2 nf_conntrack_ipv6,ip_vs
nf_conntrack          135168  13 xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,ip6t_MASQUERADE,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,nf_nat_masquerade_ipv6,ip_vs
libcrc32c              16384  4 nf_conntrack,nf_nat,raid456,ip_vs

# Only the ipvsadm and ipset tools need to be installed
$ apt -y install ipvsadm ipset

CentOS

# CentOS kernels are generally older and do not load the IPVS modules by default
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4   # on kernels >= 4.19 this module is named nf_conntrack
EOF

# The file is only created above -- make it executable and run it once now
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Install the ipvsadm and ipset tools
$ yum -y install ipvsadm ipset
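On systemd-based CentOS releases, `/etc/modules-load.d/` is an alternative way to make these modules load on every boot (a sketch, using the same module names as above):

```shell
# systemd-modules-load reads /etc/modules-load.d/*.conf at boot
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load
```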

Edit the kube-proxy ConfigMap

# Edit on the master node
$ kubectl edit cm kube-proxy -n kube-system
apiVersion: v1
kind: ConfigMap
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"                  # change from the default "" to "ipvs"
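`kubectl edit` opens an interactive editor; the same change can be scripted instead (a sketch -- it assumes the ConfigMap still contains the default `mode: ""` line):

```shell
# Dump the ConfigMap, switch the proxy mode, and apply it back
kubectl -n kube-system get cm kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -
```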

Restart kube-proxy (run from the master; the DaemonSet recreates the deleted Pods)

$ kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system

Verify that IPVS is enabled

$ kubectl logs -f kube-proxy-97mkj -n kube-system
I0112 13:31:53.914132       1 node.go:172] Successfully retrieved node IP: 10.0.20.7
I0112 13:31:53.914379       1 server_others.go:140] Detected node IP 10.0.20.7
I0112 13:31:54.179617       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0112 13:31:54.179795       1 server_others.go:274] Using ipvs Proxier.
I0112 13:31:54.179859       1 server_others.go:276] creating dualStackProxier for ipvs.
W0112 13:31:54.179909       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0112 13:31:54.214403       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
I0112 13:31:54.230203       1 proxier.go:440] "IPVS scheduler not specified, use rr by default"
W0112 13:31:54.230317       1 ipset.go:113] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W0112 13:31:54.230388       1 ipset.go:113] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I0112 13:31:54.230579       1 server.go:649] Version: v1.22.3
I0112 13:31:54.231785       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0112 13:31:54.233834       1 config.go:315] Starting service config controller
I0112 13:31:54.233918       1 shared_informer.go:240] Waiting for caches to sync for service config
I0112 13:31:54.233987       1 config.go:224] Starting endpoint slice config controller
I0112 13:31:54.234055       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0112 13:31:54.335004       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0112 13:31:54.336475       1 shared_informer.go:247] Caches are synced for service config 

# View the IPVS rules
$ sudo ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  master:30001 rr
  -> 172.20.2.11:http             Masq    1      0          0         
  -> 172.20.2.12:http             Masq    1      0          0         
TCP  master:30001 rr
  -> 172.20.2.11:http             Masq    1      0          0         
  -> 172.20.2.12:http             Masq    1      0          0         
TCP  master:30001 rr
  -> 172.20.2.11:http             Masq    1      0          0         
  -> 172.20.2.12:http             Masq    1      0          0    

Enter the Pod and verify connectivity

$ kubectl exec -it busybox-deploy-5888dbcf86-6fxdw  -- sh
/ $ ping my-service
PING my-service (192.168.103.87): 56 data bytes
64 bytes from 192.168.103.87: seq=0 ttl=64 time=0.024 ms
64 bytes from 192.168.103.87: seq=1 ttl=64 time=0.080 ms
64 bytes from 192.168.103.87: seq=2 ttl=64 time=0.077 ms
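The reason ping now succeeds: in IPVS mode, kube-proxy binds every Service ClusterIP to a dummy interface (`kube-ipvs0`) on each node, so the kernel itself answers ICMP echo requests to the VIP. This can be seen on any node (192.168.103.87 is the ClusterIP from the ping output above):

```shell
# Service ClusterIPs are assigned to the kube-ipvs0 dummy interface
ip -4 addr show dev kube-ipvs0 | grep 192.168.103.87
```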

Source: http://www.qiqios.cn/2022/01/12/kubernetes-在-pod-内无法-ping-通-service-name/
Author: 一亩三分地 · Published: January 12, 2022