I ran into a weird issue while setting up a docker image:
- the image was based on `alpine` and has `kubectl` and `awscli` installed
- both the AWS credentials (`~/.aws/credentials`) and the kube config (`~/.kube/config`) are mounted into the container
- the docker command looks like this:

```bash
docker run -it --rm -v ${HOME}/.aws:/root/.aws -v ${HOME}/.kube:/root/.kube <my_image> /bin/bash
```

- I was connected to the VPN, and the k8s cluster is in a VPC that is reachable only via VPN.
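As a quick sanity check that the image and the mounts were wired up correctly, something like this can be run first (the image name is the same placeholder as above; `aws --version` and `kubectl version --client` only exercise the binaries, not the network):

```bash
# Verify the tools and the mounted credentials inside the container
docker run -it --rm \
  -v ${HOME}/.aws:/root/.aws \
  -v ${HOME}/.kube:/root/.kube \
  <my_image> /bin/sh -c 'aws --version && kubectl version --client && ls /root/.aws /root/.kube'
```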
What I observed on the host (my laptop):

- both `aws` and `kubectl` calls are successful

while in the container:

- `aws` calls are successful, e.g. `aws s3 ls s3://my-bucket`
- `kubectl` calls consistently time out, e.g. `kubectl get pod -A`
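To make the symptom concrete, this is roughly what it looked like inside the container (the bucket name comes from the example above; the timeout message is illustrative, and the exact wording depends on the kubectl version):

```bash
# S3 access works fine from inside the container
aws s3 ls s3://my-bucket

# kubectl hangs and eventually gives up with something like:
kubectl get pod -A
# Unable to connect to the server: dial tcp <private_ip>:443: i/o timeout
```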
Adding `--v=8` to the `kubectl` call (on both host and container) confirmed that the right API endpoint is used in both cases.
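A sketch of that check; at `--v=8`, kubectl logs every HTTP request it issues, so the target endpoint is easy to spot (the log line below is illustrative):

```bash
# Verbose client logging prints the API server URL kubectl actually hits
kubectl get pod -A --v=8 2>&1 | grep 'GET https'
# ... round_trippers.go:...] GET https://<private_ip>:443/api/v1/pods?limit=500
```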
Tried `nslookup` on both host and container; the private IPs are resolved correctly in both.
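Roughly (the API server hostname and the resolved address are placeholders):

```bash
# Run on both the host and in the container; both resolved the same private IP
nslookup <k8s_api_hostname>
# Name:    <k8s_api_hostname>
# Address: <private_ip>
```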
Tried `telnet <private_ip> 443` on both host and container, with different results:

- the host was able to establish a connection
- the container got stuck establishing a connection
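The probe itself, for reference (on recent alpine images, `telnet` ships in the `busybox-extras` package, so it may need to be installed first):

```bash
# In the container only: install telnet if it is missing
apk add --no-cache busybox-extras

# On the host this connects immediately; in the container it hangs on "Trying ..."
telnet <private_ip> 443
```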
Started `tcpdump` in a different shell (`sudo tcpdump -nnA -s 0 "host <private_ip>"`) on both host and container to watch the traffic while the command (either `telnet` or `kubectl`) was executed in the container:

- `tcpdump` in the container showed traffic
- `tcpdump` on the host did not show any traffic
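Concretely, the capture setup (tcpdump has to be installed in the alpine container first; `<private_ip>` as before):

```bash
# Shell 1, inside the container: install tcpdump and start capturing
apk add --no-cache tcpdump
tcpdump -nnA -s 0 "host <private_ip>"

# Shell 2, on the host: same filter
sudo tcpdump -nnA -s 0 "host <private_ip>"

# Shell 3, inside the container: generate the traffic
telnet <private_ip> 443
```

Seeing the packets leave the container but never show up on the host's interfaces pointed at Docker's own networking layer rather than at the VPN.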
Checked the route table in the container (`netstat -nr`):

```
bash-4.4# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG        0 0          0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 eth0
```
The private IPs that my k8s cluster resolves to do not fall into the 172.17.0.0/16 range, so there should not be any collision.
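One thing the container's route table does not show is the subnets of other Docker networks. A quick way to list them all (a sketch; the Go template just extracts each network's IPAM subnets):

```bash
# Print every docker network together with its configured subnet(s)
docker network ls -q | xargs -n1 docker network inspect \
  -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
```

Networks left behind by earlier containers or compose projects can claim subnets beyond the default bridge, and those can silently collide with VPN-routed ranges.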
At this point, the `docker run` options seem correct and the network configuration looks fine too.
After a bit more research, I found out this is a known issue for docker on Mac OS X, and the solution is as simple as `docker network prune`.
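For completeness (note that `docker network prune` only removes networks with no containers attached, so stop any stale containers first):

```bash
# Remove all unused docker networks (prompts for confirmation)
docker network prune

# Then re-run the container and retry kubectl
docker run -it --rm -v ${HOME}/.aws:/root/.aws -v ${HOME}/.kube:/root/.kube <my_image> /bin/bash
```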
I hope this is helpful to anyone else who runs into this issue.