GCE – SSH Directly Into Your Docker Container

google_cloud_plattform

In this article, I am going to change the SSH port from 22 to 52241 and create a new firewall rule in the VPC. Please do your own research and make sure you understand the impact, especially in a production environment.

When you deploy a new docker container on GCE, you can SSH into the host machine (VM). This is simply because the IP address points to the host machine. But what about the docker container that runs on it? In that case, you must use docker exec or the gcloud command-line tool to get access to the container, which can become pretty time consuming.
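For reference, the usual workflow looks roughly like this; the instance, zone, and container names below are placeholders:

```
# SSH into the host VM first, then exec into the container running on it
gcloud compute ssh my-instance --zone europe-west1-b

# On the VM: find the container and open a shell in it
docker ps
docker exec -it my-container /bin/sh
```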

I am working on a project where I need direct access to my alpine docker container on port 22 or any other port. As you can see in the output below, no ports are exposed for my container.
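Roughly speaking, docker ps on the host shows something like the following for such a container; note the empty PORTS column (the ID, image, and name here are only illustrative):

```
$ docker ps
CONTAINER ID   IMAGE           COMMAND     CREATED       STATUS       PORTS   NAMES
3f2a9c1b7d4e   alpine:latest   "/bin/sh"   2 hours ago   Up 2 hours           klt-my-app-instance
```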

This is mainly because the docker container shares its network with the host. Source:

Containerized VMs launch containers with the network set to host mode. A container shares the host network stack, and all interfaces from the host are available to the container.

This is very convenient because it allows you to access your docker container on pretty much any port. However, only a few ports are actually open and configured under VPC Network > Firewall for security reasons. For the SSH connection, I decided to open a custom port, 52241. You can do this in your GCP Console under VPC Network > Firewall or with the gcloud command-line tool.
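Here is a minimal sketch of such a rule; the rule name is a placeholder, and the default network plus a wide-open source range are assumptions you should tighten to your own needs:

```
gcloud compute firewall-rules create allow-ssh-52241 \
    --network default \
    --direction INGRESS \
    --allow tcp:52241 \
    --source-ranges 0.0.0.0/0
```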

Once completed, here is how the new firewall rule looks in the console. If you scroll down, you will also see all affected instances.

GCP Custom Firewall

Before you can SSH into your container on port 52241, you must change the port in the sshd configuration (/etc/ssh/sshd_config) within your container or container image, not on the VM.
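Assuming an Alpine-based image with OpenSSH installed, the change boils down to a single directive in the container's sshd_config:

```
# /etc/ssh/sshd_config inside the container image (not on the VM)
Port 52241
```

If you build the image yourself, you can bake this in with something like RUN sed -i 's/^#*Port 22/Port 52241/' /etc/ssh/sshd_config in the Dockerfile.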

You may have to restart your ssh service before you can use the new port. Once this is done, you can SSH into your docker container.
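Assuming an Alpine container with OpenRC and a user that is allowed to log in (both assumptions on my side), that looks roughly like this; replace the IP with your VM's external address:

```
# Inside the container: restart sshd so it picks up the new port
rc-service sshd restart      # or: /etc/init.d/sshd restart

# From your workstation: connect straight to the container on the custom port
ssh -p 52241 root@EXTERNAL_IP_OF_VM
```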

AWS – ECS Keeps Recreating Docker Container

AWS ECS Logo

Today was a strange day. I launched a new version of my docker image within an ECS cluster, which usually works fine. However, today my container kept getting recreated every 5 minutes for no apparent reason.

ecs_keeps_recreating_containers

The new docker image that I had pushed to the repository had no significant changes, except a redirect from non-SSL (80) to SSL (443) for all loaded virtual hosts. When I had a look at the processes, everything looked fine; there was nothing that could have put the container into an unhealthy state.

Even the log output of the docker container didn’t show any errors. After a little bit of research, I ended up on the following AWS developer guide page: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

It was not exactly what I was looking for, but it explained the problem I had quite well. One of my colleagues then suggested that I have a look at the health checks of the load balancer.

Indeed, when I had a look at the settings under EC2 > Load Balancing > Target Groups > Health Checks, the health check on port 80 expected a success code of 200, but with the new redirect it was now receiving a 301. Because of that, the container was being marked unhealthy every 5 minutes.

aws_ec2_load_balancer_health_checks
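One way to resolve this, if you want to keep the redirect in place, is to let the target group treat the 301 as a healthy response. A sketch with the AWS CLI, where the target group ARN is only a placeholder:

```
# Accept the 301 redirect (in addition to 200) as a healthy response
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-targets/0123456789abcdef \
    --matcher '{"HttpCode": "200,301"}'
```

Alternatively, you could point the health check at port 443 over HTTPS so it never hits the redirect in the first place.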

I hope this will point someone in the right direction.