A Docker container is a portable software package that holds an application’s code, necessary dependencies, and environment settings in a lightweight, standalone, and easily runnable form.

When running an application in Docker, you might need to perform some analysis or troubleshooting to diagnose and fix errors. Rather than recreating the environment and testing it separately, it is often easier to SSH into the Docker container to check on its health.

In this walkthrough, you will explore a few ways to connect to and run shell sessions in your running Docker containers for fast and efficient debugging.

What is SSH?

SSH, which stands for Secure Shell, is a technology that lets you securely administer systems and transfer files over insecure networks. The SSH protocol uses encryption to create a secure connection between a client and server, and provides strong password and public key authentication.

To connect to a Docker container via SSH, you first need to ensure the container has an SSH server installed. This can increase the size and complexity of your container images, particularly for smaller applications such as microservices. Most microservice-focused Docker images are designed to be as lean as possible, typically exposing only a single service.

In these situations, it’s possible to remote into Docker containers using a set of lightweight commands, particularly the built-in docker exec method. However, if you specifically need an ongoing secure connection to a container, you can include the required OpenSSH server (sshd) in the image.

Below, we’ll look at both ways of connecting to containerized environments: SSHing into running Docker containers and using built-in Docker commands.

Connecting to Docker containers via SSH

To construct an Ubuntu-based Docker image ready for SSH access with basic password authentication, start by creating a Dockerfile with the following contents:

FROM ubuntu:20.04

RUN apt update && apt install -y openssh-server
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

RUN useradd -m -s /bin/bash bilbo
RUN echo "bilbo:insecure_password" | chpasswd

ENTRYPOINT service ssh start && bash

This file creates a Docker image based on Ubuntu, installs an OpenSSH server, and configures it for password authentication. If you’re using a different base image, you might need to replace Bash with whatever shell that image provides, and keep in mind that built-ins such as echo can behave differently across shells.

The file also contains a command for adding a user named bilbo whose password is set to insecure_password. Note that password authentication is insecure, so certificate authentication is the recommended method for production builds.
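For production use with key-based authentication, one approach is to bake a public key into the image. The following is only a sketch, not a hardened setup: bilbo_key is a hypothetical key pair you would generate locally first (for example, with ssh-keygen -t ed25519 -f bilbo_key), and these lines would extend the Dockerfile above:

```dockerfile
# Sketch: install a locally generated public key for the bilbo user.
COPY bilbo_key.pub /home/bilbo/.ssh/authorized_keys
RUN chown -R bilbo:bilbo /home/bilbo/.ssh && \
    chmod 700 /home/bilbo/.ssh && \
    chmod 600 /home/bilbo/.ssh/authorized_keys
```

You would then connect with ssh -i bilbo_key bilbo@&lt;the-IP-address&gt;, and could disable password logins entirely in sshd_config.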

Build the image and tag it sshd-image by running the following command from the directory containing the Dockerfile:

docker build -t sshd-image .

Then run it:

docker run -t sshd-image:latest

Next, find the container name or ID by entering the docker ps command.

Then, in a new terminal instance, find the IP address of the container using the following Docker command:

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-ID-or-name>

Now you are ready to connect to the server using the standard ssh command:

ssh bilbo@<the-IP-address>

You should now be in a shell session in the running container.

Using the Docker exec command

The Docker CLI provides some more lightweight ways to execute commands in a container remotely. These commands are great for troubleshooting or quickly peeking inside a running Docker container, especially during development and testing.

The docker exec command is the preferred tool if you need to remote into a running Docker container.

First, start up a new nginx container:

docker run --name NGINX -d nginx

Verify that the container is running by using the docker ps command.

Next, connect to this nginx container using the docker exec command:

docker exec -i -t NGINX bash

The -i flag keeps STDIN open so you can type into the session, and the -t flag allocates a pseudo-TTY; together they give you an interactive terminal. The docker exec command creates a new shell session in the container. In this case you can use Bash, but recall that some distributions ship a different shell, such as Alpine’s default, ash. You can also connect to a container by providing its ID instead of its name.
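Because not every image includes Bash, a hedged pattern is to try bash first and fall back to sh, which nearly every Linux image provides. This sketch assumes the NGINX container from above:

```shell
# If bash is missing, docker exec exits non-zero and the sh fallback runs.
docker exec -it NGINX bash 2>/dev/null || docker exec -it NGINX sh
```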

You can now interact with the container as you normally would in a remote session. For example, you can use a shell command to fetch details about the container’s operating system (OS):

cat /etc/os-release

The output should look similar to this:

PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION="11 (bullseye)"

You can also set environment variables in the same way you would in a local Bash session — but they only persist as long as the session remains active. For example, inside the container’s shell, set a variable named VAR:

export VAR=0
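Because each docker exec invocation starts a fresh process, a variable exported in one session is invisible to the next. A quick sketch you can run from the Docker host (assuming the NGINX container is still running):

```shell
# The first session exports VAR and sees it; the second, separate session does not.
docker exec NGINX bash -c 'export VAR=0; echo "first: ${VAR:-unset}"'
docker exec NGINX bash -c 'echo "second: ${VAR:-unset}"'
```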
Next, try to ping www.google.com to see if your container can connect to resources outside your network. The image you used for this container does not have the ping utility included, so go ahead and install it with the following commands:

apt-get update -y
apt-get install -y iputils-ping

If you need to install this utility in a container using a Fedora-based distribution image, use the dnf command instead:

dnf install -y iputils

Then, ping the Google search page:

ping -c 3 www.google.com

This should produce output similar to the following:

PING www.google.com (&lt;IP-address&gt;) 56(84) bytes of data.
64 bytes from &lt;hostname&gt; (&lt;IP-address&gt;): icmp_seq=1 ttl=37 time=31.5 ms
64 bytes from &lt;hostname&gt; (&lt;IP-address&gt;): icmp_seq=2 ttl=37 time=31.7 ms
64 bytes from &lt;hostname&gt; (&lt;IP-address&gt;): icmp_seq=3 ttl=37 time=33.6 ms

--- www.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 31.450/32.258/33.631/0.975 ms

You can also gather other helpful information from inside the container. For example, rather than running the docker inspect command from the host, use the simpler shell command:

hostname -i

This outputs the container’s IP address on Docker’s default bridge network.

To see all the running containers connected to the bridge, you can use the docker network command.

First, quit the container’s shell session. For a shell started with docker exec, typing exit closes the session without stopping the container. If instead you are attached to the container’s main process (for example, after docker run -it), exit would stop the container; in that case, detach with Ctrl+P followed by Ctrl+Q to keep it running.

Then, enter the following command:

docker network inspect bridge

This produces a verbose output listing each container connected to the bridge (abridged):

        "Config": [
            {
                "Subnet": "..."
            }
        ],
        ...
        "Containers": {
            "5567bb467d73efe5b0752474105ffd1d91e3d2e11d18cf4efce2ac933b3985cf": {
                "Name": "NGINX",
                "EndpointID": "1045235a9d875ce4ee5a0cfaa913747ed7dc8cf163116f2d2bd9900e73cbf6e1",
                "MacAddress": "01:52:aa:11:00:02",
                "IPv4Address": "...",
                "IPv6Address": ""
            }
        },
        ...

The docker exec command also accepts flags that adjust the new shell session it creates.

By default, docker exec runs in the container’s configured working directory. You can change this by using the -w flag and specifying a different working directory:

docker exec -it -w /usr/local NGINX bash

Or, to give the environment variable from earlier a different value in the new session, use the -e flag:

docker exec -it -e VAR=1 NGINX bash
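Note that -e sets the variable only for the session it starts; the container’s main process and other sessions are unaffected. To see the value without opening a full interactive shell, you can run a one-off command (a sketch assuming the NGINX container is still running):

```shell
# -e applies only to this exec session.
docker exec -e VAR=1 NGINX bash -c 'echo "VAR is $VAR"'
```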

For now, exit your shell session using either the exit command or Ctrl+P then Ctrl+Q.

Using the Docker run command

The docker exec command is useful for connecting to already running containers, but you can also instruct Docker to start a container and open a shell session right away by using the -it flag in a docker run command. To get the most out of this feature-packed command, refer to the docker run documentation.

For example, you can start a container with its TCP port 80 mapped to the Docker host’s port 8080. By default, Docker containers created using docker create or docker run do not publish their ports outside of Docker. So, to make this container accessible outside its own network, you can map its ports to those on the Docker host:

docker run --name NGINX-2 -it -p 8080:80 nginx bash

After making any changes you want to make in the container using the shell, exit the shell session using Ctrl+P then Ctrl+Q, then inspect the container’s port mapping with the following command:

docker port NGINX-2

The output displays port 80 on the container as mapped to the host’s port 8080:

80/tcp -> 0.0.0.0:8080

Keep in mind that because bash replaced the image’s default command in this container, nginx itself is not running, so nothing will answer on the mapped port until you start it inside the shell.

Now, stop the running nginx container:

docker stop NGINX-2

After the container stops, try the docker port NGINX-2 command again. With the container stopped, the mapping is gone and the command returns no output.

Debugging a failed CI/CD build with SSH access to a container

When a service or a job fails within a container, it is sometimes difficult to discern what has gone wrong. In the context of a CI/CD pipeline, not being able to determine the cause of build failures can be a source of significant frustration and delays. When build failures occur, you need to recover quickly to ensure a smooth development and deployment process.

Fortunately, using what you have learned in this article, you can connect to a misbehaving container to understand what happened. Keep in mind, though, that not all CI/CD providers support debugging with SSH, and the availability of SSH-based debugging depends on various factors, including the provider’s architecture, security policies, and the specific services they offer.

In general, the commands you will need depend on the task the container is performing. However, there are some commands that should help regardless of the job.

To debug low-level issues, you can try one of two commands: dmesg, which prints kernel messages (containers share the host’s kernel), or journalctl, which reads the systemd journal but only works if the container actually runs systemd — most minimal images do not.

If the build scripts depend on environment variables, you may also want to confirm whether they are set correctly with printenv. Optionally, you can filter using grep:

printenv | grep DOCKER_LOGIN
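Because grep exits non-zero when it finds nothing, the same pattern works as a guard in build scripts. DOCKER_LOGIN is just an example name; substitute whatever variables your scripts rely on:

```shell
# Warn early if a required variable is missing from the environment.
if ! printenv | grep -q '^DOCKER_LOGIN='; then
  echo "DOCKER_LOGIN is not set" >&2
fi
```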

Instead of outputting logs into a centralized logging location like the OS journal, some scripts will output them directly to a file. Typically, you’ll find these files in the home directory of the user running that script. However, they may also live in a central directory like /var/log.

To read these files, you can use less:

less ~/very_important_log

Alternatively, you can use cat:

cat /var/log/httpd.log
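For long or still-growing logs, tail is often more convenient than cat. Here is a self-contained sketch using a throwaway file in place of a real log like the hypothetical /var/log/httpd.log above:

```shell
# Create a small stand-in log file.
printf '%s\n' line1 line2 line3 line4 line5 > /tmp/demo.log

# Show only the last three lines; on a live log, `tail -f` would follow
# new entries as the failing job writes them.
tail -n 3 /tmp/demo.log
```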


Connecting to Docker containers remotely is an essential skill for engineers troubleshooting their containerized applications. Executing commands on running containers provides a more accurate picture of what went wrong in your containerized environments, including those used in a CI/CD pipeline. It is easier to inspect the exact circumstances in which a job failed, rather than recreating the environment on your local machine and debugging from there.

A continuous integration platform like CircleCI gives you the option to automate allocating and connecting to the right execution environment for each job so that you can quickly troubleshoot problems. CircleCI provides integrated SSH access, which is critical during troubleshooting complex CI/CD pipelines — especially as CircleCI supports several execution environments, including Docker.

To learn more about how you can take advantage of CircleCI’s SSH debugging capabilities to quickly recover from failures and get development back on track, sign up for a free account and follow our tutorial on Debugging CI/CD pipelines with SSH access.