
Containers part 2: The functionality cannot be Contained!

Last updated on 13 February 2025

If you haven’t read part 1 of this blog, click the button on the left to check it out and learn about what containers are and the benefits they bring to the development cycle!

Part 2:

In my previous post, I outlined what a container is, how it compares to other technologies like virtual machines, and some solutions that let you create containers of your own. In this post, I am going to take you through an installation guide to get Docker up and running on your device and showcase some other containers and their functions.

Docker was chosen because it is more bare-bones than the other two solutions previously discussed (Azure and Kubernetes). Both are good options for container development, but they function more as suites of tools for collaboration and development. Since the focus of this demonstration is on containers themselves, exploring the different tools within Azure or Kubernetes would be outside its scope. They can, however, be demonstrated in a future blog post.

This guide will take place on a Linux VM. If you prefer to follow along with a video, the one below mirrors the content of this post and guides you through installation and general use of Docker.


Docker Setup Guide

The first step to installing Docker on a Linux distro is to open your terminal and enter the command sudo apt update to refresh your local package lists so the newest package versions are available. This is good practice in the vast majority of cases when using a Linux machine.

Next, enter curl -fsSL https://get.docker.com/ | sh to download and run Docker's install script. If curl is not installed, sudo apt install curl will install it first. Once you see a screen that looks like the one below, you have installed Docker onto your device.
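Put together, the install steps so far look like this (a minimal sketch; the get.docker.com script adds Docker's repository and installs the right packages for your distro, and its output will vary):

  sudo apt update                          # refresh local package lists
  sudo apt install curl                    # only needed if curl is not already installed
  curl -fsSL https://get.docker.com/ | sh  # download and run Docker's install script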

Now that Docker has been successfully installed on your device, double-check that it is currently running by entering sudo systemctl status docker. If it is not started, you can start it with sudo systemctl start docker, and allow it to run on boot with sudo systemctl enable docker.
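For reference, those service checks are (assuming a systemd-based distro, which covers most modern ones):

  sudo systemctl status docker   # check whether the Docker daemon is running
  sudo systemctl start docker    # start it now if it is not
  sudo systemctl enable docker   # have it start automatically on boot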

Every Docker command begins with sudo docker, followed by an action. Some basic commands that can be used within Docker are:

  • sudo docker image ls – list local Docker images
  • sudo docker ps – list all currently running containers
  • sudo docker ps -a – list all containers, including stopped ones
  • sudo docker search [term] – search Docker Hub for images matching [term]
  • sudo docker pull [image name] – download the specified image (from Docker Hub or another registry)
  • sudo docker run [image name] – create and start a container from the selected image

The first command we will use is sudo docker image ls to see the installed images. We do not have any currently, so let’s download one and run it! The first container we will download is a Hello-World container. Because we know the full name of the image, we can run it directly with the command sudo docker run hello-world. If the run command is used and the specified image is not available locally, Docker will first pull the image and then run it. Once it has run, enter sudo docker image ls again and you should see it listed there. Hello-World is a basic test container that confirms whether Docker is installed and set up properly; beyond that, its use is very limited.
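Step by step, that sequence looks like this (the exact message hello-world prints may vary slightly between versions):

  sudo docker image ls          # empty list on a fresh install
  sudo docker run hello-world   # pulls the image, runs it, and prints a short welcome message
  sudo docker image ls          # hello-world now appears in the list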


Docker Basics

To get additional function out of a container, next we will download a CentOS container and access a shell from within it. You can find the main CentOS image by first searching for it with sudo docker search centos, locating the official version (usually at the very top), and downloading it with sudo docker pull centos. If you were to run this image as is, the container would boot and then promptly shut down, because its default command finishes immediately. To keep the container running, we will use switches to modify how it works. The command sudo docker container run -it centos /bin/bash will run the CentOS container interactively, meaning we can perform tasks within it, and request the bash terminal program on boot. You can see some of these operations in the screenshot below:
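As a quick recap, the CentOS steps from this section are (the pull step is optional, since run will pull a missing image automatically):

  sudo docker search centos                         # find the official image, usually listed first
  sudo docker pull centos                           # download it from Docker Hub
  sudo docker container run -it centos /bin/bash    # start it interactively with a bash shell
  # you are now inside the container; try a command, then type exit to stop it
  cat /etc/os-release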

You can name containers by using the --name switch followed by a string. Entering the lengthy command sudo docker run -it --name [name] centos /bin/bash will run the CentOS container, name it with the string in the [name] block, allow interaction, and present us with the bash terminal window. If you want to leave your container running but exit out of it, press ctrl-p followed by ctrl-q to detach and leave it in a running state. If you want back in, running the command sudo docker attach [container id] will let you back in.
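Here is that workflow end to end, using a hypothetical container name of mytest (attach also accepts the container ID):

  sudo docker run -it --name mytest centos /bin/bash   # start a named, interactive CentOS container
  # ... work inside the container, then press ctrl-p followed by ctrl-q to detach ...
  sudo docker ps                                        # the container still shows as running
  sudo docker attach mytest                             # re-enter the running container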

Let’s say you have this container running and have made other modifications to it. These modifications can be anything from installing an FTP server role, having it perform another specialized role, or really anything else you can think of adding to it. After working inside a container, you can save your progress and run new containers from it. When you shut down your container, you can save a copy of its current state as a new image with the command sudo docker commit [container name] [new image name], which creates the image from which you can spin up new containers.
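A minimal sketch of that save-and-reuse cycle, with hypothetical names (ftp-build for the container and my-ftp-image for the new image):

  # after customizing the container, detach from or stop it, then:
  sudo docker commit ftp-build my-ftp-image      # save the container's current state as a new image
  sudo docker image ls                           # my-ftp-image now appears in the local image list
  sudo docker run -it my-ftp-image /bin/bash     # spin up a fresh container from the saved image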

Once you have a grasp of basic Docker use, you can take on some other use cases. If you’re looking for some fun things to try, check out the following video where I use containers created by others to do things like flash random information on screen to make you look like a movie computer hacker, and even run a version of macOS inside of a container!


Final Words

This ability to create new images and share them with others is the fundamental benefit of using containers over other forms of development environments. Images are often small in size, so sharing them is quick and easy; the development environments are entirely self-contained; and their emulation capabilities let you develop programs for platforms other than your native OS. There are containers available that are set up and ready to go for things such as Android and iOS application development, networking peripherals such as firewalls or IDS devices, specific-function devices such as malware test environments, and so on. The ability to easily share containers and access ones shared by others through Docker Hub is an underestimated benefit of using containers. Even without additional support or tools, it is easy to begin development of an application within a Docker container, share it with the community and/or a coworker, and allow them to develop it further. When you add other platforms into the mix, like Azure or Kubernetes, the collaboration environment becomes even more vast and efficient.

I hope this has been helpful in dipping your toes into Docker and container development! Thank you for reading!


External resources used:

[1] jturpin. “jturpin/hollywood – Docker image | Docker Hub.” Docker Hub. https://hub.docker.com/r/jturpin/hollywood (accessed 07-24-22).

[2] sickcodes. “GitHub – sickcodes/Docker-OSX.” GitHub. https://github.com/sickcodes/Docker-OSX (accessed 07-25-22).
