Docker commands links (50 commands)
https://www.fosstechnix.com/docker-basic-commands/
https://blog.devops.dev/docker-interview-questions-and-answers-for-every-solution-architect-devops-engineer-sdet-8435dd2ab147
https://github.com/collabnix/dockerlabs/blob/master/docker/docker-interview-questions.md (very important)
Scenario based interview questions
Basic Interview Questions of Docker & Dockerfile:-
1. How will you run multiple Docker containers in one single host?
Answer: Docker Compose is the best way to run multiple containers as a single service by defining them in a docker-compose.yml file.
2. If you delete a running container, what happens to the data stored in that container?
Answer: When a running container is deleted, all data in its file system also goes away. However, we can use Docker Data Volumes to persist data even if the container is deleted.
3. How do you manage sensitive security data like passwords in Docker?
Answer: Docker Secrets and Docker Environment Variables can be used to manage sensitive data.
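For illustration, a hedged sketch using Docker Secrets (this requires swarm mode; the secret, service, and image names below are hypothetical):
$ echo "s3cretValue" | docker secret create db_password -
$ docker service create --name myapp --secret db_password myimage:latest
Inside the service's containers, the secret is then available as the file /run/secrets/db_password rather than as an environment variable.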
4. What is the difference between Docker Image and a Docker Container?
Answer: Docker Image is a template that contains the application, libraries, and dependencies required to run an application, whereas a Docker Container is the running instance of a Docker Image.
5. How do you handle persistent storage in Docker?
Answer: Docker Volumes and Docker Bind Mounts are used to handle persistent storage in Docker.
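As a quick sketch (the volume, path, and image names are illustrative):
$ docker volume create app-data
$ docker run -d -v app-data:/var/lib/mysql mysql:8.0                # named volume managed by Docker
$ docker run -d -v /srv/app/config:/etc/app/config myimage:latest   # bind mount from the host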
6. What is the process to create a Docker Container from a Dockerfile?
Answer: Docker Build command is used to create Docker images from a Dockerfile and then Docker Run command is used to create Containers from Docker images.
7. How will you scale Docker containers based on traffic to your application?
Answer: Docker Swarm or Kubernetes can be used to auto-scale Docker Containers based on traffic load.
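For example, with Docker Swarm (the service name and replica count are illustrative):
$ docker service scale web=5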
8. When are the RUN and CMD instructions executed?
Answer: RUN instruction will be executed while building the Docker Image. CMD instruction will be executed while starting the Container.
9. What’s the difference between the COPY and ADD instructions?
Answer: The COPY instruction copies local files and folders from the Docker build context into the Docker image. These files and folders are copied while the image is being built.
The ADD instruction works like COPY, with one difference: it can also download files from remote locations on the Internet while creating the Docker image.
10. What’s the difference between the CMD and ENTRYPOINT instructions?
Answer: The CMD instruction is used to start the default process or application inside the container.
The ENTRYPOINT instruction works similarly and is also executed when the container starts. The difference is that CMD can be overridden by arguments passed when creating a container, whereas ENTRYPOINT cannot be overridden that way.
11. What happens when we have both CMD and ENTRYPOINT instructions in a Dockerfile?
Answer: The ENTRYPOINT instruction is executed, and the CMD value is passed to it as its default arguments.
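A minimal sketch that demonstrates this behavior (the image tag demo is hypothetical):
FROM alpine:3.19
ENTRYPOINT ["echo"]
CMD ["Hello from CMD"]
Running docker run demo prints "Hello from CMD", while docker run demo Hi overrides only the CMD arguments and prints "Hi".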
--------------------------------------------------------------------------------
Last simple question: describe a Docker container’s lifecycle.
Although there are several different ways of describing the steps in a Docker container’s lifecycle, the following is the most common:
Create container
Run container
Pause container
Unpause container
Start container
Stop container
Restart container
Kill container
Destroy container
The most critical Docker commands are:
Build. Builds a Docker image file
Commit. Creates a new image from container changes
Create. Creates a new container
Dockerd. Launches Docker daemon
Kill. Kills a container
What are Docker object labels?
Labels are the mechanism for applying metadata to Docker objects such as containers, images, local daemons, networks, volumes, and nodes.
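For example (the env=staging label is illustrative), labels can be attached at run time and then used for filtering:
$ docker run -d --label env=staging nginx
$ docker ps --filter "label=env=staging"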
How do you find stored Docker volumes?
Docker volumes are stored on the host under /var/lib/docker/volumes. You can list them with: $ docker volume ls
How do you check the versions of Docker Client and Server?
This command gives you all the information you need: $ docker version
Show how you would create a container from an image.
To create a container, you pull an image from the Docker repository and run it using the following command: $ docker run -it -d <image_name>
How about a command to stop the container?
Use the following command: $ sudo docker stop <container_id_or_name>
How would you list all of the containers currently running?
Use the command: $ docker ps
List some of the more advanced Docker commands and what they do.
Some advanced commands include:
Docker info. Displays system-wide information regarding the Docker installation
Docker pull. Downloads an image
Docker stats. Provides you with container information
Docker images. Lists downloaded images
Can you lose data stored in a container?
Data stored in a container’s writable layer remains there until you delete the container; once the container is deleted, that data is lost unless it was stored in a volume.
Can a container restart on its own?
Not by default: the default restart policy (--restart) is "no", so a container will not restart by itself unless you configure a policy such as always, unless-stopped, or on-failure.
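For example, to opt in to automatic restarts:
$ docker run -d --restart unless-stopped nginx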
What are the important features of Docker?
Here are the essential features of Docker:
Easy Modeling
Version control
Placement/Affinity
Application Agility
Developer Productivity
Operational Efficiencies
What are the main drawbacks of Docker?
Some notable drawbacks of Docker are:
Provides no built-in storage solution
Offers limited monitoring options
No automatic rescheduling of inactive nodes
Automatic horizontal scaling is complicated to set up
What is Docker Engine?
The Docker daemon, or Docker Engine, is the server-side component. The daemon and its clients can run on the same host or on remote hosts, and they communicate through the command-line client binary and a full RESTful API.
What is Docker image?
A Docker image is the template used to create Docker containers. You create a Docker image with the build command; running the image creates a container. Docker images are stored in a Docker registry.
Explain Registries
There are two types of registries:
Public Registry
Private Registry
What command should you run to see all running containers in Docker?
$ docker ps
Write the command to stop the docker container
$ sudo docker stop <container_name>
What is the command to run the image as a container?
$ sudo docker run -i -t alpine /bin/bash
What are the states of a Docker container?
Important states of Docker container are:
Running
Paused
Restarting
Exited
How can you monitor the docker in production environments?
The docker stats and docker events commands are used to monitor Docker in production environments.
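For example:
$ docker stats                # live CPU, memory, network and I/O usage per container
$ docker events --since 30m   # stream daemon events from the last 30 minutes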
What is Docker hub?
Docker Hub is a cloud-based registry that helps you link to code repositories. It allows you to build, test, and store your images in the Docker cloud. You can also deploy images to your hosts with the help of Docker Hub.
What is Hypervisor?
The hypervisor allows you to create a virtual environment in which the guest virtual machines operate. It controls the guest systems and checks if the resources are allocated to the guests as necessary.
Write a Dockerfile that sets a working directory and copies the current directory into it, then build it (Python base image).
FROM python:2.7-slim
WORKDIR /app
COPY . /app
$ docker build --tag <image_name> .
List out some important advanced docker commands
Command and description:
docker info: displays system-wide information
docker pull: downloads an image
docker stats: shows live container resource usage
docker images: lists downloaded images
https://globalazure2024.azurewebsites.net/
https://www.fullstack.cafe/interview-questions/devops
70 Real Time Docker Interview Questions and Answers
Docker Interview Questions and Answers
1. What according to you is Docker?
Docker is a platform designed for efficiency and availability: it containerizes applications and segregates them from each other across environments such as production, testing, and development.
Docker is a platform that allows developers to create, deploy, and run applications inside containers.
What is a Container?
A container is a lightweight, stand-alone, executable software package that includes everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings.
Containers are isolated from each other and the host system.
2. Define Docker hub or Docker Hub Registry?
Docker Hub is a cloud-based registry service that lets you build and ship your application stacks, store images that have been manually pushed, and link to Docker Cloud so that you can deploy images to your hosts.
Docker hub provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, workflow automation throughout the development pipeline.
3. What is the command to pull a Docker image?
Syntax
$ docker pull [OPTIONS] [PATH/]IMAGE_NAME[:TAG]
Example
$ docker pull ubuntu
4. How to pull a Docker image with a specific version
$ docker pull ubuntu:18.04
5. How to pull the latest version of an image
$ docker pull ubuntu:latest
6. How to create a custom tag for a Docker image
$ docker tag ubuntu:latest fosstechnix/ubuntu:test
7. How to push a Docker image to the Docker Hub registry using the command line
$ docker push fosstechnix/ubuntu:demo
8. How to remove all images on the Docker server
$ docker rmi -f $(docker images -q)
9. How to Pull your custom image from your docker account
$ docker pull fosstechnix/ubuntu:demo
10. How to pull Docker Image from Private Registry
First, log in to the private registry
$ docker login myrepo.fosstechnix.com:9000
Then pull your images
Syntax:
$ docker pull [OPTIONS] ADDRESS:PORT[/PATH]/IMAGE_NAME[:TAG]
Example:
$ docker pull repo.fosstechnix.com:9000/demo-image
11. What do you mean by Docker container?
A Docker container packages an application together with all of its components. Containers share the host operating system's kernel and run as isolated processes on the host.
12. Tell me something about the hypervisor.
A hypervisor is also termed a virtual machine monitor. It divides the host system's resources and allocates them to each guest virtual machine. Hypervisors are what make virtualization feasible and practical.
13. Can you state the advantages of Docker over Hypervisor?
The prominent advantage of Docker over a hypervisor is that Docker containers are lightweight, which makes them far more efficient to operate.
14. Define Docker images.
Docker images are required for establishing Docker containers; an image can be deployed to any Docker environment.
15. How do you check the Docker server and client versions?
$ docker version
command will give the requisite information regarding the Docker client and server version.
16. Tell me something about Docker machine.
Docker Machine is a tool used to install Docker Engine on virtual hosts, which can then be managed with docker-machine commands. It can also be used to provision the nodes of a Docker Swarm cluster.
17. Define Docker Swarms.
Docker Swarm clusters a collection of Docker hosts into a single virtual Docker host. As a result, maintenance becomes easy and operations feasible.
18. Describe the usage of Dockerfile?
A Dockerfile is a series of specific instructions that we send to Docker to build an image. You can think of a Dockerfile as a text document that contains all the commands required to build a Docker image.
OR
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Below is workflow to create Docker Container from Dockerfile
Dockerfile -> Docker Image -> Docker Container
19. Can you please write Dockerfile to create Ubuntu Docker Image
FROM ubuntu:18.04
MAINTAINER FOSS TECHNix support@fosstechnix.com
LABEL version="1.0"
RUN apt-get update && apt-get install -y apache2 && apt-get clean
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
COPY index.html /var/www/html
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
20. Can you please write Dockerfile to create Node Js Docker Image
FROM node:10
RUN mkdir -p /home/nodejs/app
WORKDIR /home/nodejs/app
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "index.js"]
EXPOSE 3000
21. What is the command to build a Docker image from a Dockerfile?
$ docker build -t image_name .
22. What is the command to run the image as a container?
$ docker run -it ubuntu /bin/bash
Here i -> interactive, t -> terminal
23. What is the command to list Docker images?
$ docker images
24. What is the command to list a specific Docker image?
$ docker image ls <image_name>
25. How do you apply port mapping to a container?
We can apply port forwarding when starting a container using the command below.
$ docker run -p <host_port>:<container_port> -d <image_name>
26. What is the command to log in to a Docker container?
$ docker exec -it <container_id_or_name> /bin/bash
27. What is the command to stop a Docker container?
$ docker stop <container_id_or_name>
28. What is the command to start a Docker container?
$ docker start <container_id_or_name>
29. What is the command to remove a Docker container?
$ docker rm <container_id_or_name>
30. What is the command to list running Docker containers?
$ docker ps
31. What are the common instructions in a Dockerfile?
Below are some common instructions in a Dockerfile:
FROM, MAINTAINER, RUN, CMD, LABEL, EXPOSE, ENV, ADD, COPY, ENTRYPOINT, VOLUME, USER, WORKDIR, ARG, ONBUILD, STOPSIGNAL, SHELL, HEALTHCHECK
32. What is the difference between ADD and COPY in a Dockerfile?
COPY: Copies a file or directory from your host into the Docker image. It is used for simply copying files or directories from the build context.
Syntax:
COPY <src> <dest>
Example:
COPY index.html /var/www/html
ADD: Also copies files and directories from your host into the Docker image, but it can additionally fetch remote URLs, extract TAR/ZIP files, etc. It is used for downloading remote resources and extracting archives.
Syntax:
ADD <src> <dest>
Example:
ADD java/jdk-8u231-linux-x64.tar /opt/jdk/
33. Detail some of the specific advantages of Docker over other containerization technologies.
Different types of images can be downloaded from a central location, the Docker Hub.
Docker enables us to share our content through the containers we create.
Docker images can be accessed from official IT systems as well as from personal computers.
34. Enumerate the lifecycle process of Docker containers
Create the container.
Run the container.
Pause the container.
Unpause the container.
Stop the container.
Restart the container.
Destroy the container.
35. What are Docker Namespaces?
Docker uses Linux namespaces to provide containers with isolated workspaces. When a container is launched, Docker creates a set of namespaces for it.
These namespaces provide a layer of isolation: each container runs in its own namespaces, and its access is limited to the namespaces it has been assigned.
36. What is the EXPOSE command? What is its role?
EXPOSE <port>[/<protocol>]
The EXPOSE instruction documents the ports on which the container listens. It acts as documentation between the image builder and the person running the container, and it is used when publishing ports.
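A short sketch of how EXPOSE interacts with publishing (nginx is used as an illustrative image):
EXPOSE 80/tcp
Then, at run time:
$ docker run -d -P nginx           # publishes all EXPOSEd ports to random host ports
$ docker run -d -p 8080:80 nginx   # publishes container port 80 on host port 8080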
37. Why is it necessary to monitor a Docker?
Active monitoring ensures higher productivity and better outcomes. Docker monitoring also notifies you about any fault in the system.
38. Define Docker compose?
Docker Compose is a method for specifying multiple containers and their parameters in a YAML or JSON file. A Docker Compose application commonly uses one or several dependencies, for instance MySQL or MongoDB, for your program.
These dependencies are usually set up locally during development, a process that then has to be redone before moving to a production setup. Docker Compose lets you avoid repeating these deployment and configuration steps.
OR
Docker compose is used to Defining and Running multi container Docker Application, you can use JSON or YAML file to write docker compose
39. Can you please create Node.js, MongoDB and MySQL Docker environments using a docker-compose file?
version: "2"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
      - mysql
  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db
  mysql:
    image: mysql
    env_file:
      - mysql-server.env
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mongo-data:
  mysql-data:
40. What is the command to run Docker Compose?
$ docker-compose up
41. What is depends_on in Docker Compose?
It links containers in one service to another service and expresses the startup dependency between services.
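A minimal illustrative snippet (the service names are hypothetical); note that depends_on controls start order only, not application readiness:
version: "3"
services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: mysql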
42. Can we use JSON instead of YAML for the Compose file in Docker?
Yes, we can.
43. Describe the role of the Docker save and load commands.
The docker save command makes it possible to export a Docker image as a tar archive:
$ docker save -o <image_name>.tar <image_name>
44. The exported image can then be imported on another host using the load command:
$ docker load -i <image_name>.tar
45. Why and how do you identify the status of a Docker container?
Identifying the status helps us act on the container accordingly. To see the status of all containers, run the following command.
$ docker ps -a
46. Which is the most feasible type of application for Docker containers – Stateless or Stateful?
It is best to build stateless Docker container applications. We can build a container from our code and keep configurable state parameters out of the application; the same container can then run in development and QA environments with different parameters.
This lets us reuse the same image in various scenarios. A stateless application is also much simpler to scale with Docker containers.
47. Differentiate between a Docker layer and a image.
A layer in Docker represents a single instruction from a Dockerfile, while an image is the resulting set of read-only layers.
48. Say something about virtualization.
Virtualization is a way of logically dividing a physical machine so that multiple operating systems can run on it concurrently. The situation changed radically once businesses and open-source communities provided ways to handle privileged instructions, allowing multiple operating systems to run concurrently on a single x86-based machine.
49. If you accidentally exit a Docker container, will you lose your files?
No. A container's filesystem persists until the container is deleted, so you lose your progress only if you remove the container itself (or delete the data from within it).
50. What factors decide the number of containers you can run?
Factors such as application size and CPU capacity directly influence the number of Docker containers a host can run: with a powerful CPU and enough resources, you can easily run a large number of containers.
51. What is the difference between CMD and ENTRYPOINT in a Dockerfile?
The CMD instruction specifies the default command to execute in the running container; there should be only one CMD in a Dockerfile.
The ENTRYPOINT instruction is used to configure a container that runs as an executable.
52. What are Dockerfile instructions?
A Dockerfile contains a set of instructions used to build a Docker image; from the Docker image, a Docker container is run.
53. What are Docker Lifecycle commands ?
Below are some Docker Lifecycle commands for every Docker Container
docker create
docker run
docker pause
docker unpause
docker stop
docker start
docker restart
docker attach
docker wait
docker rm
docker kill
54. What is Docker Prune ?
Using Docker prune, we can delete unused or dangling containers, images, volumes, and networks.
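For example:
$ docker container prune             # remove stopped containers
$ docker image prune                 # remove dangling images
$ docker system prune -a --volumes   # remove all unused containers, images, networks and volumes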
Why is Docker Important for DevOps?
1. Consistency: because containers package up all dependencies, an application will run the same regardless of where the container is run. This eliminates the “it works on my machine” problem.
2. Isolation: Containers ensure that applications run in isolation from each other. This means if one application crashes, it won’t affect others.
3. Scalability: With Docker, it’s easy to scale applications up or down, depending on the demand, by simply starting or stopping containers.
4. Portability: You can build a container on your local machine, then deploy it to various environments (e.g., Integration, staging, production) without changes.
5. Infrastructure Efficiency: Containers are lightweight compared to virtual machines, as they share the host system’s OS, rather than needing their own operating system.
What are key components of Docker?
1. Docker Images: Docker uses images to create containers. An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software.
2. Docker Containers: A container is a runtime instance of an image. It’s the “live” version of the Docker image.
3. Dockerfile: A script with commands to build a Docker image. It defines how the image should be built, what software it should contain, and how it should run.
4. Docker Hub: A cloud-based registry where Docker users and partners create, test, store, and distribute container images.
5. Docker Compose: A tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to define the services, networks, and volumes, and then start all services with a single command.
6. Docker Volumes: Used to store data outside of containers, ensuring that data persists even if the container is deleted.
Please Explain How Does Docker Work?
At a high level, Docker works by using a technology called containerization. Unlike traditional virtualization, where each application requires a separate operating system instance, containerization allows multiple containers to run on a single OS instance. The Docker engine, which is responsible for building and running containers, achieves this.
Explain What is difference between Docker vs. Traditional VMs?
Performance: Containers are lightweight and share the host OS, while VMs have their own full OS instance.
Size: VM images are often several GBs because they include a full OS. Docker images are much smaller as they only include the application and its dependencies.
Startup Time: Containers can start almost instantly, whereas VMs can take several minutes.
Advanced Docker Interview Questions and Answers
What is the difference between containerization and virtualization?
Containerization and virtualization are both technologies for isolating applications from each other. However, they work in different ways.
Virtualization creates a virtual machine (VM) that is a complete replica of a physical machine, including the operating system, hardware, and applications. VMs are typically larger and slower than containers.
Containerization packages an application and its dependencies into a lightweight container that shares the host operating system’s kernel. Containers are typically smaller and faster than VMs.
What are the benefits of using Docker?
There are many benefits to using Docker, including:
Isolation: Docker containers isolate applications from each other, which can help to prevent conflicts and security vulnerabilities.
Portability: Docker containers can be run on any machine with Docker installed, which makes them very portable.
Reproducibility: Docker containers start from the same image every time, which makes deployments very reproducible.
Scalability: Docker containers can be easily scaled up or down, which makes them very scalable.
What are the components of Docker?
The main components of Docker are:
Docker image: A Docker image is a read-only template that contains the instructions for creating a container.
Docker container: A Docker container is an instance of a Docker image. It is a running instance of an application that is isolated from the host operating system.
Docker Hub: Docker Hub is a public registry where Docker images can be stored and shared.
What is a Docker registry?
A Docker registry is a repository where Docker images are stored and shared. Docker Hub is a public Docker registry, but there are also many private registries.
What are some common Docker networking concepts?
Some common Docker networking concepts include:
Bridge network: The default network for Docker containers. It connects containers to each other and to the host machine.
Overlay network: A network that can span multiple Docker hosts.
External network: A network that connects containers to external resources, such as the internet.
What are some security considerations for using Docker?
Some security considerations for using Docker include:
Image scanning: Scanning Docker images for vulnerabilities before running them.
User isolation: Using different user accounts for different Docker containers.
Network isolation: Restricting network access to Docker containers.
How can you troubleshoot Docker problems?
Some common ways to troubleshoot Docker problems include:
Checking the Docker logs: The Docker logs can provide valuable information about errors and warnings.
Inspecting Docker containers: You can inspect Docker containers to get more information about their state and configuration.
Using the Docker CLI tools: There are a number of Docker CLI tools that can be used to troubleshoot problems, such as docker ps, docker logs, and docker info.
Advanced Docker Interview Questions and Answers
What are some advanced Docker features?
Some advanced Docker features include:
Docker Swarm: A tool for managing clusters of Docker hosts.
Docker Compose: A tool for defining and running multi-container applications.
Dockerfiles with multiple stages: A way to build Docker images with multiple stages, which can improve image efficiency.
Docker Secrets: A way to store sensitive information securely in Docker containers.
Docker BuildKit: A tool for building Docker images that can improve build performance.
How do you keep your Docker images up to date?
There are a number of ways to keep your Docker images up to date, including:
Using a continuous integration (CI) pipeline: A CI pipeline can automatically build and test Docker images.
Using a vulnerability scanner: A vulnerability scanner can scan Docker images for vulnerabilities.
Scenario Based Docker Interview Questions and Answers
Scenario 1: Deploying a web application
You are tasked with deploying a new web application to production. The application is a microservices-based architecture and consists of several different containers. How would you approach this task using Docker?
Answer:
Create Dockerfiles for each microservice: This will ensure that each microservice is always built in the same way and that the development environment is consistent with the production environment.
Build Docker images for each microservice: This will create a portable and reproducible image for each microservice that can be run on any machine with Docker installed.
Store Docker images in a registry: This will make it easy to deploy and manage the microservices in production.
Use a container orchestration tool, such as Kubernetes, to manage the microservices in production: This will make it easy to scale the microservices up or down and to manage their health and availability.
Scenario 2: Troubleshooting a Docker container
A Docker container is not starting. How would you troubleshoot this problem?
Answer:
Check the Docker logs: The Docker logs can provide valuable information about errors and warnings that may be preventing the container from starting.
Inspect the Docker container: You can inspect the Docker container to get more information about its state and configuration. This can help you to identify any problems with the container’s configuration or with the application that is running in the container.
Use the Docker CLI tools: There are a number of Docker CLI tools that can be used to troubleshoot problems, such as docker ps, docker logs, and docker info.
Scenario 3: Securing Docker containers
What are some security considerations for using Docker?
Answer:
Image scanning: Scanning Docker images for vulnerabilities before running them.
User isolation: Using different user accounts for different Docker containers.
Network isolation: Restricting network access to Docker containers.
Least privilege: Running Docker containers with the least amount of privilege necessary.
Regular updates: Regularly updating Docker images and the Docker daemon to the latest version.
Scenario 4: Migrating to Docker
You are tasked with migrating a legacy application to Docker. What are some of the challenges that you may face?
Answer:
Packaging the application: The application may not be designed to be run in a container, so you may need to make changes to the application code to make it compatible with Docker.
Managing dependencies: The application may have a number of dependencies that need to be installed and managed in the Docker container.
Testing the application: You need to make sure that the application works correctly when running in a container.
Deploying the application: You need to decide how to deploy the application to production, such as using Docker Swarm or a container orchestration tool like Kubernetes.
Scenario 5: You have a multi-container application, and one of the containers is not communicating with the others. How would you troubleshoot and resolve the issue?
Answer:
Check container logs: Use docker logs to inspect the logs of the misbehaving container for any error messages.
Inspect network connectivity: Ensure containers can reach each other by checking network configurations (docker network inspect) and using tools like ping or telnet between container IP addresses.
Verify container dependencies: Ensure that dependencies specified in your application (e.g., environment variables, network configurations) are correctly set.
Update container configurations: If necessary, modify container configurations, such as environment variables or network settings.
Scenario 6: You want to scale a service horizontally by adding more instances of a container. How would you achieve this using Docker?
Answer:
Use Docker Compose: Define the service configuration in a docker-compose.yml file and then use the docker-compose up --scale <service>=<replicas> command to scale the service.
Use Docker Swarm: If working with a swarm, deploy the service with docker service create --replicas <count> <image>.
Use Docker Compose with Swarm mode: If using both Docker Compose and Swarm, define the services in docker-compose.yml and deploy them to the swarm with docker stack deploy.
Scenario 7: Your Docker host is running out of disk space. How would you identify and clean up unused Docker resources?
Answer:
Use docker system df: This command shows the disk usage of Docker components. Look for items consuming the most space.
Remove unused containers: docker container prune removes stopped containers.
Remove unused images: docker image prune removes dangling (untagged) images.
Remove unused volumes: docker volume prune removes unused volumes.
Remove unused networks: docker network prune removes unused networks.
Scenario 8: You need to deploy a multi-container application on a remote server. How would you securely manage the deployment process?
Answer:
Use Docker Swarm or Kubernetes: These orchestration tools provide built-in security features for managing and deploying multi-container applications.
Use SSH for remote access: Connect to the remote server securely using SSH for deploying Docker containers manually.
Set up TLS for Docker: Configure Docker daemon to use TLS to encrypt communication between Docker clients and the daemon for a secure remote connection.
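A hedged sketch of a TLS-protected daemon and client (the certificate file names are illustrative):
$ dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376
$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://<remote_host>:2376 ps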
Scenario 9: You want to update a running container with a new version of the application without downtime. How can you achieve zero-downtime deployments in Docker?
Answer:
Use Blue-Green Deployments: Start a new set of containers (Green) with the updated version, divert traffic to them, and then stop the old containers (Blue).
Use Rolling Deployments: Update one container at a time while maintaining a specified number of instances running.
Use Orchestration Tools: Tools like Docker Swarm or Kubernetes can automatically manage rolling updates with minimal downtime.
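For instance, in Docker Swarm a rolling update can be driven with a command like this (the service and image names are illustrative):
$ docker service update --image myapp:2.0 --update-parallelism 1 --update-delay 10s web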
Scenario 10: You are experiencing performance issues with your Dockerized application. How would you profile and optimize the performance of Docker containers?
Answer:
Use Docker Stats: Monitor container resource usage using docker stats <container_name>.
Profile Application: Use tools like strace, top, or htop inside the container to identify resource-intensive processes.
Optimize Dockerfile: Ensure your Dockerfile is efficient, avoiding unnecessary layers and optimizing dependencies.
Adjust Resource Limits: Use Docker Compose or Docker Swarm to set resource limits for individual containers.
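For example (the limits and image name are illustrative):
$ docker run -d --memory 512m --cpus 1.5 myimage:latest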
What are Best Practices to consider while building a Docker image
1. Use a multi-stage Dockerfile: writing the Dockerfile in multiple stages helps reduce the size of the resulting images.
2. Start with the appropriate base image: when building Docker images, look for minimal, appropriate base images.
3. Use specific tags while building Docker images: rather than generic tags like latest, use specific tags like v1.1.1. This helps with operations like rollback.
4. Leverage caching: order your commands wisely to take advantage of Docker’s caching mechanism. Place less frequently changing commands near the top of the Dockerfile.
5. Use COPY instead of ADD: prefer the COPY instruction over ADD unless you specifically need the additional functionality provided by ADD.
6. Minimize the number of layers: reduce the number of layers in your image by combining multiple commands into a single RUN instruction.
7. Avoid running processes as root: whenever possible, run your application processes as a non-root user to enhance security.
8. Use .dockerignore: create a .dockerignore file to exclude unnecessary files and directories from being copied into the image.
9. Label your images: add metadata to your image using labels to provide additional information, such as version, maintainer, or description.
10. Document your Dockerfile: include comments to explain complex or critical parts of your Dockerfile for better understanding by others.
1. What is Docker?
Docker is an open-source platform that allows you to automate the deployment and management of applications within containers. Containers are lightweight and isolated environments that provide consistent and reproducible application execution.
2. What is a Dockerfile?
A Dockerfile is a text file that contains instructions on how to build a Docker image. It specifies the base image, adds dependencies, sets environment variables, copies files, and defines commands to run within the container.
Example Dockerfile:
# Use a base image
FROM ubuntu:latest
# Set the working directory
WORKDIR /app
# Copy application files
COPY . /app
# Install dependencies
RUN apt-get update && apt-get install -y <packages>
# Set environment variables
ENV PORT=8080
# Define the command to run the application
CMD ["python", "app.py"]
3. How do you build a Docker image from a Dockerfile?
To build a Docker image from a Dockerfile, you can use the docker build command followed by the directory containing the Dockerfile.
Example command:
docker build -t myimage:latest .
4. How do you run a Docker container?
To run a Docker container, you can use the docker run command followed by the image name.
Example command:
docker run myimage:latest
5. What is a Docker registry?
A Docker registry is a storage system for Docker images. It can be public or private and allows you to store and distribute Docker images across multiple systems.
6. How do you push a Docker image to a Docker registry?
To push a Docker image to a Docker registry, you need to tag the image with the registry URL and then use the docker push command.
Example commands:
docker tag myimage:latest registry.example.com/myimage:latest
docker push registry.example.com/myimage:latest
7. How do you scale Docker containers in a cluster using Docker Compose?
To scale Docker containers in a cluster, you can use Docker Compose and specify the desired number of replicas for a service. Example Docker Compose file:
version: '3'
services:
app:
image: myimage:latest
deploy:
replicas: 3
8. How do you persist data in Docker containers?
To persist data in Docker containers, you can use Docker volumes or bind mounts. Volumes are managed by Docker and provide better isolation, while bind mounts reference a specific directory on the host system.
Example Docker volume:
docker run -v myvolume:/data myimage:latest
9. What is Docker Compose?
Docker Compose is a tool that allows you to define and manage multi-container Docker applications. It uses YAML files to configure the services, networks, and volumes required by the application.
10. How do you link containers in Docker?
In Docker, container linking is an older method for connecting containers together. However, the preferred method is to use Docker networks, which provide better isolation and functionality.
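A minimal sketch of the preferred network-based approach (the container and image names are hypothetical); containers on the same user-defined network can resolve each other by name through Docker's built-in DNS:
$ docker network create app-net
$ docker run -d --name db --network app-net mysql
$ docker run -d --name web --network app-net myimage:latest   # can reach the database at the hostname "db"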
11. How do you remove Docker images and containers?
To remove Docker images and containers, you can use the docker rm and docker rmi commands.
Example commands:
docker rm container_id
docker rmi image_id
12. What is the difference between a Docker container and an image?
A Docker image is a template that contains all the dependencies and configuration required to run an application. A Docker container is an instance of an image that is running as a process on a host.
What is Docker, and how is it different from virtual machines?
Docker is a containerization platform that simplifies application deployment by ensuring software and its dependencies run uniformly on any infrastructure, from laptops to servers to the cloud.
Using Docker allows you to bundle code and dependencies into a container image you can then run on any Docker-compatible environment. This approach is a significant improvement over traditional virtual machines, which are less efficient and come with higher overheads.
Key Docker Components
Docker Daemon: A persistent background process that manages and executes containers.
Docker Engine: The CLI and API for interacting with the daemon.
Docker Registry: A repository for Docker images.
Core Building Blocks
Dockerfile: A text document containing commands that assemble a container image.
Image: A standalone, executable package containing everything required to run a piece of software.
Container: A runtime instance of an image.
Virtual Machines vs. Docker Containers
Virtual Machines
Advantages:
Isolation: VMs run separate operating systems, providing strict application isolation.
Inefficiencies:
Resource Overhead: Each VM requires its operating system, consuming RAM, storage, and CPU. Running multiple VMs can lead to redundant resource use.
Slow Boot Times: Booting a VM involves starting an entire OS, slowing down deployment.
Containers
Efficiencies:
Resource Optimizations: As containers share the host OS kernel, they are exceptionally lightweight, requiring minimal RAM and storage.
Rapid Deployment: Containers start almost instantaneously, accelerating both development and production.
Isolation Caveats:
Application-Level Isolation: While Docker ensures the separation of containers from the host and other containers, it relies on the host OS for underlying resources.
Code Example: Dockerfile
Here is the Dockerfile:
FROM python:3.8
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
Core Unique Features of Docker
Layered File System: Docker images are composed of layers, each representing a set of file changes. This structure aids in minimizing image size and optimizing builds.
Container Orchestration: Technologies such as Kubernetes and Docker Swarm enable the management of clusters of containers, providing features like load balancing, scaling, and automated rollouts and rollbacks.
Interoperability: Docker containers are portable, running consistently across diverse environments. Additionally, Docker complements numerous other tools and platforms, including Jenkins for CI/CD pipelines and AWS for cloud services.
2. Can you explain what a Docker image is?
A Docker image is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and configuration files.
It provides consistency across environments by ensuring that each instance of an image is identical, a key principle of Docker's build-once-run-anywhere philosophy.
Image vs. Container
Image: A static package that encompasses everything the application requires to run.
Container: An operating instance of an image, running as a process on the host machine.
Layered File System
Docker images comprise multiple layers, each representing a distinct file system modification. Layers are read-only, and the final container layer is read/write, which allows for efficiency and flexibility.
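You can inspect an image's layers and their sizes with, for example:
$ docker history nginx:latest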
Key Components
Operating System: Traditional images have a full or bespoke OS tailored for the application's needs. Recent developments like "distroless" images, however, focus solely on application dependencies.
Application Code: Your code and files, which are specified during the image build.
Image Registries
Images are stored in Docker image registries like Docker Hub, which provides a central location for image management and sharing. You can download existing images, modify them, and upload the modified versions, allowing teams to collaborate efficiently.
How to Build an Image
Dockerfile: Describes the steps and actions required to set up the image, from selecting the base OS to copying the application code.
Build Command: Docker's build command uses the Dockerfile as a blueprint to create the image.
Advantages of Docker Images
Portability: Docker images ensure consistent behavior across different environments, from development to production.
Reproducibility: If you're using the same image, you can expect the same application behavior.
Efficiency: The layered filesystem reduces redundancy and accelerates deployment.
Security: Distinct layers permit granular security control.
Code Example: Dockerfile
Here is the Dockerfile:
# Use a base image
FROM ubuntu:latest
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Specify the command to run on container start
CMD ["/bin/bash"]
Best Practices for Dockerfiles
Use the official base image if possible.
Aim for minimal layers for better efficiency.
Regularly update the base image to ensure security and feature updates.
Reduce the number of packages installed to minimize security risks.
3. How does a Docker container differ from a Docker image?
Docker images serve as templates for containers, whereas Docker containers are running instances of those images.
Key Distinctions
State: Containers encapsulate both the application code and its runtime environment in a stable and consistent state. In contrast, images are passive and don't change once created.
Mutable vs Immutable: Containers, like any running process, can modify their state. In contrast, images are immutable and do not change once built.
Disk Usage: Containers have both writable layers (such as logs or configuration files) and read-only layers (the image layers), potentially leading to increased disk usage over time. Docker's use of layered storage, however, limits this growth.
Images, on the other hand, are solely read-only, meaning each instance based on the same image doesn't consume additional disk space.
Docker Image vs Container
Practical Demonstration
Here is the code:
Dockerfile - Defines the image:
# Set the base image
FROM python:3.8
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
Building an Image - Use the docker build command to create the image.
docker build -t myapp .
Instantiating Containers - Run the built image with docker run to spawn a container.
# Run a single command within a new container
docker run myapp python my_script.py
# Run a container in detached mode and enter it to explore the environment
docker run -d -it --name mycontainer myapp /bin/bash
Viewing Containers - The docker container ls or docker ps commands display active containers.
Modifying Containers - As an example, you can change the content of a container by entering in via docker exec.
docker exec -it mycontainer /bin/bash
Stopping and Removing Containers - This can be done using the docker stop and docker rm commands or combined with the -f flag.
docker stop mycontainer
docker rm mycontainer
Cleaning Up Images - Remove any unused images to save storage space.
docker image prune -a
4. What is the Docker Hub, and what is it used for?
The Docker Hub is a public cloud-based registry for Docker images. It's a central hub where you can find, manage, and share your Docker container images. Essentially, it is a version control system for Docker containers.
Key Functions
Image Storage: As a centralized repository, the Hub stores your Docker images, making them easily accessible.
Versioning: It maintains a record of different versions of your images, enabling you to revert to previous iterations if necessary.
Collaboration: It's a collaborative platform where multiple developers can work on a project, each contributing to and pulling from the same image.
Link to GitHub: Docker Hub integrates with the popular code-hosting platform GitHub, allowing you to automatically build images using pre-defined build contexts.
Automation: With automated builds, you can rest assured that your images are up-to-date and built to the latest specifications.
Webhooks: These enable you to trigger external actions, like CI/CD pipelines, when certain events occur, enhancing the automation capabilities of your workflow.
Security Scanning: Docker Hub includes security features to safeguard your containerized applications. It can scan your images for vulnerabilities and security concerns.
Cost and Pricing
Free Tier: Offers one private repository and unlimited public repositories.
Pro and Team Tiers: Both come with advanced features. The Team tier provides collaboration capabilities for organizations.
Use Cases
Public Repositories: These are ideal for sharing your open-source applications with the community. Docker Hub is home to a multitude of public repositories, each extending the functionality of Docker.
Private Repositories: For situations requiring confidentiality, or to ensure compliance in regulated environments, Docker Hub allows you to maintain private repositories.
Key Benefits and Limitations
Benefits:
Centralized Container Distribution
Security Features
Integration with CI/CD Tools
Multi-Architecture Support
Limitations:
Limited Private Repositories in the Free Plan
Might Require Additional Security Measures for Sensitive Workloads
5. Explain the Dockerfile and its significance in Docker.
One of the defining features of Docker is its use of Dockerfiles to automate the creation of container images. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Common Commands
FROM: Sets the base image for subsequent build stages.
RUN: Executes commands within the image and then commits the changes.
EXPOSE: Informs Docker that the container listens on a specific port.
ENV: Sets environment variables.
ADD/COPY: Adds files from the build context into the image.
CMD/ENTRYPOINT: Specifies what command to run when the container starts.
Multi-Stage Builds
FROM: Allows for multiple build stages in a single Dockerfile.
COPY --from=source: Enables copying from another build stage, useful for extracting build artifacts.
Image Caching
Docker uses caching to speed up build processes. If a layer changes, Docker rebuilds it and all layers that depend on it; this can cause unexpected cache misses that make builds slower than anticipated.
To optimize, place commands that change frequently (such as file copying or package installation) toward the end of the file.
The docker build command assembles an image from a Dockerfile and a build context. The build context is the path or URL to the directory containing the Dockerfile and the files it references.
Tips for Writing Efficient Dockerfiles
Use Specific Base Images: Start from the most lightweight, appropriate image to keep your build lean.
Combine Commands: Chaining commands with && (where viable) reduces layer count, enhancing efficiency.
Remove Unneeded Files: Eliminate files your application doesn't require, especially temporary build files or cached resources.
Code Example: Dockerfile for a Node.js Web Server
Here is the Dockerfile:
# Use a specific version of Node.js as the base
FROM node:14-alpine
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json first to leverage caching when the
# dependencies haven't changed
COPY package*.json ./
# Install NPM dependencies
RUN npm install --only=production
# Copy the rest of the application files
COPY . .
# Expose port 3000
EXPOSE 3000
# Start the Node.js application
CMD ["node", "app.js"]
6. How does Docker use layers to build images?
Docker follows a Layered File System approach, employing Union File Systems like AUFS, OverlayFS, and Device Mapper to stack image layers.
This structure enhances modularity, storage efficiency, and image-building speed. It also offers read-only layers for image consistency and integrity.
Union File Systems
Union File Systems permit stacking multiple directories or file systems, presenting them coherently as a single unit. While several such systems are in use, AUFS and OverlayFS are notably popular.
AUFS: A front-runner for a long time, AUFS offers versatile compatibility but is not part of the Linux kernel.
OverlayFS: Now integrated into the Linux kernel, OverlayFS is lightweight and provides backward compatibility with ext4 and XFS.
Image Layering in Docker
When stacking Docker image layers, it's akin to a file system with read-only layers superimposed by a writable layer, the container layer. This setup ensures separation and persistence:
Base Image Layer: This is the foundation, often comprising the operating system and core utilities. It's mostly read-only to safeguard uniformity.
Intermediate Layers: These are interchangeable and encapsulate discrete modifications. Consequently, they are also mostly read-only.
Topmost or Container Layer: This layer records real-time alterations made within the container and is mutable.
Code Example: Layers
Here is the code:
Each layer is defined by a Dockerfile instruction.
The base image is ubuntu:latest, and the application code is stored in a file named app.py.
# Layer 1: Start from base image
FROM ubuntu:latest
# Layer 2: Set the working directory
WORKDIR /app
# Layer 3: Copy the application code
COPY app.py /app
# Placeholder for Dockerfile
# ...
7. What's the difference between the COPY and ADD commands in a Dockerfile?
Let's look at the subtle distinctions between the COPY and ADD commands within a Dockerfile.
Purpose
COPY: Designed for straightforward file and directory copying. It's the preferred choice for most use-cases.
ADD: Offers additional features such as URI support. However, since it's more powerful, it's often recommended to stick with COPY unless you specifically need the extra capabilities.
Key Distinctions
URI and TAR Extraction: Only ADD allows you to use URIs (including HTTP URLs) as well as automatically extract local .tar resources. For simple file transfers, COPY is the appropriate choice.
Cache Considerations: Unlike COPY, which respects image build cache, ADD bypasses cache for any resources that differ even slightly from their cache entries. This can lead to slower builds.
Security Implications: Since ADD permits downloading files at build-time, it introduces a potential security risk point. In scenarios where the URL isn't controlled, and the file isn't carefully validated, prefer COPY.
File Ownership: While both COPY and ADD maintain file ownership and permissions during the build process, there might be OS-specific deviations. Consistent behavior is often a critical consideration, making COPY the safer choice.
Simplicity and Transparency: Using COPY exclusively, when possible, ensures clarity and simplifies Dockerfile management. For instance, it's easier for another developer or a CI/CD system to comprehend a straightforward COPY command than to ascertain the intricate details of an ADD command that incorporates URL-based file retrieval or TAR extraction.
Best Practices
Avoid Web-Based Transfers: Steer clear of resource retrieval from untrusted URLs within Dockerfiles. It's safer to copy these resources into your build context, ensuring security and reproducibility.
Cache Management: Because ADD can bypass caching for resources that are even minimally different from their cached versions, it can inadvertently lead to slowed build processes. To avoid this, prefer the deterministic, cache-friendly behavior of COPY whenever plausible.
8. What’s the purpose of the .dockerignore file?
The .dockerignore file, much like .gitignore, is a list of patterns indicating which files and directories should be excluded from image builds.
Using this file, you can optimize the build context, which is the set of files and directories sent to the Docker daemon for image creation.
By excluding unnecessary files, such as build or data files, you can reduce the build duration and optimize the size of the final Docker image. This is important for minimizing container footprint and enhancing overall Docker efficiency.
9. How would you go about creating a Docker image from an existing container?
Let's look at each of the two main methods:
docker container commit Method:
For simple use cases or quick image creation, this method can be ideal.
It uses the following command:
docker container commit <container_id> <repository>:<tag>
Here's a detailed example:
Say you have a running container derived from the ubuntu image and nicknamed 'my-ubuntu'.
Start the container:
docker run --interactive --tty --name my-ubuntu ubuntu
For instance, you decide to customize the my-ubuntu container by adding a package.
Make the package change (for this example):
docker exec -it my-ubuntu bash # Enter the shell of your 'my-ubuntu' container
apt update
apt install -y neofetch # Install `neofetch` or another package for illustration
exit # Exit the container's shell
Take note of the "Container ID" using docker ps command:
docker ps
You will see output resembling:
CONTAINER ID IMAGE COMMAND ... NAMES
f2cb54bf4059 ubuntu "/bin/bash" ... my-ubuntu
In this output, "f2cb54bf4059" is the Container ID for 'my-ubuntu'.
Use the docker container commit command to create a new image based on changes in the 'my-ubuntu' container:
docker container commit f2cb54bf4059 my-ubuntu:with-neofetch
Now, you have a modified image based on your updated container. You can verify it by running:
docker run --rm -it my-ubuntu:with-neofetch neofetch
Here, "f2cb54bf4059" is the Container ID that you can find using docker ps.
Image Build Process Method:
This method provides more control, especially in intricate scenarios. It generally involves a two-step process where you start by creating a Dockerfile and then build the image using docker build.
Steps:
Create A Dockerfile: Begin by preparing a Dockerfile that includes all your customizations and adjustments.
For our 'my-ubuntu' example, the Dockerfile can be as simple as:
```Dockerfile
FROM ubuntu:latest
RUN apt update && apt install -y neofetch
```
Build the Image: Enter the directory where your Dockerfile resides and start the build using the following command:
```
docker build -t my-ubuntu:with-neofetch .
```
Subsequently, you can run a container using this new image and verify your modifications:
```
docker run --rm -it my-ubuntu:with-neofetch neofetch
```
10. In practice, how do you reduce the size of Docker images?
Reducing Docker image sizes is crucial for efficient resource deployment. You can achieve this through various strategies.
Multi-Stage Builds
Multi-Stage Builds allow you to use multiple Dockerfile stages, segregating different aspects of your build process. This enables a cleaner separation between build-time and run-time libraries, ultimately leading to smaller images.
Here is a Dockerfile with a multi-stage build:
```Dockerfile
# Use an official Node.js runtime as the base image
FROM node:current-slim AS build

# Set the working directory in the container
WORKDIR /app

# Copy the package.json and package-lock.json files to the workspace
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy the entire project into the container
COPY . .

# Build the app
RUN npm run build

# Use a smaller base image for the final stage
FROM node:alpine AS runtime

# Set the working directory in the container
WORKDIR /app

# Copy built files and the dependency manifest
COPY --from=build /app/package*.json ./
COPY --from=build /app/dist ./dist

# Install production dependencies only
RUN npm install --only=production

# Specify the command to start the app
CMD ["node", "dist/main.js"]
```
The --from flag in the COPY instructions is key here, as it lets you copy artifacts from a previous build stage into the final image.
.dockerignore File
Similar to .gitignore, the .dockerignore file excludes files and folders from the Docker build context. This can significantly reduce the size of your build context, leading to slimmer images.
Here is an example of a .dockerignore file:
```
node_modules
npm-debug.log
```
Using Smaller Base Images
Selecting a minimalistic base image can lead to significantly smaller containers. For Node.js, you can choose a smaller base image such as node:alpine, especially for production use. The alpine variant is particularly lightweight as it's built on the Alpine Linux distribution.
Here are images with approximate sizes:
node:current (about 900MB)
node:current-slim (about 200MB)
node:alpine (about 90MB)
One-Time Execution Commands
Each RUN, COPY, and ADD instruction in a Dockerfile creates a new image layer, which can lead to image bloat. To mitigate this, leverage a single RUN command that packages multiple operations. This approach reduces additional layer creation, resulting in smaller images.
Here is an example:
```
RUN apt-get update && apt-get install -y nginx && apt-get clean
```
Ensure that you always combine such commands in a single RUN instruction, separated by logical operators like &&, and clean up any temporary files or caches to keep the layer minimal.
Package Managers and Caching
When using package managers like npm and pip in your images, avoid installing development dependencies and package caches.
For npm, running the following command prevents the installation of development dependencies:
```
RUN npm install --only=production
```
For pip, you can keep the layer lean by skipping the download cache:
```
RUN pip install --no-cache-dir -r requirements.txt
```
This practice significantly reduces the image size by only including necessary runtime dependencies.
Utilize Glob Patterns for COPY
When using the COPY command in your Dockerfile, use specific paths and glob patterns (combined with a .dockerignore file) to ensure only essential files are copied.
Here is an example:
```
COPY ["*.json", "*.sh", "config/", "./"]
```
11. What command is used to run a Docker container from an image?
To run a Docker container from an image, you can use the docker run command:
```
docker run IMAGE
```
The command docker run combines several actions:
Creating: Creates a new container from the image, as docker create would.
Starting: Activates the container, starting its process.
Connecting: Attaches the container to the necessary network, storage, and system resources.
Basic Usage
Here is the generic structure:
```
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
```
Practical Example
```
docker run -d -p 5000:5000 --name myapp myimage:latest
```
In this example:
-d: The container is detached, running in the background.
-p 5000:5000: The host port 5000 is mapped to the container port 5000.
--name myapp: The container is named myapp.
myimage:latest: The image used is myimage with the latest tag.
Additional Options and Example
Here is an alternative command:
```
docker run --rm -it -v /host/path:/container/path myimage:1.2.3 /bin/bash
```
This:
--rm: Deletes the container after it stops.
-it: Opens an interactive terminal.
-v: Mounts the host's /host/path to the container's /container/path.
/bin/bash: Overrides the image's default command when starting the container.
12. Can you explain what a Docker namespace is and its benefits?
Strictly speaking, Docker isolates containers using Linux kernel namespaces (pid, net, mnt, ipc, uts, and so on); more loosely, "namespace" also refers to how Docker objects like containers, images, and volumes are uniquely named. In both senses, namespaces streamline resource organization and data isolation, supporting your security and operational requirements.
Advantages of Docker Namespaces
Isolated Environment: Ensures separation, vital for multi-tenant systems, in-house CI/CD, and staging environments.
Resource Segregation: Every workspace allocates distinct processes, network ports, and filesystem mounts.
Multi-Container Management: You can track related containers across various environments thoroughly.
Improved Debugging and Error Control: Namespaces keep environments clean and facilitate accurate error tracking.
Enhanced Security: Reduces the risk of data breaches and system interdependencies.
Portability and Adaptability: Supports a consistent operational model, irrespective of the environment.
Key Namespace Types
Image IDs: Unique identifiers for Docker images.
Container Names: Provides friendly readability to Docker containers.
Volume Names: Simplified references in managing persistent data volumes.
Code Example: Working with Docker Namespaces
Here is the Python code (using the docker SDK for Python; it assumes a container named 'event-container' already exists):
```python
import docker

# Establish a connection with the Docker daemon
client = docker.from_env()

# Pull a Docker image
client.images.pull('ubuntu:latest')

# List existing Docker images
images = client.images.list()
print(images)

# Retrieve a container by its name
# (assumes a container named 'event-container' already exists)
event_container = client.containers.get('event-container')

# Inspect the container to gather detailed information
inspect_data = event_container.attrs
print(inspect_data)

# Create a new named Docker volume
client.volumes.create('my-named-volume')
```
13. What is a Docker volume, and when would you use it?
A Docker volume is a directory or file that lives in a Docker-managed area of the host filesystem, outside any specific container's writable layer. This decoupling allows data to persist even after containers have been stopped or removed.
Volume Types
Host-Mounted Volumes: These link a directory on the host machine to the container.
Named Volumes: They have a specific name and are managed by Docker.
Anonymous Volumes: Created and managed by Docker under a random identifier rather than a name; they behave like named volumes but are harder to reference after creation.
Use Cases
Docker volumes are fundamental for data storage and sharing, which is especially beneficial in microservice and stateful applications.
File Sharing: The same volume can be mounted into multiple containers, facilitating file sharing without needing to commit data to an image or set up additional systems like NFS.
Database Management: Ensures database consistency by isolating database files within volumes. This makes it simpler to back up and restore databases.
Stateful Container Handling: Volumes assist in preserving stateful container data, like logs or configuration files, ensuring uninterrupted service data delivery and persistence, even in case of container updates or failures.
Configuration and Secret Management: Volumes provide an excellent way to mount configuration files and secrets. This can help you secure sensitive data and reduce the need to build it into the image.
Backup and Restore: By using volumes, you can separate your data from the lifecycle of the container. It becomes easier to back them up and restore them in the event of data loss.
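A minimal sketch of the database use case above (the names and password are illustrative):
```
# Create a named volume and mount it into a database container
docker volume create app-data
docker run -d --name db -e POSTGRES_PASSWORD=example \
    -v app-data:/var/lib/postgresql/data postgres:latest

# The data outlives the container: remove it and reattach the volume
docker rm -f db
docker run -d --name db2 -e POSTGRES_PASSWORD=example \
    -v app-data:/var/lib/postgresql/data postgres:latest
```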
14. Explain the use and significance of the docker-compose tool.
Docker Compose, a command-line tool, facilitates multi-container Docker applications, using a YAML file to define their architecture and how they interconnect. This is incredibly useful for setting up multi-container environments and facilitates a "one command" startup for all relevant components. For instance, a web application might require a backend database, a message queue, and more. While you can launch these components individually, using docker-compose makes it a seamless single-command operation.
Core Advantages
Simplified Multi-Container Management: With one predefined configuration, launch and manage multi-container apps effortlessly.
Streamlined Environment Sharing: Consistent setups between teams and environments simplify testing, staging, and development.
Automatic Inter-Container Networking: Defines network configurations such as volume sharing and service linking without added commands.
Parallel Service Startup: Efficiently starts services in parallel, making boot-ups faster.
Core Components
Services: Containers that build off the same image, defined in the compose file. Each is an independent component (e.g., web server, database).
Volumes: For persistent data, decoupled from container lifespan. Useful for databases, among others.
Networks: Virtual networks for isolating different applications or services, keeping them separate or aiding in communication.
YAML Configuration Example
Here is the YAML configuration:
```yaml
version: '3.3'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - "/path/to/html:/usr/share/nginx/html"
    depends_on:
      - db
    networks:
      - backend

  db:
    image: postgres:latest
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: dbname
    volumes:
      - /my/own/datadir:/var/lib/postgresql/data
    networks:
      - backend

networks:
  backend:
    driver: bridge
```
Services: web and db are the components mentioned. They define an image to be used, port settings, volumes for data persistence, and dependency structures (like how web depends on db).
Volumes: The db service has a volume specified for persistent storage.
Networks: The web and db services are part of the backend network, defined at the bottom. This assures consistent networking, even when services get linked or containers restarted.
15. Can Docker containers running on the same host communicate with each other by default? If so, how?
Yes, Docker containers on the same host can communicate with each other by default. When you run a container, Docker attaches it to the default bridge network, and containers on the same bridge can reach one another through it.
Default Network Configuration
By default, Docker provides each container with its own network stack. The configuration includes:
IP Address: Obtained from the Docker network.
Network Interfaces: Namespaced within the container.
Default Docker Bridge Network
A Docker bridge network, such as docker0, serves as the default network type. Containers on the same bridge network can communicate with each other by IP address; automatic name-based resolution is only provided on user-defined networks.
Custom Networks
Containers can also be part of user-defined bridge networks or other network types. In such configurations, containers belonging to the same network can communicate with each other.
Configuring Communication
Direct container-to-container communication is straightforward. Once a container knows the other's IP address, it can initiate communication.
Here are two key methods to configure container communication:
1. By Container IP
```
docker inspect -f '{{.NetworkSettings.IPAddress}}' <container_name_or_id>
```
2. By Container Name
Containers within the same Docker network can reach each other by their names. Use docker network inspect to see container IP addresses and ensure proper network setup.
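A quick sketch demonstrating name-based communication on a user-defined network (the container and network names are illustrative):
```
docker network create app-net
docker run -d --name web --network app-net nginx:latest
docker run --rm --network app-net alpine ping -c 1 web
```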
# Docker Interview Questions
Docker is getting a lot of traction in the industry because of its performance-friendly and universally reproducible architecture, while providing the following four cornerstones of modern application development: autonomy, decentralization, parallelism & isolation.
Below are top 50 interview questions for candidates who want to prepare on Docker Container Technology:
# What are 5 similarities between Docker & Virtual Machine?
Docker is not quite like a VM. It uses the host kernel & can’t boot a different operating system. Below are 5 similarities between Docker & Virtual Machine:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/Picture1.png)
# How is Docker different from Virtual Machine?
Figure: Docker Vs VM
Below are list of 6 difference between Docker container & VM:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview2.png)
# What is the difference between Container Networking & VM Networking?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-3.png)
# Is it possible to run multiple process inside Docker container?
Yes, you can run multiple processes inside a Docker container, though this approach is discouraged for most use cases. For maximum efficiency and isolation, each container should address one specific area of concern. However, if you need to run multiple services within a single container, you can use tools like supervisor.
Supervisor is a moderately heavy-weight approach that requires you to package supervisord and its configuration in your image (or base your image on one that includes supervisord), along with the different applications it manages. Then you start supervisord, which manages your processes for you.
Example: Here is a Dockerfile using this approach (reconstructed as a sketch below, after the image), which assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-4.png)
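For reference, a minimal sketch of such a Dockerfile (assuming supervisord.conf, my_first_process, and my_second_process exist in the build context, as stated above):
```
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor && apt-get clean
RUN mkdir -p /var/log/supervisor
# These files are assumed to exist next to the Dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
```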
# Does Docker run on Linux, macOS and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64). Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
# What is DockerHub?
DockerHub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
# What is Dockerfile?
Docker builds images automatically by reading the instructions from a text file called Dockerfile. It contains all commands, in order, needed to build a given image. A Dockerfile adheres to a specific format and set of instructions which you can find here.
# How is Dockerfile different from Docker Compose?
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image, whereas Docker Compose is a tool for defining and running multi-container Docker applications. Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running in one command: docker-compose up.
Docker Compose uses the Dockerfile if you add a build section to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
# Can I use JSON instead of YAML for my Docker Compose file?
Yes. YAML is a superset of JSON, so any JSON file should be valid YAML. To use a JSON file with Compose, specify the filename to use, for example:
```
docker-compose -f docker-compose.json up
```
# How to create Docker container?
We can use a Docker image to create a Docker container by using the below command:
```
$ docker run -t -i <image_name> <command>
```
This command will create and start a container. If you want to verify the list of all containers with their status on a host, use the below command:
```
$ docker ps -a
```
# What is maximum number of container you can run per host?
This really depends on your environment. The size of your applications and the amount of available resources (e.g., CPU) will affect the number of containers that can be run in your environment. Containers, unfortunately, are not magical. They can’t create new CPU from scratch. They do, however, provide a more efficient way of utilizing your resources. The containers themselves are super lightweight (remember: shared OS vs. individual OS per container) and only last as long as the process they are running.
# Is it possible to have my own private Docker registry?
Yes, it is possible today using Docker's own open-source registry server. If you want to use a third-party tool, see Portus.
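A minimal sketch of running a private registry locally (the port and names are illustrative):
```
$ docker run -d -p 5000:5000 --name registry registry:2

# Tag an image for the local registry and push it
$ docker tag ubuntu:latest localhost:5000/my-ubuntu
$ docker push localhost:5000/my-ubuntu
```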
# Does Docker container package up the entire OS?
Docker containers do not package up the OS. They package up the applications with everything that the application needs to run. The engine is installed on top of the OS running on a host. Containers share the OS kernel allowing a single host to run multiple containers.
# Describe how many ways are available to configure Docker daemon?
There are two ways to configure the Docker daemon:
- Using a JSON configuration file.
This is the preferred option, since it keeps all configurations in a single place.
- Using flags when starting dockerd.
You can use both of these options together as long as you don’t specify the same option both as a flag and in the JSON file. If that happens, the Docker daemon won’t start and prints an error message.
```
$ dockerd --debug --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem \
  --host tcp://:2376
```
# Can you list reasons why Container Networking is so important?
Below are the key reasons why we need container networking:
- Containers need to talk to the external world.
- The external world needs to reach containers to use the services they provide.
- Containers need to talk to the host machine.
- Inter-container connectivity is needed within the same host and across hosts.
- Services provided by containers should be discovered automatically.
- Traffic should be load-balanced between different containers in a service.
- Secure multi-tenant services must be provided.
# What does CNM refer to? What are its components?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-5.png)
CNM refers to the Container Networking Model. The Container Network Model (CNM) is a standard or specification from Docker, Inc. that forms the basis of container networking in a Docker environment. It is Docker’s approach to providing container networking with support for multiple network drivers. The CNM provides the following contract between networks and containers:
- All containers on the same network can communicate freely with each other
- Multiple networks are the way to segment traffic between containers and should be supported by all drivers
- Multiple endpoints per container are the way to join a container to multiple networks
- An endpoint is added to a network sandbox to provide it with network connectivity
The major components of the CNM are:
- Network,
- Sandbox and
- Endpoint.
Sandbox is a generic term that refers to OS-specific technologies used to isolate network stacks on a Docker host. Docker on Linux uses kernel namespaces to provide this sandbox functionality. Network “stacks” inside of sandboxes include interfaces, routing tables, DNS, etc. A network, in CNM terms, is one or more endpoints that can communicate. All endpoints on the same network can communicate with each other. Endpoints on different networks cannot communicate without external routing.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-6.png)
# What are different types of Docker Networking drivers?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-7.png)
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality. Below is a snapshot of the differences between the various Docker networking drivers.
Below are details of Docker networking drivers:
Bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
Host: For standalone containers, removes network isolation between the container and the Docker host, and uses the host’s networking directly. For swarm services, the host network is only available on Docker 17.06 and higher.
Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
MacVLAN: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
None: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
# What features are possible only under Docker Enterprise Edition in comparison to Docker Community Edition?
The following two features are only possible when using Docker EE and managing your Docker services using Universal Control Plane (UCP):
The HTTP routing mesh allows you to share the same network IP address and port among multiple services. UCP routes the traffic to the appropriate service using the combination of hostname and port, as requested from the client.
Session stickiness allows you to specify information in the HTTP header which UCP uses to route subsequent requests to the same service task, for applications which require stateful sessions.
# How is Docker Bridge network different from traditional Linux bridge?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-8.png)
In terms of networking, a bridge network is a Link Layer device which forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine’s kernel.
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
# How to create a user-defined Bridge network?
To create a user-defined bridge network, one can use the docker network create command -
```$ docker network create mynet```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-9.png)
You can specify the subnet, the IP address range, the gateway, and other options. See the docker network create reference or the output of docker network create --help for details.
# How to delete a user-defined Bridge network?
Use the docker network rm command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first.
```$ docker network rm mynet```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-10.png)
# How to connect Docker container to user-defined bridge network?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-11.png)
When you create a new container, you can specify one or more --network flags. This example connects an Nginx container to the my-net network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.
```
$ docker create --name my-nginx \
--network my-net \
--publish 8080:80 \
nginx:latest
```
To connect a running container to an existing user-defined bridge, use the docker network connect command. The following command connects an already-running my-nginx container to an already-existing my-net network:
```
$ docker network connect my-net my-nginx
```
# Does Docker support IPv6?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-12.png)
Yes, Docker supports IPv6. IPv6 networking is only supported on Docker daemons running on Linux hosts. Support for IPv6 addresses has existed since the Docker Engine 1.5 release. To enable IPv6 support in the Docker daemon, edit ```/etc/docker/daemon.json``` and set the ipv6 key to true.
```
{
"ipv6": true
}
```
Ensure that you reload the Docker configuration file.
```
$ systemctl reload docker
```
You can now create networks with the `--ipv6` flag and assign containers IPv6 addresses using the `--ip6` flag.
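For example (the subnet and names are illustrative):
```
$ docker network create --ipv6 --subnet="2001:db8:1::/64" ip6net
$ docker run --rm --network ip6net alpine ip -6 addr
```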
# Does Docker Compose file format support IPv6 protocol?
Yes.
# How is overlay network different from bridge network?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-13.png)
Bridge networks connect two network segments while creating a single aggregate network, hence the name bridge.
Overlay networks are usually used to create a virtual network between two separate hosts; virtual, since the network is built on top of an existing network.
Bridge networks cater to a single host, while overlay networks span multiple hosts.
# What networks are affected when you join a Docker host to an existing Swarm?
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
# How shall you disable the networking stack on a container?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-14.png)
If you want to completely disable the networking stack on a container, you can use the --network none flag when starting the container. Within the container, only the loopback device is created. The following example illustrates this.
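For instance (the output line is representative, not exact):
```
$ docker run --rm --network none alpine ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN ...
```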
# How can one create MacVLAN network for Docker container?
To create a Macvlan network which bridges with a given physical network interface, one can use `--driver macvlan` (or `-d macvlan`) with the docker network create command. You also need to specify the parent, which is the interface the traffic will physically go through on the Docker host.
```
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 collabnet
```
# Is it possible to exclude IP address from being used in MacVLAN network?
If you need to exclude IP addresses from being used in the Macvlan network, such as when a given IP address is already in use, use ```--aux-addresses```:
```
$ docker network create -d macvlan \
--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
--aux-address="my-router=192.168.32.129" \
-o parent=eth0 collabnet32
```
# Do I lose my data when the container exits?
Not at all! Any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.
# Does Docker Enterprise Edition support Kubernetes?
Yes, Docker Enterprise Edition (EE) supports Kubernetes. EE 2.0 allows users to choose either Kubernetes or Swarm at the orchestration layer.
# What is Docker Swarm?
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
# What is `--memory-swap` flag?
`--memory-swap` is a modifier flag that only has meaning if `--memory` is also set. Using swap allows the container to write excess memory requirements to disk when the container has exhausted all the RAM available to it. There is a performance penalty for applications that swap memory to disk often.
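For example, the following (illustrative) command limits a container to 512 MB of RAM plus 512 MB of swap, since `--memory-swap` sets the total of memory plus swap:
```
$ docker run -d --memory=512m --memory-swap=1g nginx:latest
```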
# Can you explain different volume mount types available in Docker?
There are three mount types available in Docker:
- Volumes are stored in a part of the host filesystem which is managed by Docker (`/var/lib/docker/volumes/` on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
- Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
- tmpfs mounts are stored in the host system’s memory only, and are never written to the host system’s filesystem.
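A short sketch of all three mount types using the `--mount` syntax (the paths and names are illustrative):
```
# Named volume, managed by Docker
$ docker run -d --mount type=volume,source=myvol,target=/data nginx

# Bind mount of an arbitrary host path
$ docker run -d --mount type=bind,source=/srv/config,target=/etc/app nginx

# tmpfs mount, kept in memory only (Linux hosts)
$ docker run -d --mount type=tmpfs,target=/scratch nginx
```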
# How to share data among DockerHost?
There are two main ways to achieve this when developing your applications. One is to add logic to your application to store files on a cloud object storage system like Amazon S3. Another is to create volumes with a driver that supports writing files to an external storage system like NFS or Amazon S3.
Volume drivers allow you to abstract the underlying storage system from the application logic. For example, if your services use a volume with an NFS driver, you can update the services to use a different driver, as an example to store data in the cloud, without changing the application logic.
# How to Backup, Restore, or Migrate data volumes under Docker container?
Steps to Backup a container
1) Launch a new container and mount the volume from the dbstore container
2) Mount a local host directory as /backup
3) Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
`$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata`
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
`$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash`
Then un-tar the backup file in the new container's data volume:
```
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
```
# How to Configure Automated Builds on DockerHub
You can build your images automatically from a build context stored in a repository. A build context is a Dockerfile and any files at a specific location. For an automated build, the build context is a repository containing a Dockerfile.
# How to configure the default logging driver under Docker?
To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows server hosts. The default logging driver is json-file.
# Why do my services take 10 seconds to recreate or stop?
Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to the container to forcefully kill it. If you are waiting for this timeout, it means that your containers aren’t shutting down when they receive the SIGTERM signal.
# How do I run multiple copies of a Compose file on the same host?
Compose uses the project name to create unique identifiers for all of a project’s containers and other resources. To run multiple copies of a project, set a custom project name using the `-p` command line option or the COMPOSE_PROJECT_NAME environment variable, as shown below.
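For example, to run two independent copies of the same project (the project names are illustrative):
```
$ docker-compose -p projectone up -d
$ COMPOSE_PROJECT_NAME=projecttwo docker-compose up -d
```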
# What’s the difference between up, run, and start under Docker Compose?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
# What is Docker Trusted Registry?
Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You install it behind your firewall so that you can securely store and manage the Docker images you use in your applications.
# How to declare default environment variables under Docker Compose?
Compose supports declaring default environment variables in an environment file named .env placed in the folder where the docker-compose command is executed (current working directory).
Example: The below example demonstrates how to declare default environment variables for Docker Compose.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-16.png)
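A minimal sketch consistent with the screenshots (the TAG variable name is an illustrative assumption):
```
# .env (in the directory where docker-compose is run)
TAG=v3.4

# docker-compose.yml
version: '3'
services:
  web:
    image: "alpine:${TAG}"
```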
When you run docker-compose up, the web service defined above uses the image alpine:v3.4. You can verify this with the `docker-compose config` command which prints your resolved application config to the terminal:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-17.png)
# Can you list out ways to share Compose configurations between files and projects under Docker Compose?
Compose supports two methods of sharing common configuration:
1. Extending an entire Compose file by using multiple Compose files
2. Extending individual services with the extends field
# What is the role of .dockerignore file?
To understand the role of the .dockerignore file, let us take a practical example. You may have noticed that if you put a Dockerfile in your home directory and launch a docker build, you will see a message about uploading context. Right? This means docker creates a .tar with all the files in your home directory and all its subdirectories, and uploads this tar to the docker daemon. If you have some huge files, this may take a long time.
In order to avoid this, you might need to create a specific directory where you put your Dockerfile along with everything needed for your build. It then becomes necessary to tell docker to ignore some files during the build. Hence, you list in .dockerignore all the files not needed for your build.
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY.
# What is the purpose of EXPOSE command in Dockerfile?
When writing your Dockerfiles, the instruction EXPOSE tells Docker the running container listens on specific network ports. This acts as a kind of port mapping documentation that can then be used when publishing the ports.
`EXPOSE <port> [<port>/<protocol>...]`
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-18.png)
You can also specify this within a docker run command, such as:
`docker run --expose=1234 my_app`
Please note that EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen, you need to publish the ports.
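A small sketch tying EXPOSE to publishing (the image and ports are illustrative):
```
# Dockerfile: document the port the app listens on
FROM nginx:latest
EXPOSE 80

# At run time, EXPOSE alone does not publish the port; map it explicitly:
# docker run -d -p 8080:80 my_app
```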
# How is ENTRYPOINT instruction under Dockerfile different from RUN instruction?
RUN executes at image build time and commits its result as a new image layer, whereas ENTRYPOINT configures what runs when the container starts: ENTRYPOINT is meant to provide the executable, while CMD passes the default arguments to that executable.
To understand it clearly, let us consider the below Dockerfile:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-19.png)
If you try building this Docker image using the `docker build` command -
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-21.png)
Let us run this image without any argument.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-22.png)
Let's run it passing a command line argument
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-23.png)
This clearly states that ENTRYPOINT is meant to provide the executable while CMD is to pass the default arguments to the executable.
# Why Build cache in Docker is so important?
If the objects on the file system that Docker is about to produce are unchanged between builds, reusing a cache of a previous build on the host is a great time-saver. It makes building a new container really, really fast. None of those file structures have to be created and written to disk this time — the reference to them is sufficient to locate and reuse the previously built structures.
# Why Docker Monitoring is necessary?
- Monitoring helps to identify issues proactively, which helps to avoid system outages.
- The monitoring time-series data provides insights to fine-tune applications for better performance and robustness.
- With full monitoring in place, changes can be rolled out safely, as issues will be caught early on and resolved quickly before they become the root cause of an outage.
- Change is constant in container-based environments, and monitoring indirectly captures the impact of those changes.
# Difference between Windows Containers and Hyper-V Containers
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-24.png)
Below is the architecture laid out by Microsoft for Windows Containers and Hyper-V Containers.
Here are a few of the differences between them:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-25.png)
# What are main difference between Swarm & Kubernetes?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production using an internal cluster management system called Borg (sometimes referred to as Omega). On the other hand, a Swarm cluster consists of Docker Engine deployed on multiple nodes. Manager nodes perform orchestration and cluster management. Worker nodes receive and execute tasks.
Below are the major list of differences between Docker Swarm & Kubernetes:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-26.png)
In a Swarm cluster, applications are deployed in the form of services (or “microservices”), and Docker Compose is the tool most commonly used to deploy the app. In Kubernetes, applications are deployed as a combination of pods, deployments, and services (or “microservices”).
Auto-scaling is not available in Docker Swarm (classical or swarm mode). Under K8s, auto-scaling is available: it uses a simple number-of-pods target which is defined declaratively using deployments, and a CPU-utilization-per-pod target is also available.
Docker Swarm supports rolling updates: at rollout time, you can apply rolling updates to services, and the Swarm manager lets you control the delay between service deployment to different sets of nodes, updating only one task at a time. Under Kubernetes, the deployment controller supports both “rolling-update” and “recreate” strategies; rolling updates can specify the maximum number of pods unavailable or the maximum number running during the process.
Under Docker Swarm Mode, the node joining a Docker Swarm cluster creates an overlay network for services that span all of the hosts in the Swarm and a host only Docker bridge network for containers.
By default, nodes in the Swarm cluster encrypt overlay control and management traffic between themselves. Users can choose to encrypt container data traffic when creating an overlay network by themselves.
Under K8s, the networking model is a flat network, enabling all pods to communicate with one another. Network policies specify how pods communicate with each other. The flat network is typically implemented as an overlay.
Docker Swarm health checks are limited to services. If a container backing the service does not come up (running state), a new container is kicked off.
Users can embed health-check functionality into their Docker images using the HEALTHCHECK instruction (see the sketch after this comparison).
Under K8s, health checks are of two kinds: liveness (is the app responsive?) and readiness (is the app responsive, but busy preparing and not yet able to serve?).
Out-of-the-box K8S provides a basic logging mechanism to pull aggregate logs for a set of containers that make up a pod.
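A minimal HEALTHCHECK sketch (it assumes curl is available inside the image):
```
FROM nginx:latest
# Mark the container unhealthy if the web server stops answering
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```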
# Is it possible to run Kubernetes on Docker EE 2.0 Platform?
Yes, it is possible to run Kubernetes under Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it’s deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions.
# Can you use Docker Compose to build up Swarm/Kubernetes Cluster?
Yes, one can deploy a stack on Kubernetes with docker stack deploy command, the docker-compose.yml file, and the name of the stack.
Example:
```
$ docker stack deploy --compose-file /path/to/docker-compose.yml mystack
$ docker stack services mystack
```
You can see the services deployed with the kubectl get command:
```
$ kubectl get svc,po,deploy
```
# What is 'docker stack deploy' command meant for?
The ‘docker stack deploy’ is a command to deploy a new stack or update an existing stack. A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file for Docker Compose but with a few extensions.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-27.png)
# List down major components of Docker EE 2.0?
Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service(CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and Cloud providers. It is tightly integrated to the underlying infrastructure to provide a native, easy to install experience and an optimized Docker environment.
Docker EE 2.0 GA consists of 3 major components which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.
- Universal Control Plane 3.0.0 (application and cluster management) – Deploys applications from images, by managing orchestrators, like Kubernetes and Swarm. UCP is designed for high availability (HA). You can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.
- Docker Trusted Registry 2.5.0 – The production-grade image storage solution from Docker.
- EE Engine 17.06.2 – The commercially supported Docker engine for creating images and running them in Docker containers.
# Explain the concept of HA under Swarm Mode?
HA refers to High Availability. High Availability is a feature where you have multiple instances of your applications running in parallel to handle increased load or failures. These two paradigms fit perfectly into Docker Swarm, the built-in orchestrator that comes with Docker. Deploying your applications like this will improve your uptime which translates to happy users.
For creating a highly available service in Docker Swarm, we need to deploy a Docker service to the swarm with the nginx image. This can be done by using the docker service create command as shown below.
```
$ docker service create --name nginx --publish 80:80 nginx
```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-29.png)
# Can you explain what is Routing Mesh under Docker Swarm Mode?
Routing Mesh is a feature which makes use of load-balancer concepts. It provides a global publish port for a given service. The routing mesh uses port-based service discovery and load balancing. So to reach any service from outside the cluster, you need to expose ports and reach them via the Published Port.
Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm. All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-30.png)
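For example, the following (illustrative) service is reachable on port 8080 of every node in the swarm, whether or not a task runs on that node:
```
$ docker service create --name web --replicas 2 \
    --publish published=8080,target=80 nginx
```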
# Is Routing Mesh a Load Balancer?
Routing Mesh is not a load balancer; it makes use of load-balancer concepts. It provides a global publish port for a given service. The routing mesh uses port-based service discovery and load balancing. So to reach any service from outside the cluster, you need to expose ports and reach them via the Published Port.
In simple words, if you had 3 swarm nodes, A, B and C, and a service which is running on nodes A and C and assigned node port 30000, this would be accessible via any of the 3 swarm nodes on port 30000 regardless of whether the service is running on that machine and automatically load balanced between the 2 running containers. I will talk about Routing Mesh in separate blog if time permits.
# Is it possible to run MacVLAN under Docker Swarm Mode? What features does it offer?
Starting Docker CE 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-31.png)
MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Its use cases include very low-latency applications and network designs that require containers to be on the same subnet as, and use IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.
# What are Docker Secrets and why are they necessary?
In Docker there are three key components to container security and together they result in inherently safer apps.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-32.png)
Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.
The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-33.png)
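A minimal sketch of creating and consuming a secret (the names and value are illustrative):
```
# Create a secret from stdin
$ echo "s3cr3t-password" | docker secret create db_password -

# Attach it to a service; inside the container the secret appears
# as a file at /run/secrets/db_password
$ docker service create --name app --secret db_password nginx:latest
```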
# Serverless Interview Questions
## What is Serverless and why is it important?
Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. You can build them for nearly any type of application or backend service, and everything required to run and scale your application with high availability is handled for you.
## Why use serverless?
Serverless enables you to build modern applications with increased agility and lower total cost of ownership. Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products which scale and that are reliable.
## What are the benefits of serverless?
- NO SERVER MANAGEMENT
There is no need to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
- FLEXIBLE SCALING
Your application can be scaled automatically or by adjusting its capacity through toggling the units of consumption (e.g. throughput, memory) rather than units of individual servers.
- PAY FOR VALUE
Pay for consistent throughput or execution duration rather than by server unit.
- AUTOMATED HIGH AVAILABILITY
Serverless provides built-in availability and fault tolerance. You don't need to architect for these capabilities since the services running the application provide them by default.
# Tell something about the AWS Serverless Platform?
AWS provides a set of fully managed services that you can use to build and run serverless applications. Serverless applications don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you. This allows you to focus on product innovation while enjoying faster time-to-market.
## COMPUTE
### AWS Lambda
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.
### AWS Fargate
AWS Fargate is a purpose-built serverless compute engine for containers. Fargate scales and manages the infrastructure required to run your containers.
## STORAGE
Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
## Amazon Elastic File System (Amazon EFS)
It provides simple, scalable, elastic file storage. It is built to elastically scale on demand, growing and shrinking automatically as you add and remove files.
## DATA STORES
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
## API PROXY
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It offers a comprehensive platform for API management. API Gateway allows you to process hundreds of thousands of concurrent API calls and handles traffic management, authorization and access control, monitoring, and API version management.
## APPLICATION INTEGRATION
Amazon SNS is a fully managed pub/sub messaging service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.
## ORCHESTRATION
AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application.
## ANALYTICS
Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.
## DEVELOPER TOOLING
AWS provides tools and services that aid developers in the serverless application development process. AWS and its partner ecosystem offer tools for continuous integration and delivery, testing, deployments, monitoring and diagnostics, SDKs, frameworks, and integrated development environment (IDE) plugins.
# DCA Mock questions
## 1. How can we limit the number of CPUs provided to a container?
a) Using `--cap-add CPU` .
b) Using` --cpuset-cpus` .
c) Using` --cpus `.
d) It is not possible to specify the number of CPUs; we have to use `--cpu-shares` and define the CPU slices.
## 2. How can we limit the amount of memory available to a container?
a) It is not possible to limit the amount of memory available to a container.
b) Using `--cap-drop MEM `.
c) Using `--memory` .
d) Using `--memory-reservation` .
## 3. What environment variables should be exported to start using a trusted environment with the Docker client?
a) `export DOCKER_TRUSTED_ENVIRONMENT=1 `
b) `export DOCKER_CONTENT_TRUST=1`
c) `export DOCKER_TRUST=1`
d) `export DOCKER_TRUSTED=1`
-------------------
#### What is Hypervisor?
A hypervisor is software that makes virtualization possible. It is also called a Virtual Machine Monitor. It divides the host system and allocates the resources to each divided virtual environment. You can basically have multiple OSes on a single host system. There are two types of hypervisors:
* **Type 1:** It’s also called Native Hypervisor or Bare metal Hypervisor. It runs directly on the underlying host system. It has direct access to your host’s system hardware and hence does not require a base server operating system.
* **Type 2:** This kind of hypervisor makes use of the underlying host operating system. It’s also called Hosted Hypervisor.
#### What is virtualization?
Virtualization is the process of creating a software-based, virtual version of something (compute, storage, servers, applications, etc.). These virtual versions or environments are created from a single physical hardware system. Virtualization lets you split one system into many different sections which act like separate, distinct individual systems. A software layer called a hypervisor makes this kind of splitting possible. The virtual environment created by the hypervisor is called a Virtual Machine.
#### What is containerization?
Usually, in the software development process, code developed on one machine might not work perfectly fine on any other machine because of the dependencies. This problem was solved by the containerization concept. Basically, an application that is being developed and deployed is bundled and wrapped together with all its configuration files and dependencies. This bundle is called a container. When you wish to run the application on another system, the container is deployed, which gives a bug-free environment as all the dependencies and libraries are wrapped together. The most famous containerization environments are Docker and Kubernetes.
#### Difference between virtualization and containerization
Containers provide an isolated environment for running the application. The entire user space is explicitly dedicated to the application. Any changes made inside the container are never reflected on the host or on other containers running on the same host. Containers are an abstraction of the application layer. Each container is a different application.
Whereas in virtualization, hypervisors provide an entire virtual machine to the guest (including the kernel). Virtual machines are an abstraction of the hardware layer, and each VM acts like a separate physical machine. VMs are more isolated but also heavier, and take a long time to start.
[https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-virtual-machine](https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-virtual-machine)
#### What is Docker?
Docker is a containerization platform which packages your application and all its dependencies together in the form of containers so as to ensure that your application works seamlessly in any environment, be it development, test or production.
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, etc.
It wraps basically anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
#### What is a Docker Container?
Docker containers include the application and all of its dependencies. It shares the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. Docker containers are basically runtime instances of Docker images.
#### What are Docker Images?
Docker image is the source of Docker container. In other words, Docker images are used to create containers. When a user runs a Docker image, an instance of a container is created. These docker images can be deployed to any Docker environment.
#### What is Docker Hub?
Docker images create docker containers. There has to be a registry where these docker images live. This registry is Docker Hub. Users can pick up images from Docker Hub and use them to create customized images and containers. Currently, Docker Hub is the world’s largest public repository of container images.
#### Explain Docker Architecture?
Docker works on a client-server architecture. The Docker client establishes communication with the Docker daemon. The Docker client and daemon can run on the same system, or a Docker client can be connected to a remote Docker daemon. The different types of Docker components in a Docker architecture are:
* Docker Client: This performs Docker build pull and run operations to establish communication with the Docker Host. The Docker command uses Docker API to call the queries to be run.
* Docker Host: This component contains Docker Daemon, Containers and its images. The images will be the kind of metadata for the applications which are containerized in the containers. The Docker Daemon establishes a connection with Registry.
* Registry: This component stores the Docker images. The public registries are Docker Hub and Docker Cloud, which can be used by anyone.
#### What is a Dockerfile?
Docker can build images automatically by reading the instructions from a file called Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Using docker build, users can create an automated build that executes several command-line instructions in succession.
#### Tell us something about Docker Compose.
Docker Compose is a tool driven by a YAML file which contains details about the services, networks, and volumes for setting up the Docker application. So, you can use Docker Compose to create separate containers, host them, and get them to communicate with each other. Each container exposes a port for communicating with other containers. A minimal sketch of such a file follows.
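As an illustration (the service names, images, and ports below are assumptions, not from the source), a two-service compose file and the command to bring it up might look like this:
```
$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"        # expose container port 80 on host port 8080
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
EOF
$ docker-compose up -d   # start both containers in the background
```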
#### What is Docker Swarm?
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
#### What is a Docker Namespace?
A namespace is one of the Linux kernel features and an important concept behind containers. Namespaces add a layer of isolation to containers. Docker provides various namespaces in order to stay portable and not affect the underlying host system. A few namespace types supported by Docker: PID, Mount, IPC, User, Network.
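You can observe this isolation from the host side by inspecting the namespaces of a container's main process (a sketch; the container name, PID, and listing are illustrative):
```
# Start a container and find its main process PID on the host
$ docker run -d --name demo nginx
$ docker inspect --format '{{.State.Pid}}' demo
# e.g. 4242

# That process's namespaces appear under /proc/<pid>/ns on the host
$ sudo ls /proc/4242/ns
# cgroup ipc mnt net pid user uts   (namespace IDs differ per container)
```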
#### What is the lifecycle of a Docker Container?
Docker containers have the following lifecycle:
1. Create a container
2. Run the container
3. Pause the container(optional)
4. Un-pause the container(optional)
5. Start the container
6. Stop the container
7. Restart the container
8. Kill the container
9. Destroy the container
#### What is Docker Machine?
Docker machine is a tool that lets you install Docker Engine on virtual hosts. These hosts can now be managed using the docker-machine commands. Docker machine also lets you provision Docker Swarm Clusters.
#### Tell some important Docker commands?
We run Docker commands using `$ docker <command>`
* `dockerd`: to launch the Docker daemon.
* `build`: to build an image file for Docker.
* `create`: to create a new container.
* `kill`: to kill a container.
* `commit`: to create a new image from container changes.
* `start`: to start one or more stopped containers.
Other important commands:
* `version`: to check the Docker Client and Docker Server versions.
* `info`: to get the number of containers running, paused, and stopped, the number of images, and a lot more.
* `login`: to log in to hub.docker.com.
* `pull`: to pull a base image from Docker Hub onto your local system.
* `run -it -d`: to create a docker container from an image;
`-d` means the container starts in detached mode.
* `ps`: to list all the running containers.
* `exec`: lets you get inside a container and work with it.
* `stop`: to stop a running container.
* `commit`: lets you take a container, edit it, and save it as an updated image.
* `push`: to push an image to Docker Hub.
* `rm`: to delete a stopped container.
* `rmi`: to delete an image from the local system.
* `system prune`: to remove all stopped containers, all unused networks, all dangling images, and all build caches. A short workflow tying these commands together is sketched below.
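A minimal end-to-end sketch of these commands in sequence (the image and container names are illustrative):
```
$ docker pull ubuntu                      # fetch a base image from Docker Hub
$ docker run -it -d --name demo ubuntu    # start a container in detached mode
$ docker ps                               # confirm it is running
$ docker exec -it demo bash               # open a shell inside the container
$ docker stop demo                        # stop the container
$ docker rm demo                          # remove the stopped container
```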
#### Suppose you have 3 containers running and out of these, you wish to access one of them. How do you access a running container?
The following command lets us access a running container:
$ docker exec -it <container_id_or_name> bash
The exec command lets you get inside a container and work with it.
#### Will you lose your data, when a docker container exits?
No, you won’t lose any data when a Docker container exits. Any data that your application writes to the container gets preserved on disk until you explicitly delete the container. The filesystem for the container persists even after the container halts.
#### Where all do you think Docker is being used?
When asked such a question, respond by talking about applications of Docker. Docker is being used in the following areas:
* **Simplifying configuration:** Docker lets you put your environment and configuration into code and deploy it.
* **Code Pipeline Management:** There are different systems used for development and production. As the code travels from development to testing to production, it goes through a difference in the environment. Docker helps in maintaining the code pipeline consistency.
* **Developer Productivity:** Using Docker for development gives us two things – We’re closer to production and development environment is built faster.
* **Application Isolation:** As containers are applications wrapped together with all dependencies, your apps are isolated. They can work by themselves on any hardware that supports Docker.
* **Debugging Capabilities:** Docker supports various debugging tools that are not specific to containers but work well with containers.
* **Multi-tenancy:** Docker lets you have multi-tenant applications avoiding redundancy in your codes and deployments.
* **Rapid Deployment:** Docker eliminates the need to boost an entire OS from scratch, reducing the deployment time.
#### How is Docker different from other containerization methods?
Docker containers are very easy to deploy on any cloud platform. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. You can even share containers with your applications.
#### Can I use JSON instead of YAML for my compose file in Docker?
You can use JSON instead of YAML for your compose file, to use JSON file with compose, specify the JSON filename to use, for eg:
`$ docker-compose -f docker-compose.json up`
#### How have you used Docker in your previous position?
Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used it with other tools like Puppet, Chef or Jenkins. If you have no past practical experience in Docker and instead have experience with other tools in a similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality.
#### How far do Docker containers scale? Are there any requirements for the same?
Large web deployments like Google and Twitter and platform providers such as Heroku and dotCloud all run on container technology. Containers can be scaled to hundreds of thousands or even millions running in parallel. Talking about requirements, containers require memory and an OS at all times, and a way to use this memory efficiently when scaled.
#### What platforms does docker run on?
This is a very straightforward question but can get tricky. Do some company research before going for the interview and find out how the company is using Docker. Make sure you mention the platform company is using in this answer.
Docker runs on various Linux distributions:
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
- openSUSE 12.3+
- CRUX 3.0+
It can also be used in production with Cloud platforms with the following services:
- Amazon EC2
- Amazon ECS
- Google Compute Engine
- Microsoft Azure
- Rackspace
#### Is there a way to identify the status of a Docker container?
There are **six possible states** a container can be at any given point –
1. Created
2. Running
3. Paused
4. Restarting
5. Exited
6. Dead.
Use the following command to check for docker state at any given point: `$ docker ps`
The above command lists down only running containers by default. To look for all containers, use the following command: `$ docker ps -a`
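You can also query the state of a single container directly (a sketch; the container name and output are illustrative):
```
$ docker inspect --format '{{.State.Status}}' <container_id_or_name>
# e.g. running
```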
#### Can you remove a paused container from Docker?
The answer is **no**. You cannot remove a paused container. The container has to be in the stopped state before it can be removed.
#### Can a container restart by itself?
No, a container will not restart by itself: the `--restart` policy defaults to `no`, so you would have to set a restart policy explicitly.
#### Is it better to directly remove the container using the rm command or stop the container followed by remove container?
It’s always better to stop the container first and then remove it:
`$ docker stop <container_id>`
`$ docker rm <container_id>`
Stopping the container first sends a SIGTERM signal to its processes, giving them enough time to clean up their tasks before removal. This method is considered good practice, avoiding unwanted errors.
#### Will cloud overtake the use of Containerization?
Docker containers are gaining popularity, but at the same time, cloud services are putting up a good fight. In my personal opinion, Docker will never be replaced by the cloud. Using cloud services together with containerization amplifies the benefits of both. Organizations need to take their requirements and dependencies into consideration and decide what’s best for them. Most companies have integrated Docker with the cloud, and this way they can make the best of both technologies.
#### How many containers can run per host?
There can be as many containers as you wish per host; Docker does not put any restrictions on it. But you need to consider that every container needs storage space, CPU, and memory, which the hardware must support. You also need to consider the application size. Containers are considered lightweight, but they are very dependent on the host OS.
#### Is it a good practice to run stateful applications on Docker? or What type of applications - Stateless or Stateful are more suitable for Docker Container?
The concept behind stateful applications is that they store their data on the local file system. If you decide to move the application to another machine, retrieving that data becomes painful.
I honestly would prefer not to run stateful applications on Docker.
#### Suppose you have an application that has many dependant services. Will docker compose wait for the current container to be ready to move to the running of the next service?
Yes. Docker Compose always runs in dependency order, determined by specifications like depends_on, links, volumes_from, etc. Note, however, that Compose only waits for a dependency's container to start, not for the application inside it to be "ready".
#### How will you monitor Docker in production?
Docker provides functionalities like docker stats and docker events to monitor docker in production. Docker stats provides CPU and memory usage of the container. Docker events provide information about the activities taking place in the docker daemon.
#### Is it a good practice to run Docker compose in production?
Yes, using docker compose in production is the best practical application of docker compose. When you define applications with compose, you can use this compose definition in various production stages like CI, staging, testing, etc.
#### What changes are expected in your docker compose file while moving it to production?
These are the changes you need to make to your compose file before migrating your application to the production environment (a sketch of such an override file follows this list):
* Remove volume bindings, so the code stays inside the container and cannot be changed from outside the container.
* Bind to different ports on the host.
* Specify a restart policy.
* Add extra services like a log aggregator.
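A sketch of how these changes might look as a production override file (the service name, port, and logging driver are assumptions for illustration):
```
$ cat > docker-compose.prod.yml <<'EOF'
version: "3"
services:
  web:
    # no volume bindings here, so the code stays inside the image
    ports:
      - "80:80"           # bind to the production port
    restart: always       # specify a restart policy
    logging:
      driver: syslog      # ship logs off to an aggregator
EOF
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```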
#### Have you used Kubernetes? If you have, which one would you prefer amongst Docker and Kubernetes?
Be very honest in such questions. If you have used Kubernetes, talk about your experience with Kubernetes and Docker Swarm. Point out the key areas where you thought docker swarm was more efficient and vice versa. Have a look at this blog for understanding differences between Docker and Kubernetes.
#### Are you aware of load balancing across containers and hosts? How does it work?
While using a docker service with multiple containers across different hosts, you come across the need to load balance the incoming traffic. Load balancing with HAProxy is basically used to balance the incoming traffic across the different available (healthy) containers. If one container crashes, another container should automatically start running and the traffic should be re-routed to this new running container. Load balancing and HAProxy work around this concept.
#### What is a Docker Registry?
A Docker Registry is a place where all the Docker images are stored; Docker Cloud and Docker Hub are public registries where these images can be hosted. Docker Hub is the default storage for Docker images. You can also set up your own registry as per your requirements. Docker Data Center (DDC), which includes DTR (Docker Trusted Registry), can also be used. Docker Store provides the feature of buying and selling Docker images.
#### How to build environment-agnostic systems with Docker?
#### When would you use ‘docker kill’ or ‘docker rm -f’?
#### How to link containers?
#### What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
#### What is the difference between CMD and ENTRYPOINT in a Dockerfile?
#### How do I transfer a Docker image from one machine to another one without using a repository, no matter private or public?
#### Do I lose my data when the Docker container exits?
#### What is Build Cache in Docker?
#### What is the difference between ‘docker run’ and ‘docker create’?
#### What’s the difference between a repository and a registry?
#### What is the default CPU limit set for a container?
#### Can you create containers without their own PID namespace?
#### Explain basic Docker usage workflow?
#### What is the difference between Docker Image and Layer?
#### Could you explain what is Emulation?
#### Should I use Vagrant or Docker for creating an isolated environment?
#### What is the difference between “expose” and “publish” in Docker?
#### Docker Compose vs. Dockerfile - which is better?
#### What exactly do you mean by “Dockerized node”? Can this node be on-premises or in the cloud?
#### How can we control the startup order of services in Docker compose?
#### How will you monitor Docker in production?
#### What happens if you add more than one CMD instruction to a Dockerfile?
#### When you limit the memory for a container, does it reserve (guarantee) the memory?
#### What is an orphan volume and how do you remove it?
#### How does virtualization work at a low level?
#### What is Paravirtualization?
#### How is Docker different from a virtual machine?
#### Is it possible to generate a Dockerfile from an image?
#### Can you explain the Dockerfile ONBUILD instruction?
#### Why did Docker jump from version 1.13 to 17.03?
#### How does Docker run containers in non-Linux systems?
#### How do containers work at a low level?
#### Name some limitations of containers vs. VMs
#### How to use Docker with multiple environments?
#### Why does Docker Compose not wait for a container to be ready before moving on to start the next service in dependency order?
References:
[https://www.edureka.co/blog/interview-questions/docker-interview-questions/](https://www.edureka.co/blog/interview-questions/docker-interview-questions/)
[https://www.educba.com/docker-interview-questions/](https://www.educba.com/docker-interview-questions/)
[https://www.fullstack.cafe/](https://www.fullstack.cafe/)
------------------------------------------------------------------
https://www.fullstack.cafe/interview-questions/devops --- IMPORTANT LINK FOR DEVOPS QUESTIONS
---------------------------------------------------------------------
Top Docker Interview Questions
Basic Docker Interview Questions
1. What is the Docker Container?
The Docker container helps applications run smoothly. Essentially, it’s a software unit that holds code and all its dependencies — system tools, libraries, settings — everything you need to run an application.
2. Explain the components of Docker Architecture.
The components of Docker architecture are described below:
Host: Docker Daemon, images, and containers fall under the host component
Client: This allows communication with the Docker Host
Registry: This component is used to store Docker Images. Docker Hub and Docker Cloud are public registries that anyone can use
3. Explain the Docker registry in detail.
Docker images are stored in the Docker registry, which acts as the default image storage. It is a critical storage area that is regularly maintained, as it holds container images. Docker Hub is the default public registry; another public registry is Docker Cloud.
4. Briefly explain the Docker container lifecycle.
The Docker container lifecycle entails the following:
Creating the container
Running the container
Pausing the container
Unpausing the container
Starting the container
Stopping the container
Restarting the container
Killing the container
Destroying the container
5. State some important Docker commands.
Some important Docker commands include the following:
Build: Builds an image file for Docker
Create: Creates a new container
Kill: Kills a container
Dockerd: Launches the Docker daemon
Commit: Creates a new image from container changes
6. What are namespaces?
Docker namespaces provide an isolated workspace in the form of a container. Docker creates namespaces for containers once they have been started, which offers a seclusion layer. Each container has its own unique namespace.
7. What is Docker Swarm?
Docker Swarm is an important tool used to cluster and schedule Docker containers. It makes it easy to create and maintain a swarm of Docker nodes and present them as a single virtual system.
8. How do you identify the status of a Docker container?
Use the following command to return the status of every Docker container:
`docker ps -a`
This command will return a list of all available Docker containers along with their status on the host. From the list, one can easily find the desired container to check its status.
9. What are Docker images and run commands?
Docker images are groups of files that enable instances to be created and run in distinct containers. Each instance is a separate process.
Images are created using the information required to run an executable application.
You can use the Docker run command to create and initialize a container instance using a Docker image. If an image is running, it can be linked to any number of instances (or containers).
10. What are Docker’s functionalities and applications?
The following are some of Docker’s functionalities and applications:
Easy configuration at the infrastructure level
Helps the developer concentrate exclusively on business logic as it reduces development time, thereby increasing productivity
Amplifies debugging capabilities
Allows an application to be isolated
Containerization decreases the need for multiple servers.
Facilitates rapid deployment at the OS level
11. What are Docker objects?
Docker images, services, and containers are termed Docker objects.
Image: Contains instructions to create a Docker container
Containers: A runnable instance of an image
Service: Allows containers to scale across multiple Docker daemons working together as a swarm
Other Docker objects include networks and volumes.
12. Which is more suitable for a Docker container: a stateless or stateful application?
Stateless applications are more suitable for Docker containers because we can create a single, reusable container for our application, which allows us to detach state configuration from the app and the container.
By doing this, we can run multiple instances of the same container with varying production parameters, etc.
So, stateless applications give us the freedom to use the same image in a range of scenarios, including various prod and test environments.
13. What is Dockerfile used for?
Dockerfile contains a range of instructions to build an image that can then be used to initialize a container. This text document includes every command you would enter at the command line to create an image.
14. Which networks are available by default in Docker?
Default available networks include the following (a usage example follows the list):
Bridge: Default network that containers will connect to if the network has not been otherwise specified
None: Connects to a container-specific network stack that doesn’t have a network interface
Host: Connects to the host’s network stack
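For instance, you can pick a network at run time with the --network flag (the images are illustrative):
```
$ docker run -d nginx                            # default: bridge network
$ docker run -d --network host nginx             # share the host's network stack
$ docker run -d --network none alpine sleep 300  # no external network interface
```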
15. How is Docker monitored in production?
When running Docker in production, essential statistics can be gathered and examined by using tools like Docker Stats and Docker Events.
Docker Stats can be called from within a container to return data relating to the container’s CPU and memory usage. This is similar to the Linux top command which can be used to examine all running processes and their current computational load.
Docker Events represent a list of commands that can be used to analyze any ongoing activities that are being processed by the Docker daemon. These events include attach, commit, rename, destroy, and die.
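Both tools are available directly from the CLI (a quick sketch):
```
$ docker stats --no-stream     # one-shot CPU/memory snapshot per container
$ docker events --since 30m    # daemon activity from the last 30 minutes
```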
16. Shed some light on Docker’s workflow.
Here’s a quick run through of Docker’s workflow:
Everything starts with the Dockerfile, the image’s source code
Once created, the Dockerfile helps build the container’s image, which is a compiled version of the Dockerfile
Afterwards, it is distributed through the registry and used to run containers
17. What is the difference between Docker run and Docker create?
If you use docker create, the container is created in a ‘stopped’ state, which allows you to store and output the container ID for later use. docker run, by contrast, creates and starts the container in one step. In either case, passing --cidfile FILE_NAME will refuse to overwrite an existing file.
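A short sketch of the difference in practice (the image and container names are illustrative):
```
$ docker create --name web nginx    # created, not running; prints the container ID
$ docker start web                  # now the container actually runs
$ docker run -d --name web2 nginx   # create + start in one step
```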
18. What is virtualization?
Initially, virtualization enabled the logical division of mainframe systems to enable several applications to run at the same time on a system. As technology progressed, the meaning of this term evolved to represent the running of multiple (and possibly varying) operating systems (OS) on an individual x86 system.
These days, this term is broadly used to refer to the process of running multiple OS on the same hardware. In this scenario, the primary OS serves as the admin, and guests must follow the pre-defined procedures for bootstrapping, loading the kernel, etc. This ensures greater security and prevents guest systems from obtaining full system access, which could lead to a data breach.
Here are the three types of virtualization:
Paravirtualization
Emulation
Container-based virtualization
19. What is the difference between a registry and a repository?
The Docker Registry hosts and distributes images and the Docker Hub is the default registry. The Docker repository (or repo) allows you to store a collection of Docker image versions. This means that images will have the same names, but their tags will vary to represent the different versions.
20. What are Docker Images, Docker Hub, and Docker File?
Docker images: These multi-layer files are used to create instances of a Docker container, and they are built using terminal command instructions or a pre-defined Dockerfile which contains each of these instructions. Using an image can speed up Docker build times due to caching at each step of the build sequence.
Docker hub: This is a service provided by Docker that can be used to find and share Docker images with others in a team. In the same way that GitHub is used to provide a distributed file store (with version control), Docker hub allows you to push and pull images, access private repos that store Docker images, and auto-build Docker images from GitHub or BitBucket repositories, before pushing these to Docker hub.
Dockerfile: This is a text document that is used to store build instructions for a Docker image. When run, Docker executes the commands to automatically build an image.
21. How do you check versions of the Docker Client and Docker Server?
The docker version [options] command allows us to do this. If we omit the options we simply receive all of the relevant version information about the client and server. This is the command:
$ docker version --format '{{.Server.Version}}'
22. What is the login procedure for Docker Repository?
To log in to the Docker repository, use:
docker login [OPTIONS] [SERVER]
To log in to a self-hosted (local) registry, just add the server name:
$ docker login localhost:8080
23. What are some basic Docker commands?
Some basic Docker commands are:
docker push: pushes a repository or image to a registry
docker run: runs a command in a new container
docker pull: pulls a repository or image from a registry
docker start: starts one or more containers
docker stop: stops one or more running containers
docker search: searches for an image in Docker hub
docker commit: commits a new image
24. How is a Docker container different from other containerization methods?
You can easily deploy Docker containers to any cloud platform. Docker also makes it possible to get ready-to-run containerized applications running more quickly, and to manage and deploy applications more easily. Containers can also be shared between applications, which other containerization methods do not offer as readily.
25. What platforms does Docker run on?
Docker runs on Windows (x86-64) and Linux (x86-64, ARM, s390x, ppc64le, and other CPU architectures).
26. What is the memory-swap flag?
The ‘memory-swap’ flag is a modifier that can be combined with the run command to give a container access to additional virtual memory when it has utilized all of its provisioned physical memory (RAM). This command requires that the ‘memory’ flag be preset when executing the run command.
For example: `--memory="256m" --memory-swap="512m"`. With this setup, a container is provisioned with 256MB of physical memory, and with an additional virtual swap space of 256MB (512m-256m).
27. Where are Docker volumes stored?
Volumes are stored in the Docker host filesystem: /var/lib/docker/volumes/. It is the most efficient way to ensure data persistence in Docker.
28. What is CNM?
CNM, or Container Network Model, formally defines the steps for the networking of containers, while also maintaining the abstraction used to support multiple network drivers. Sandbox, endpoint, and network are the three components.
29. What are the different kinds of mount types available in Docker?
The three types are:
Bind mounts: These can be stored anywhere on the host system
Volume mount: Stored in the host filesystem and Docker manages them
tmpfs mount: Stored in the host system's memory and they can never be written to the host's filesystem
30. What are Docker object labels?
This is a key-value pair stored as a string. We can apply metadata using labels, which can be used for images, containers, volumes, networks, local daemons, swarm nodes, and services. Every object should have a unique label and these are static for an object’s entire lifetime.
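Labels are set at creation time and can later be used for filtering (the key and value here are illustrative):
```
$ docker run -d --label env=staging nginx    # attach metadata at creation time
$ docker ps --filter "label=env=staging"     # find containers by that label
```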
Advanced Docker Interview Questions
31. List the steps in the deployment process for Dockerized Apps stored In a Git Repo.
The deployment process can vary as a function of the production environment, but the basic process will include the following:
Build the application with Docker Build
Test an image
Push a new image to the Docker registry
Inform the remote application server to obtain the new image from the Docker registry and then run the image
Utilize HTTP proxy for port-swapping
Stop any older containers
32. Explain how Docker is different from other container technologies.
Whilst Docker is a relatively new container technology, it has become one of the most adopted and popular. A product of the cloud computing era, Docker provides several features that were absent in older container offerings. One of Docker’s standout features is the ability to run on any type of infrastructure, both on local premises (on-prem) and in the cloud.
These days, Docker also allows the execution of more applications, the processing of these into packages, and their shipment to old servers. It can also serve as a repository for convenient containers and these can also be shared by your other applications. Finally, it is very well documented.
33. Will you lose your data if you were to exit the Docker Container?
There is no loss of data when you exit a container since it is written to the disk. This continues until the container has been entirely deleted. The container’s file system is also persisted after it is halted.
34. Can JSON be used instead of YAML for the compose file in Docker? If yes, how?
Yes, it can. To do this, specify the filename as follows:
`docker-compose -f docker-compose.json up`
35. What are CMD and ENTRYPOINT in a Dockerfile?
Both of these instructions specify the commands and parameters used by a container during execution. They follow certain rules (a runnable sketch follows this list):
The Dockerfile should specify at least one command from CMD or ENTRYPOINT
ENTRYPOINT provides the command that determines how a container will be executed. Its arguments and parameters cannot be overridden when using the run command at the command line (CLI)
CMD specifies the default command (or the default arguments to ENTRYPOINT) executed when starting a container. Any parameters passed to this command can be overridden at the CLI if the user provides an alternative argument with the run command
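A runnable sketch of how the two interact (the base image, tag, and echo command are illustrative):
```
$ cat > Dockerfile <<'EOF'
FROM alpine
# Fixed executable; not overridden by arguments to docker run
ENTRYPOINT ["echo"]
# Default argument; overridden by arguments to docker run
CMD ["hello"]
EOF
$ docker build -t demo-entrypoint .
$ docker run demo-entrypoint            # prints: hello
$ docker run demo-entrypoint goodbye    # prints: goodbye (CMD overridden)
```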
36. Explain the process to run an application inside a Linux Container using Docker.
In order to run an application within a Linux container using Docker, follow these steps:
Install Docker, then run it
Pull the Fedora 21 (Linux OS) base image from Docker hub
Using the Docker base image, load your application
Run a container in interactive mode by using the new image
Check the system containers
Start or stop the Docker container
Remove the image or the container
37. What is a Hypervisor?
A hypervisor is a form of management software that can be used to create and run virtual machines (VM). This enables a host system to accommodate multiple guest VMs and these will share computational resources including RAM and CPU. By allocating the required computational resources to each VM, it is possible to reduce physical hardware requirements and the maintenance of these.
The two types of hypervisors are:
Type I: A lightweight hypervisor that runs directly on the host system’s hardware (bare metal)
Type II: This runs like any other piece of software within an existing OS
38. Explain containerization.
Docker containers contain things including code, system tools, libraries, runtime, and the settings required for application execution, with the app existing on top of the Docker engine layer. This concept is called containerization.
39. What is the main difference between containerization and virtualization?
Virtualization allows you to run multiple OSes on a single physical server. Containerization, by contrast, takes place on a single OS: all containers share that host’s kernel, whether the host is a physical server or a VM.
40. Is it possible for a container to restart by itself?
Yes, this is possible. However, Docker provides a range of behaviors that can be configured to control this:
No: If a container fails or stops, it will not be restarted (this is the default policy)
On-failure: If the container exits due to a failure (a non-zero exit code), the container will restart
Unless-stopped: The container will always restart unless the user stops it manually
Always: Regardless of the error or reason for stopping, a container will always restart
The command is:
$ docker run -dit --restart [no|on-failure|always|unless-stopped] IMAGE
41. Is it possible for the cloud to overtake the use of Containerization?
With a question like this, it can only be answered subjectively, or with an opinion. As of today, many companies have come to rely on the combination of cloud computing and containerization to achieve a highly performant system design.
42. What are the various possible states of the Docker Container?
The different states of the Docker container are:
Created: The container has been created, but is not active
Restarting: The container is in the process of being restarted
Running: The container is running
Paused: The container’s processes have been paused
Exited: The container was run and it completed its process
Dead: The container is not functioning or it was partly removed, but is still using resources. The daemon will try to remove it when it is restarted
43. Explain container orchestration. Why do we need it?
Container orchestration reduces the need for developers to manually manage container-related activities via automation of the following:
Provisioning and deploying containers
Network load-balancing
Allocation of resources for current containers
Health monitoring for containers and hosts
Container scaling
Failure prevention via the transfer of a container to a new host if the current host becomes unresponsive or lacks computational resources
44. What are some Advanced Docker Commands?
Some advanced Docker commands are:
docker --version: See what version of Docker is installed. Syntax: docker --version
docker ps: Lists all the docker containers that are running along with container details. Syntax: docker ps
docker ps -a: Lists all the containers, including those that are running, stopped, or exited, along with relevant details. Syntax: docker ps -a
docker exec: Access a container and run commands inside that container. Syntax: docker exec [options]
docker build: Builds an image from a Dockerfile. Syntax: docker build [options] path|URL
docker rm: Removes a container with the given container id. Syntax: docker rm <container_id>
docker rmi: Removes a docker image with the given image id. Syntax: docker rmi <image_id>
docker info: Returns detailed information about Docker installed on the system including number of images; containers running, paused, or stopped; server version; volume; runtimes; kernel version; and total memory etc. Syntax: docker info
docker cp: Copies a file from a docker container to the local system. Syntax: docker cp <container_id>:<src_path> <dest_path>
docker history: Displays the history of the docker image with the given image name. Syntax: docker history <image_name>
45. What are the commands to control Docker with Systemd?
The Docker daemon can be started using systemd:
$ sudo systemctl start docker
Use systemctl to start services. If this is not available, use the service command:
$ sudo service docker start
To enable and disable a Daemon during boot time, use:
$ sudo systemctl enable docker
$ sudo systemctl disable docker
To modify Daemon options, use:
$ sudo systemctl edit docker
To view logs related to Docker service:
$ journalctl -u docker
46. What is the process of scaling your Docker containers?
The docker-compose command can be used to horizontally scale the number of Docker containers that you require by starting the required number of additional instances. The syntax to achieve this is:
$ docker-compose --file docker-compose-run-srvr.yml scale <service_name>=<n>
In the command above, we pass the docker-compose-run-srvr.yml YAML file and a service name, and we provide an integer value ‘n’ to represent the number of instances we require to scale horizontally. Lastly, we can check the details of these new containers with:
$ docker ps -a
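Note that newer Compose releases fold scaling into up via the --scale option (the service name and count here are illustrative):
```
$ docker-compose up -d --scale web=3   # run three instances of the 'web' service
```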
47. What are the major actions in the Docker container life cycle?
Here are the steps:
Create container: docker create --name <container_name> <image>
Run docker container: docker run -it -d --name <container_name> <image> bash
Pause container: docker pause <container>
Unpause container: docker unpause <container>
Start container: docker start <container>
Stop container: docker stop <container>
Restart container: docker restart <container>
Kill container: docker kill <container>
Destroy container: docker rm <container>
48. What is the Docker Trusted Registry?
The Docker Trusted Registry is used to store and manage Docker images. It is available both locally and on the cloud. It can also be used during CI/CD processes for building, delivering, and running applications. It is readily available, efficient, and has built-in access control.
49. What is the purpose of Docker_Host?
DOCKER_HOST specifies the URL or Unix socket path used to connect to the Docker API. The default value is: unix:///var/run/docker.sock
To connect to a remote host, provide the TCP connection string, e.g.: tcp://192.0.1.20:3230
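For example (reusing the illustrative address above), pointing the client at a remote daemon:
```
$ export DOCKER_HOST=tcp://192.0.1.20:3230   # subsequent docker commands talk to the remote daemon
$ docker ps                                  # now lists containers on the remote host
```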
50. Is it possible to run multiple copies of a Compose file on the same host? If so, How?
Yes, this can be done by using the docker-compose up command with a YAML file that has been written to configure the application’s services, giving each copy its own Compose project name (an example follows the steps below).
To do this, complete the following steps:
Create a Dockerfile to configure an app environment, thus allowing it to be replicated anywhere
Create a docker-compose.yml (YAML) file to define the services for the application
Run the docker-compose up command to create and start the app
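Compose’s -p/--project-name flag keeps the copies separate so container names do not collide (the project names here are illustrative):
```
$ docker-compose -p app_blue up -d    # first copy of the stack
$ docker-compose -p app_green up -d   # second, independent copy on the same host
```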
51. What is Docker Push?
This allows us to push or share a local Docker image or a repository to a central repository.
Start Preparing for Your Docker Interview Today
That ends our list of the top 50 Docker interview questions. Use them to prepare for your interview — and don’t forget to get some practical experience under your belt.
Frequently Asked Questions
1. What are Docker Swarm Interview Questions?
Docker Swarm is a container orchestration tool that lets you manage multiple containers across multiple host machines. The interview questions above include several related questions.
2. What is the Main Use of Docker?
Docker is a containerization platform that allows developers to build applications in containers. These are standalone executables that can run on any Operating System.
-------------------------------------------------------------------------------
Docker Lifecycle Commands
Below are some Docker Lifecycle commands for every Docker Container
docker create
docker run
docker pause
docker unpause
docker stop
docker start
docker restart
docker attach
docker wait
docker rm
docker kill
Docker Basic Commands
Below are some commonly used Docker Basic commands you will use frequently.
1) docker – To check all available Docker Commands
Example:
docker [option] [command] [arguments]
2) docker version – To show Docker version
Example:
docker version
3) docker info – Displays system wide information
Example
docker info
4) docker pull – To pull the docker Images from Docker Hub Repository
Example:
docker pull ubuntu
5) docker build – To build Docker Image from Dockerfile
Example:
docker build <path_to_build_context>
If you want to include files and folders from the current directory, use:
docker build .
6) docker run – Run a container from a docker image.
Example:
docker run -i -t ubuntu /bin/bash
-i – To start an interactive session.
-t – Allocates a tty and attaches stdin and stdout.
ubuntu– Docker image that is used to create the container.
bash (or /bin/bash)– command that is running inside the Ubuntu container.
Note: The container will stop when you leave it with the command exit. If you would like to have a container running in the background, you just need to add the -d option to the command.
OR
To exit from the docker container, type CTRL + P + Q. This will leave the container running in the background and provide you with the host system console.
Now Run Docker Container in background.
docker run -i -t --name=Ubuntu-Linux -d ubuntu /bin/bash
7) docker commit – To commit changes in a container and create a new Docker Image
Example:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Let’s commit to an existing docker container (023828e786e0) and create a new Docker Image (ubuntu-apache):
docker commit 023828e786e0 ubuntu-apache
8) docker ps – List all the running containers. Add the -a flag to list all the containers.
Examples:
docker ps
To list all Docker Containers including stopped
docker ps -a
9) docker start – To start a docker container
docker start <container_id>
10) docker stop – To stop a docker container
docker stop <container_id>
11) docker logs – To view logs for a Docker container
$ docker logs <container_id>
12) docker rename – To rename a Docker container
docker rename <container_id> <new_name>
13) docker rm – To remove the Docker container; stop it first and then remove it
docker rm <container_id>
Run below command to remove all stopped containers
sudo docker rm -f $(sudo docker ps -a -q)
To remove untagged docker images
sudo docker images | grep none | awk '{ print $3; }' | xargs sudo docker rmi
We have covered docker basic commands which you should know.
#1. Docker Image Commands
Docker Image is an application template, including the binaries and libraries needed to run a Docker container.
Below are some commonly used Docker Image commands while working with Docker.
1) docker build – To build Docker Image from Dockerfile
Example:
docker build <path_to_build_context>
If you want to include files and folders from the current directory, use:
docker build .
To add a tag for the Docker Image:
docker build -t fosstechnix/nodejs:1.0 .
2. docker pull – To pull Docker Image from Docker Hub Registry
docker pull [OPTIONS] Image_Name[:TAG]
Examples:
docker pull ubuntu
Docker pull Image from Private Registry
First log in to your Docker private registry with its URL, username, and password:
docker login docker-fosstechnix.com --username=USERNAME
Pull the Docker Image
docker pull docker-fosstechnix.com/nodejs
To specify a Tag while pulling Docker Image
docker pull "repoName"/"image_name"[:tag]
To pull docker image from private IP based repository
docker image pull 192.168.100.50:5000/ubuntu:latest
3. docker tag – To add Tag to Docker Image
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Examples:
docker tag nodejsdocker fosstechnix/nodejsdocker:v1.0
4. docker images – To list Docker Images
Docker command to list images
Examples
docker images
You can also use the `docker image` command with the `ls` argument:
docker image ls
To list out locally stored Docker Images:
docker image list
To filter Docker Images list
docker images --filter "<key>=<value>"
Below are some --filter options:
dangling – images that are not used
label – list the Docker Images to which you added a label
before – list the Docker Images created before a specific image or time
since – list the Docker Images created since another image’s creation
reference – list Docker Images with a matching name or tag
To list all Docker Images, including intermediate (dangling) images:
docker images -a
5. docker push – To push Docker Images to a repository
docker push [OPTIONS] NAME[:TAG]
Examples:
docker tag nodejs my_docker_registry.com/nodejs:v1.0
docker push my_docker_registry.com/nodejs:v1.0
To push a Docker Image to a Private Registry:
docker login docker-fosstechnix.com --username=USERNAME
docker tag ubuntu docker.fosstechnix.com/linux/ubuntu:latest
docker push docker.fosstechnix.com/linux/ubuntu:latest
6. docker history – To show history of Docker Image
docker image history [OPTIONS] IMAGE
Examples:
docker history --no-trunc <image>
Get full history in tabular format:
docker history --format "table {{.ID}}, {{.CreatedBy}}" --no-trunc <image>
7. docker inspect – To show complete information in JSON format
docker inspect IMAGE_ID OR CONTAINER_ID
8. docker save – To save an existing Docker Image
docker save ubuntu_image:tag | gzip > ubuntu_image.tar.gz
9. docker import – Create Docker Image from Tarball
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
Examples:
docker import ./ubuntu_image.tar.gz ubuntu:latest
This will create an “ubuntu:latest” image from the compressed imported image.
Import a Docker container as an image from file
cat docker_container.tar.gz | docker import - my_image:tag
10. docker export – To export existing Docker container
docker export container_id | gzip > new_container.tar.gz
11. docker load – To load Docker Image from file or archives
docker load < ubuntu_image.tar.gz
12. docker rmi– To remove docker images
docker rmi IMAGE_ID
To remove all Docker Images
docker rmi $(docker images -q)
To remove All Docker Images forcefully
docker rmi -f $(docker images -q)
To clean up unused docker images, build cache, etc.:
docker system prune
We have covered Docker Basic commands for Docker Image.
#2. Docker Container Commands
Docker container is a virtualized runtime environment created from an image.
Below are list of Docker container commands which will be useful for you.
1) docker start – To start a Docker container
docker start [OPTIONS] CONTAINER [CONTAINER]
Examples:
docker start container_id
If you want to attach to the container and see the output of your command:
docker start -ai container_id
2) docker stop – To stop a running docker container
docker stop [-t|--time[=10]] CONTAINER [CONTAINER]
-t, --time=10 – seconds to wait before stopping the container
Examples:
docker stop container_id
To stop all running containers:
docker stop $(docker ps -q)
To stop all Docker containers
docker stop $(docker ps -a -q)
3) docker restart – To restart docker container
docker restart container_id
4) docker pause – To pause a running container
docker pause container_id
Docker pause vs stop?
docker pause suspends all processes in the specified container.
docker stop sends SIGTERM to the container’s main process (followed by SIGKILL after a grace period) and stops the container.
5) docker unpause – To unpause a running container
Syntax:
docker unpause CONTAINER [CONTAINER…]
Example:
docker unpause CONTAINER_ID
6) docker run – Creates a docker container from docker image
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The docker container run command is used to create a docker container from docker images. Below are examples of docker run with various options.
To run Docker container in foreground
docker run ubuntu
You will see the output of the ubuntu docker container on your terminal. To stop the container, type CTRL + C.
To run a Docker container in detached mode (in the background), or to keep the docker container running when the terminal exits, use the option “-d”:
docker container run -d ubuntu
To run Docker container under specific name
Syntax:
docker container run --name [CONTAINER_NAME] [DOCKER_IMAGE]
Example:
docker run -i -t --name=Ubuntu-Linux -d ubuntu
To run a Docker container in interactive mode, so you can enter commands inside the docker container while it is running:
docker run -i -t --name=Ubuntu-Linux -d ubuntu /bin/bash
Expose Docker Container ports and access Apache outside
docker run -p 81:80 -itd 4e5021d210f6
-p – Exposes the host port to container port
To verify Apache is accessible from outside, open your favourite browser and type your system’s IP address followed by port 81:
http://SystemIP:81/
7) docker ps – To list Docker containers
To verify Docker Container running in background
docker ps
To list all Docker Containers including stopped
docker ps -a
8) docker exec – To Access the shell of Docker Container
Access the shell of Docker Container that runs in the background mode using “CONTAINER ID”
docker exec -i -t 023828e786e0 /bin/bash
Access the shell of Docker Container that runs in the background mode using “NAMES”
docker exec -i -t Ubuntu-Linux /bin/bash
Type “exit” to exit from the Docker container shell.
To update the System Packages of Docker Container
docker exec 023828e786e0 apt-get update
Let’s install Apache2 in docker container
docker exec 023828e786e0 apt-get install apache2 -y
To check apache2 service status inside Docker Container
docker exec 023828e786e0 service apache2 status
Start Apache2 service inside Docker Container
docker exec 023828e786e0 service apache2 start
9) docker logs – To view logs of Docker container
To view Logs for a Docker Container
docker logs <container_id>
10) docker rename – To rename Docker container
To rename Docker Container
docker rename <container_id> <new_name>
11) docker rm – To remove Docker container
To remove the Docker Container, stop it first and then remove it
docker rm <container_id>
Run below command to remove all stopped containers
sudo docker rm -f $(sudo docker ps -a -q)
To remove untagged docker images
sudo docker images | grep none | awk '{ print $3; }' | xargs sudo docker rmi
12) docker inspect – Docker container info command
Syntax:
docker inspect [OPTIONS] NAME|ID [NAME|ID...]
OR
docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
Example:
docker inspect 023828e786e0
To get Docker container IP Address
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $DOCKER_CONTAINER_NAME
To get list of all ports binds to Docker container
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $DOCKER_INSTANCE_NAME
13) docker attach – Attach Terminal to Running container
Docker attach command is used to attach your terminal to running container to control Input/Output/Error operations.
Syntax:
docker attach [OPTIONS] CONTAINER_ID / CONTAINER_NAME
Example:
docker attach nodejs
14) docker kill – To stop and remove Docker containers
Syntax:
docker kill [OPTIONS] CONTAINER [CONTAINER…]
Example:
To stop all docker containers
docker kill $(docker ps -q)
To remove all docker containers
docker rm $(docker ps -a -q)
To remove all docker containers forcefully
docker rm -f $(docker ps -a -q)
15) docker cp – To copy files or folders between a container and the local filesystem.
Syntax:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Examples:
To copy directory from Docker host to container
sudo docker cp ./directory_path 023828e786e0:/home/ubuntu
To copy directory from docker container to host
sudo docker cp 023828e786e0:/etc/apache2/sites-enabled .
To copy files from Docker container to host
sudo docker cp 023828e786e0:/etc/apache2 .
To copy files from Host to Docker container
Syntax:
docker cp SOURCE_HOST_PATH CONTAINER:DESTINATION_PATH
Example:
sudo docker cp ./test.fosstechnix.com.conf 023828e786e0:/etc/apache2/sites-enabled
We have covered Docker Basic commands for Docker Container.
#3. Docker Compose Commands
Docker Compose is used to run multiple containers as a single application.
Below are some commonly used docker compose command line you should know
1) docker-compose build – To build docker compose file
Example:
docker-compose build
2) docker-compose up – To run docker compose file
docker-compose up
To run docker compose in background
docker-compose up -d
To start containers without recreating ones that already exist:
docker-compose up --no-recreate
3) docker-compose images – To list docker images declared inside the docker compose file
docker-compose images
4) docker-compose start – To start containers which are already created using docker compose file
docker-compose start
What is the difference between docker-compose up and docker-compose start?
docker-compose up – creates new docker containers which are defined in the docker-compose file.
docker-compose start – used only to restart docker containers which were already created using the docker-compose file; it never creates new containers.
5) docker-compose run – To run a one-off command for a service defined in docker-compose.yml
docker-compose run nodejs
6) docker-compose rm – To remove docker containers from docker compose
docker-compose rm -f
To auto remove docker containers with docker-compose.yml
docker-compose up && docker-compose rm -fsv
To stop docker containers and then remove
docker-compose stop && docker-compose rm -f
To stop a specific docker container from docker compose
docker-compose stop nodejs
To remove a specific docker container defined in docker compose:
docker-compose rm -f nodejs
To remove volume which is attached to docker container
docker-compose rm -v
7) docker-compose ps – To check docker container status from docker compose
docker-compose ps
We have covered docker basic commands for Docker Compose.
#4. Docker Volume Commands
1) docker volume create – To create docker volume
docker volume create <volume_name>
2) docker volume inspect – To inspect docker volume
docker volume inspect <volume_name>
3) docker volume rm – To remove docker volume
First remove the docker container:
docker rm -f $(docker ps -aq)
then remove the docker volume:
docker volume rm <volume_name>
To delete all docker volumes at once
docker volume prune
We have covered Docker Basic Commands for Docker Volume.
#5. Docker Networking Commands
1) docker network create – To create docker network
docker network create --driver=bridge --subnet=192.168.100.0/24 br0
2) docker network ls – To list docker networks
docker network ls
3) docker network inspect – To view network configuration details
docker network inspect bridge
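Once created, containers can be attached to a network at startup or after the fact (the network and container names are illustrative):
```
$ docker run -d --network br0 --name web nginx    # join the network at startup
$ docker network connect br0 existing_container   # attach an already-running container
$ docker network disconnect br0 existing_container
```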
We have covered Docker Basic Commands for Docker Networking.
#6. Docker Logs and Monitoring Commands
1) docker ps -a – To show running and stopped containers
docker ps -a
2) docker logs – To show Docker container logs
docker logs <container_id>
3) docker events – To get all events of docker container
docker events
4) docker top – To show running processes in docker container
docker top <container_id>
5) docker stats – To check CPU, memory and network I/O usage
docker stats
6) docker port – To show docker container’s public ports
docker port <container_id>
We have covered Docker Basic Commands for Docker Logs and Monitoring.
#7. Docker Prune Commands
Using Docker prune, we can delete unused or dangling containers, images, volumes, and networks.
To clean all resources which are dangling or not associated with any docker containers
docker system prune
To also remove stopped containers and all unused images (not just dangling ones):
docker system prune -a
To remove Dangling Docker images
docker image prune
docker image prune -a
To remove all unused docker containers
docker container prune
To remove all unused docker volumes
docker volume prune
To remove all unused docker networks
docker network prune
We have covered Docker Basic Commands for Docker Prune.
#8. Docker Hub Commands
To search docker image
docker search ubuntu
To pull image from docker hub
docker pull ubuntu
Pushing Docker Image to Docker Hub Repository
If you want to push a docker image to the Docker Hub registry, first log in to https://hub.docker.com with your ID and password using the command line
docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: fosstechnix
Password:
Login Succeeded
Now push Docker Image to Docker Hub Repository
docker push nodejsdocker
Error: denied: requested access to the resource is denied
If you get the above error while pushing a docker image to the Docker Hub repository for the first time, tag the image with your Docker Hub username and try again
docker tag nodejsdocker fosstechnix/nodejsdocker
Push the Docker Image again
docker push fosstechnix/nodejsdocker
To logout from Docker Hub Registry
docker logout
We have covered Docker Basic Commands for Docker Hub/repository.
Conclusion
In this article, we have covered Docker lifecycle commands, Docker basic commands with examples, Docker image commands, Docker container commands, Docker Compose commands, Docker volume commands, Docker networking commands, Docker logs and monitoring commands, and Docker Hub commands.
Related Articles
Docker Installation
How to Install Docker on Ubuntu 19.10/18.04/16.04 LTS
How to Install Docker on Windows 10
Dockerfile Instructions
Dockerfile Instructions with Examples
Docker Image
How to Create Docker Image for Node JS Application [2 Steps]
Shell Script to Build Docker Image [2 Steps]
Docker Commands
81 Docker Command Cheat Sheet in Image and PDF Format
-----------------------------------------------------------------
# Docker Interview Questions
Docker is getting a lot of traction in the industry because of its performance-savvy and universal replicability architecture, while providing the following four cornerstones of modern application development: autonomy, decentralization, parallelism & isolation.
Below are top 50 interview questions for candidates who want to prepare on Docker Container Technology:
# What are 5 similarities between Docker & Virtual Machine?
Docker is not quite like a VM. It uses the host kernel and can't boot a different operating system. Below are 5 similarities between Docker & Virtual Machine:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/Picture1.png)
# How is Docker different from Virtual Machine?
Figure: Docker Vs VM
Below is a list of 6 differences between a Docker container & a VM:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview2.png)
# What is the difference between Container Networking & VM Networking?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-3.png)
# Is it possible to run multiple processes inside a Docker container?
Yes, you can run multiple processes inside a Docker container, but this approach is discouraged for most use cases. For maximum efficiency and isolation, each container should address one specific area of concern. However, if you need to run multiple services within a single container, you can use tools like supervisor.
Supervisor is a moderately heavy-weight approach that requires you to package supervisord and its configuration in your image (or base your image on one that includes supervisord), along with the different applications it manages. Then you start supervisord, which manages your processes for you.
Example: Here is a Dockerfile using this approach, which assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as the Dockerfile.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-4.png)
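In case the screenshot above does not render, here is a minimal sketch of such a Dockerfile, assuming supervisord.conf and the two program files sit next to the Dockerfile:
```
FROM ubuntu:latest
# install supervisor, which will manage both processes
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# copy in the supervisor config and the two managed programs
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process /usr/local/bin/my_first_process
COPY my_second_process /usr/local/bin/my_second_process
# supervisord runs in the foreground and starts both processes
CMD ["/usr/bin/supervisord"]
```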
# Does Docker run on Linux, macOS and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64). Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
# What is DockerHub?
DockerHub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
# What is Dockerfile?
Docker builds images automatically by reading the instructions from a text file called Dockerfile. It contains all commands, in order, needed to build a given image. A Dockerfile adheres to a specific format and set of instructions which you can find here.
# How is Dockerfile different from Docker Compose?
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image, whereas Docker Compose is a tool for defining and running multi-container Docker applications. Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running with one command: docker-compose up.
Docker Compose uses a Dockerfile if you add a build key to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
# Can I use JSON instead of YAML for my Docker Compose file?
Yes. Yaml is a superset of json so any JSON file should be valid Yaml. To use a JSON file with Compose, specify the filename to use, for example:
docker-compose -f docker-compose.json up
# How to create Docker container?
We can use Docker image to create Docker container by using the below command:
```
$ docker run -t -i <image_name> <command>
```
This command will create and start a container. If you want to verify the list of all containers with their status on a host, use the below command:
```
$ docker ps -a
```
# What is the maximum number of containers you can run per host?
This really depends on your environment. The size of your applications as well as the amount of available resources (e.g. CPU and memory) will affect the number of containers that can be run in your environment. Containers unfortunately are not magical. They can't create new CPU from scratch. They do, however, provide a more efficient way of utilizing your resources. The containers themselves are super lightweight (remember, shared OS vs individual OS per container) and only last as long as the process they are running.
# Is it possible to have my own private Docker registry?
Yes, it is possible today using Docker's own registry server. If you want to use a 3rd-party tool, see Portus.
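The open-source registry ships as a Docker image itself, so a private registry can be started locally (a sketch; port 5000 is the conventional choice):
```
# run a local private registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# tag an image for the local registry and push it there
docker tag ubuntu localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu
```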
# Does Docker container package up the entire OS?
Docker containers do not package up the OS. They package up the applications with everything that the application needs to run. The engine is installed on top of the OS running on a host. Containers share the OS kernel allowing a single host to run multiple containers.
# How many ways are available to configure the Docker daemon?
There are two ways to configure the Docker daemon:
- Using a JSON configuration file.
This is the preferred option, since it keeps all configurations in a single place.
- Using flags when starting dockerd.
You can use both of these options together as long as you don't specify the same option both as a flag and in the JSON file. If that happens, the Docker daemon won't start and will print an error message.
```
$ dockerd --debug --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem \
  --host tcp://:2376
```
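The same settings expressed in the JSON configuration file (/etc/docker/daemon.json on Linux) would look roughly like this (a sketch mirroring the flags above; remember not to repeat an option in both places):
```
{
  "debug": true,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://:2376"]
}
```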
# Can you list reasons why Container Networking is so important?
Below are the main reasons why we need container networking:
- Containers need to talk to external world.
- Reach Containers from external world to use the service that Containers provides.
- Allows Containers to talk to host machine.
- Inter-container connectivity in same host and across hosts.
- Discover services provided by containers automatically.
- Load balance traffic between different containers in a service.
- Provide secure multi-tenant services.
# What does CNM refer to? What are its components?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-5.png)
CNM refers to the Container Networking Model. The Container Network Model (CNM) is a standard or specification from Docker, Inc. that forms the basis of container networking in a Docker environment. It is Docker's approach to providing container networking with support for multiple network drivers. The CNM provides the following contract between networks and containers:
- All containers on the same network can communicate freely with each other
- Multiple networks are the way to segment traffic between containers and should be supported by all drivers
- Multiple endpoints per container are the way to join a container to multiple networks
- An endpoint is added to a network sandbox to provide it with network connectivity
The major components of the CNM are:
- Network,
- Sandbox and
- Endpoint.
Sandbox is a generic term that refers to OS-specific technologies used to isolate network stacks on a Docker host. Docker on Linux uses kernel namespaces to provide this sandbox functionality. Network stacks inside of sandboxes include interfaces, routing tables, DNS, etc. A network in CNM terms is one or more endpoints that can communicate. All endpoints on the same network can communicate with each other. Endpoints on different networks cannot communicate without external routing.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-6.png)
# What are different types of Docker Networking drivers?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-7.png)
Docker’s networking subsystem is pluggable using drivers. Several drivers exist by default, and provide core networking functionality. Below is the snapshot of difference of various Docker networking drivers.
Below are details of Docker networking drivers:
Bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
Host: For standalone containers, remove network isolation between the container and the Docker host, and use the host's networking directly. For swarm services, host networking is available on Docker 17.06 and higher.
Overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
MacVLAN: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
None: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
# What features are possible only under Docker Enterprise Edition in comparison to Docker Community Edition?
The following two features are only possible when using Docker EE and managing your Docker services using Universal Control Plane (UCP):
The HTTP routing mesh allows you to share the same network IP address and port among multiple services. UCP routes the traffic to the appropriate service using the combination of hostname and port, as requested from the client.
Session stickiness allows you to specify information in the HTTP header which UCP uses to route subsequent requests to the same service task, for applications which require stateful sessions.
# How is Docker Bridge network different from traditional Linux bridge ?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-8.png)
In terms of networking, a bridge network is a Link Layer device which forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine’s kernel.
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
# How to create a user-defined Bridge network ?
To create a user-defined bridge network, one can use the docker network create command -
```$ docker network create mynet```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-9.png)
You can specify the subnet, the IP address range, the gateway, and other options. See the docker network create reference or the output of docker network create --help for details.
# How to delete a user-defined Bridge network ?
Use the docker network rm command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first.
```$ docker network rm mynet```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-10.png)
# How to connect Docker container to user-defined bridge network?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-11.png)
When you create a new container, you can specify one or more --network flags. This example connects an Nginx container to the my-net network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.
```
$ docker create --name my-nginx \
--network my-net \
--publish 8080:80 \
nginx:latest
```
To connect a running container to an existing user-defined bridge, use the docker network connect command. The following command connects an already-running my-nginx container to an already-existing my-net network:
```
$ docker network connect my-net my-nginx
```
# Does Docker support IPv6?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-12.png)
Yes, Docker supports IPv6. IPv6 networking is only supported on Docker daemons running on Linux hosts. Support for IPv6 addresses has been available since the Docker Engine 1.5 release. To enable IPv6 support in the Docker daemon, you need to edit `/etc/docker/daemon.json` and set the ipv6 key to true.
```
{
"ipv6": true
}
```
Ensure that you reload the Docker configuration file.
```
$ systemctl reload docker
```
You can now create networks with the `--ipv6` flag and assign containers IPv6 addresses using the `--ip6` flag.
# Does Docker Compose file format support IPv6 protocol?
Yes.
# How is overlay network different from bridge network?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-13.png)
Bridge networks connect two networks while creating a single aggregate network from multiple communication networks or network segments, hence the name bridge.
Overlay networks are usually used to create a virtual network between two separate hosts. Virtual, since the network is built over an existing network.
Bridge networks can cater to single host, while overlay networks are for multiple hosts.
# What networks are affected when you join a Docker host to an existing Swarm?
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
# How shall you disable the networking stack on a container?
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-14.png)
If you want to completely disable the networking stack on a container, you can use the --network none flag when starting the container. Within the container, only the loopback device is created. The following example illustrates this.
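A quick way to see this (a sketch):
```
# start a throwaway container with networking disabled and list its
# interfaces; only the loopback device (lo) should appear
docker run --rm --network none alpine ip link show
```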
# How can one create MacVLAN network for Docker container?
To create a Macvlan network which bridges with a given physical network interface, one can use --driver macvlan with the docker network create command. You also need to specify the parent, which is the interface the traffic will physically go through on the Docker host.
```
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 collabnet
```
# Is it possible to exclude IP address from being used in MacVLAN network?
If you need to exclude IP addresses from being used in the Macvlan network, such as when a given IP address is already in use, use ```--aux-addresses```:
```
$ docker network create -d macvlan \
--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
--aux-address="my-router=192.168.32.129" \
-o parent=eth0 collabnet32
```
# Do I lose my data when the container exits?
Not at all! Any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.
# Does Docker Enterprise Edition support Kubernetes?
Yes, Docker Enterprise Edition (EE) supports Kubernetes. EE 2.0 allows users to choose either Kubernetes or Swarm at the orchestration layer.
# What is Docker Swarm?
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
# What is `--memory-swap` flag?
`--memory-swap` is a modifier flag that only has meaning if `--memory` is also set. Using swap allows the container to write excess memory requirements to disk when the container has exhausted all the RAM that is available to it. There is a performance penalty for applications that swap memory to disk often.
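For instance (a sketch; the limits are arbitrary):
```
# cap the container at 512 MB of RAM plus 512 MB of swap (1 GB total)
docker run -d --memory=512m --memory-swap=1g nginx
```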
# Can you explain different volume mount types available in Docker?
There are three mount types available in Docker (command-line examples follow the list):
- Volumes are stored in a part of the host filesystem which is managed by Docker (`/var/lib/docker/volumes/` on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
- Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
- tmpfs mounts are stored in the host system's memory only, and are never written to the host system's filesystem.
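On the command line the three types look like this (a sketch; the names and paths are arbitrary):
```
# named volume, managed by Docker under /var/lib/docker/volumes/
docker run -d --mount type=volume,source=myvol,target=/app nginx
# bind mount, any path on the host filesystem
docker run -d --mount type=bind,source=/srv/config,target=/app/config nginx
# tmpfs mount, kept in host memory only (Linux)
docker run -d --mount type=tmpfs,target=/app/cache nginx
```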
# How to share data among DockerHost?
There are two ways to achieve this when developing your applications. One is to add logic to your application to store files on a cloud object storage system like Amazon S3. Another is to create volumes with a driver that supports writing files to an external storage system like NFS or Amazon S3.
Volume drivers allow you to abstract the underlying storage system from the application logic. For example, if your services use a volume with an NFS driver, you can update the services to use a different driver, as an example to store data in the cloud, without changing the application logic.
# How to Backup, Restore, or Migrate data volumes under Docker container?
Steps to Backup a container
1) Launch a new container and mount the volume from the dbstore container
2) Mount a local host directory as /backup
3) Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
`$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata`
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
`$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash`
Then un-tar the backup file in the new container's data volume:
```
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
```
# How to Configure Automated Builds on DockerHub
You can build your images automatically from a build context stored in a repository. A build context is a Dockerfile and any files at a specific location. For an automated build, the build context is a repository containing a Dockerfile.
# How to configure the default logging driver under Docker?
To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows server hosts. The default logging driver is json-file.
# Why do my services take 10 seconds to recreate or stop?
Compose stop attempts to stop a container by sending a SIGTERM. It then waits for a default timeout of 10 seconds. After the timeout, a SIGKILL is sent to the container to forcefully kill it. If you are waiting for this timeout, it means that your containers aren’t shutting down when they receive the SIGTERM signal.
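If your application simply needs longer to shut down cleanly, the timeout can be raised instead (a sketch):
```
# wait up to 30 seconds before sending SIGKILL
docker-compose stop -t 30
```
Compose also supports a per-service stop_grace_period option in the compose file for the same purpose.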
# How do I run multiple copies of a Compose file on the same host?
Compose uses the project name to create unique identifiers for all of a project's containers and other resources. To run multiple copies of a project, set a custom project name using the `-p` command-line option or the COMPOSE_PROJECT_NAME environment variable.
# What’s the difference between up, run, and start under Docker Compose?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “ad hoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
# What is Docker Trusted Registry?
Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You install it behind your firewall so that you can securely store and manage the Docker images you use in your applications.
# How to declare default environment variables under Docker Compose?
Compose supports declaring default environment variables in an environment file named .env placed in the folder where the docker-compose command is executed (current working directory).
Example: The below example demonstrates how to declare default environment variables for Docker Compose.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-16.png)
When you run docker-compose up, the web service defined above uses the image alpine:v3.4. You can verify this with the `docker-compose config` command which prints your resolved application config to the terminal:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-17.png)
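In case the screenshots above do not render, a minimal sketch of the same setup (the file contents are assumptions consistent with the description):
```
# .env (in the folder where docker-compose is run)
TAG=v3.4

# docker-compose.yml
version: "3"
services:
  web:
    image: "alpine:${TAG}"
```
With these files, `docker-compose config` resolves the web service's image to alpine:v3.4.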
# Can you list out ways to share Compose configurations between files and projects under Docker Compose?
Compose supports two methods of sharing common configuration:
1. Extending an entire Compose file by using multiple Compose files
2. Extending individual services with the extends field
# What is the role of .dockerignore file?
To understand the role of .dockerignore file, let us take a practical example. You may have noticed that if you put a Dockerfile in your home directory and launch a docker build you will see a message uploading context. Right? This means docker creates a .tar with all the files in your home and in all the subdirectories, and uploads this tar to the docker daemon. If you have some huge files, this may take a long time.
In order to avoid this, you might need to create a specific directory, where you put your Dockerfile and everything needed for your build. It becomes necessary to tell docker to ignore some files during the build. Hence, you list in .dockerignore all the files not needed for your build.
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY.
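A typical .dockerignore might look like this (a sketch; the entries depend on your project):
```
# .dockerignore: excluded from the build context
.git
node_modules
*.log
tmp/
secrets.env
```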
# What is the purpose of EXPOSE command in Dockerfile?
When writing your Dockerfiles, the EXPOSE instruction tells Docker that the running container listens on specific network ports. This acts as a kind of port mapping documentation that can then be used when publishing the ports.
`EXPOSE <port> [<port>/<protocol>...]`
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-18.png)
You can also specify this within a docker run command, such as:
`docker run --expose=1234 my_app`
Please note that EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen you need to publish the ports.
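Publishing is done at run time with -p (a sketch, reusing the my_app image name from above):
```
# map host port 8080 to container port 80
docker run -d -p 8080:80 my_app
# or publish every EXPOSEd port to a random host port
docker run -d -P my_app
```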
# How is ENTRYPOINT instruction under Dockerfile different from CMD instruction?
ENTRYPOINT is meant to provide the executable while CMD is to pass the default arguments to the executable.
To understand it clearly, let us consider the below Dockerfile:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-19.png)
If you try building this Docker image using the `docker build` command:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-21.png)
Let us run this image without any argument.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-22.png)
Let's run it passing a command line argument
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-23.png)
This clearly states that ENTRYPOINT is meant to provide the executable while CMD is to pass the default arguments to the executable.
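In case the screenshots above do not render, a minimal sketch of the same idea (the echo example is an assumption):
```
FROM alpine
# ENTRYPOINT fixes the executable that always runs...
ENTRYPOINT ["echo", "Hello"]
# ...while CMD supplies a default argument that docker run can override
CMD ["world"]
```
Built with `docker build -t demo .`, running `docker run demo` prints "Hello world", while `docker run demo Docker` prints "Hello Docker": the CMD argument was overridden, the ENTRYPOINT was not.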
# Why Build cache in Docker is so important?
If the objects on the file system that Docker is about to produce are unchanged between builds, reusing a cache of a previous build on the host is a great time-saver. It makes building a new container really, really fast. None of those file structures have to be created and written to disk this time — the reference to them is sufficient to locate and reuse the previously built structures.
# Why Docker Monitoring is necessary?
- Monitoring helps to identify issues proactively, which helps to avoid system outages.
- The monitoring time-series data provides insights to fine-tune applications for better performance and robustness.
- With full monitoring in place, changes can be rolled out safely, as issues will be caught early on and resolved quickly, before they become the root cause of an outage.
- Change is inherent in container-based environments, and monitoring indirectly captures the impact of those changes.
# Difference between Windows Containers and Hyper-V Containers
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-24.png)
Below is the architecture laid out by Microsoft for Windows and Hyper-V Containers.
Here are a few of the differences between them:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-25.png)
# What are main difference between Swarm & Kubernetes?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was built by Google based on their experience running containers in production using an internal cluster management system called Borg (sometimes referred to as Omega). On the other hand, a Swarm cluster consists of Docker Engine deployed on multiple nodes. Manager nodes perform orchestration and cluster management. Worker nodes receive and execute tasks.
Below are the major list of differences between Docker Swarm & Kubernetes:
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-26.png)
- Application deployment: In a Swarm cluster, applications are deployed in the form of services (or “microservices”), and Docker Compose is the tool most often used to deploy them. Under K8s, applications are deployed as a combination of pods, deployments, and services (or “microservices”).
- Auto-scaling: Auto-scaling is not available in classical Docker Swarm or Swarm mode. Under K8s, auto-scaling is available: it uses a simple number-of-pods target defined declaratively using deployments, and a CPU-utilization-per-pod target is also available.
- Rolling updates: Docker Swarm supports rolling updates. At rollout time you can apply rolling updates to services; the Swarm manager lets you control the delay between service deployment to different sets of nodes, updating only 1 task at a time. Under K8s, the deployment controller supports both “rolling-update” and “recreate” strategies, and rolling updates can specify the maximum number of pods unavailable or the maximum number running during the process.
- Networking: Under Docker Swarm mode, a node joining a Swarm cluster creates an overlay network for services that spans all of the hosts in the Swarm, and a host-only Docker bridge network for containers. By default, nodes in the Swarm cluster encrypt overlay control and management traffic between themselves; users can choose to encrypt container data traffic when creating an overlay network. Under K8s, the networking model is a flat network, enabling all pods to communicate with one another. Network policies specify how pods communicate with each other; the flat network is typically implemented as an overlay.
- Health checks: Docker Swarm health checks are limited to services. If a container backing the service does not come up (running state), a new container is kicked off. Users can also embed health check functionality into their Docker images using the HEALTHCHECK instruction (see the sketch after this list). Under K8s, health checks are of two kinds: liveness (is the app responsive) and readiness (is the app responsive, but busy preparing and not yet able to serve).
- Logging: Out of the box, K8s provides a basic logging mechanism to pull aggregate logs for the set of containers that make up a pod.
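A minimal HEALTHCHECK sketch (the curl endpoint is an assumption, and curl must exist in the image):
```
# mark the container unhealthy if the web endpoint stops answering
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost/ || exit 1
```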
# Is it possible to run Kubernetes on Docker EE 2.0 Platform?
Yes, it is possible to run Kubernetes under Docker EE 2.0 platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it’s deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions.
# Can you use Docker Compose to build up Swarm/Kubernetes Cluster?
Yes, one can deploy a stack on Kubernetes with the docker stack deploy command, the docker-compose.yml file, and the name of the stack.
Example:
```
$ docker stack deploy --compose-file /path/to/docker-compose.yml mystack
$ docker stack services mystack
```
You can see the services deployed with the kubectl get command:
```
$ kubectl get svc,po,deploy
```
# What is 'docker stack deploy' command meant for?
The ‘docker stack deploy’ is a command to deploy a new stack or update an existing stack. A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file for Docker Compose but with a few extensions.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-27.png)
# List down major components of Docker EE 2.0?
Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. It is tightly integrated with the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.
Docker EE 2.0 GA consists of 3 major components which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.
- Universal Control Plane 3.0.0 (application and cluster management) – Deploys applications from images, by managing orchestrators like Kubernetes and Swarm. UCP is designed for high availability (HA). You can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.
- Docker Trusted Registry 2.5.0 – The production-grade image storage solution from Docker.
- EE Engine 17.06.2 – The commercially supported Docker engine for creating images and running them in Docker containers.
# Explain the concept of HA under Swarm Mode?
HA refers to High Availability. High Availability is a feature where you have multiple instances of your applications running in parallel to handle increased load or failures. This fits perfectly into Docker Swarm, the built-in orchestrator that comes with Docker. Deploying your applications like this will improve your uptime, which translates to happy users.
For creating a highly available service in Docker Swarm, we deploy a docker service to the swarm with the nginx image. This can be done using the docker service create command:
```
$ docker service create --name nginx --publish 80:80 nginx
```
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-29.png)
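High availability in practice means running more than one replica of the service (a sketch):
```
# run three replicas behind the swarm's ingress load balancer
docker service create --name nginx --replicas 3 --publish 80:80 nginx
# scale an existing service up or down
docker service scale nginx=5
```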
# Can you explain what is Routing Mesh under Docker Swarm Mode?
Routing Mesh is a feature which makes use of load balancer concepts. It provides a global publish port for a given service. The routing mesh uses port-based service discovery and load balancing. So to reach any service from outside the cluster, you need to expose ports and reach them via the published port.
Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm. All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-30.png)
# Is Routing Mesh a Load Balancer?
Routing Mesh is not a load balancer; it makes use of load-balancing concepts. It provides a global publish port for a given service. The routing mesh uses port-based service discovery and load balancing. So to reach any service from outside the cluster, you need to expose ports and reach them via the published port.
In simple words, if you had 3 swarm nodes, A, B and C, and a service which is running on nodes A and C and assigned node port 30000, this would be accessible via any of the 3 swarm nodes on port 30000 regardless of whether the service is running on that machine, and automatically load balanced between the 2 running containers.
# Is it possible to run MacVLAN under Docker Swarm Mode? What features does it offer?
Starting with the Docker CE 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan, though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-31.png)
MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Its use cases include very low latency applications and network designs that require containers to be on the same subnet as, and use IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.
# What are Docker secrets and why are they necessary?
In Docker there are three key components to container security and together they result in inherently safer apps.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-32.png)
Docker Secrets is a container-native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.
The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.
![img](https://raw.githubusercontent.com/collabnix/dockerlabs/master/docker/img/docker-interview-33.png)
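At the command line, creating and consuming a secret looks like this (a sketch; the secret name is arbitrary, and POSTGRES_PASSWORD_FILE is a convention of the official postgres image):
```
# create a secret from stdin (requires swarm mode)
echo "s3cret-password" | docker secret create db_password -
# grant a service access; the value is mounted at /run/secrets/db_password
docker service create --name db --secret db_password \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/db_password postgres
```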
# Serverless Interview Questions
## What is Serverless and why is it important?
Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. You can build them for nearly any type of application or backend service, and everything required to run and scale your application with high availability is handled for you.
## Why use serverless?
Serverless enables you to build modern applications with increased agility and lower total cost of ownership. Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products which scale and that are reliable.
## What are the benefits of serverless?
- NO SERVER MANAGEMENT
There is no need to provision or maintain any servers. There is no software or runtime to install, maintain, or administer
- FLEXIBLE SCALING
Your application can be scaled automatically or by adjusting its capacity through toggling the units of consumption (e.g. throughput, memory) rather than units of individual servers.
- PAY FOR VALUE
Pay for consistent throughput or execution duration rather than by server unit.
- AUTOMATED HIGH AVAILABILITY
Serverless provides built-in availability and fault tolerance. You don't need to architect for these capabilities since the services running the application provide them by default.
## Tell something about the AWS Serverless Platform?
AWS provides a set of fully managed services that you can use to build and run serverless applications. Serverless applications don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you. This allows you to focus on product innovation while enjoying faster time-to-market.
## COMPUTE
### AWS Lambda
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.
### AWS Fargate
AWS Fargate is a purpose-built serverless compute engine for containers. Fargate scales and manages the infrastructure required to run your containers.
## STORAGE
### Amazon S3
Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
### Amazon Elastic File System (Amazon EFS)
It provides simple, scalable, elastic file storage. It is built to elastically scale on demand, growing and shrinking automatically as you add and remove files.
## DATA STORES
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
## API PROXY
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It offers a comprehensive platform for API management. API Gateway allows you to process hundreds of thousands of concurrent API calls and handles traffic management, authorization and access control, monitoring, and API version management.
## APPLICATION INTEGRATION
Amazon SNS is a fully managed pub/sub messaging service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.
## ORCHESTRATION
AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application.
## ANALYTICS
Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.
## DEVELOPER TOOLING
AWS provides tools and services that aid developers in the serverless application development process. AWS and its partner ecosystem offer tools for continuous integration and delivery, testing, deployments, monitoring and diagnostics, SDKs, frameworks, and integrated development environment (IDE) plugins.
# DCA Mock questions
## 1. How can we limit the number of CPUs provided to a container?
a) Using `--cap-add CPU`.
b) Using `--cpuset-cpus`.
c) Using `--cpus`.
d) It is not possible to specify the number of CPUs; we have to use `--cpu-shares` and define the CPU slices.
## 2. How can we limit the amount of memory available to a container?
a) It is not possible to limit the amount of memory available to a container.
b) Using `--cap-drop MEM`.
c) Using `--memory`.
d) Using `--memory-reservation`.
## 3. What environment variables should be exported to start using a trusted environment with the Docker client?
a) `export DOCKER_TRUSTED_ENVIRONMENT=1`
b) `export DOCKER_CONTENT_TRUST=1`
c) `export DOCKER_TRUST=1`
d) `export DOCKER_TRUSTED=1`
-------------------------------------
VVVVIMP
Question 1 — What is Docker?
Docker is an open-source containerization platform. It enables developers to package applications into containers.
Docker is a tool that is used to automate the deployment of applications in lightweight containers so that applications can work efficiently in different environments. Docker can package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer.
Question 2 — Why and when do we use Docker?
Here are some reasons why and when to use Docker:
1. Isolation: Docker containers provide isolation for applications and their dependencies, making it easier to deploy and run the same application consistently across different environments.
2. Portability: Docker containers can run on any host with the Docker engine installed, making it easy to move an application between development, staging, and production environments.
3. Scalability: Docker allows you to easily scale up or down the number of containers running an application, making it easy to handle changes in load or traffic.
4. Versioning: Docker allows you to version control your application, as well as the dependencies and configuration.
5. Efficient resource usage: Docker containers use fewer resources than traditional virtual machines, making them more efficient to run on the same host.
6. Microservices: Docker makes it easier to implement and manage a microservices architecture, allowing you to break down a monolithic application into smaller, loosely-coupled services.
Question 3 — How are containers different from Virtual Machines?
Containers and virtual machines are both technologies used to isolate applications and their dependencies, but they have some key differences:
1. Resource Utilization: Containers share the host operating system kernel, making them lighter and faster than VMs. VMs have a full-fledged OS and hypervisor, making them more resource-intensive.
2. Portability: Containers are designed to be portable and can run on any system with a compatible host operating system. VMs are less portable as they need a compatible hypervisor to run.
3. Security: VMs provide a higher level of security as each VM has its own operating system and can be isolated from the host and other VMs. Containers provide less isolation, as they share the host operating system.
4. Management: Managing containers is typically easier than managing VMs, as containers are designed to be lightweight and fast-moving.
Question 4— What is Docker Life Cycle?
There are three important things,
docker build -> builds a docker image from a Dockerfile
docker run -> runs a container from a docker image
docker push -> pushes the image to a public/private registry to share the docker images
(Figure: Docker life cycle)
Question 5—What is the difference between an Image, Container, and Engine?
An image in Docker is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a running instance of an image. It is a lightweight, standalone, and executable software package that includes everything needed to run the software in an isolated environment.
A Docker engine is a background service that manages and runs Docker containers. It is responsible for creating, starting, stopping, and deleting containers, as well as managing their networking and storage. The Docker engine is the underlying technology that runs and manages the containers.
Question 6— What is the difference between the Docker command COPY vs ADD?
The COPY command is used to copy local files from the host machine to the Docker image. It only copies the files, and does not support any other functions such as decompressing files or fetching files from a remote location.
The ADD command, on the other hand, supports additional functionality beyond just copying files. In addition to copying local files, ADD also supports fetching files from a remote URL, and automatically decompressing files that are archived in a supported format (such as tar or gzip).
Question 7— What is the difference between the Docker command CMD vs RUN?
The CMD and RUN commands in Docker are used to specify commands that should be executed when a container is started from a given image.
The CMD command, is used to specify the default command that should be executed when a container is started from an image. This command can be overridden when starting a container, which means that it does not have to be executed every time a container is started.
The RUN command, on the other hand, is used to execute a command during the image-building process. It will run command(s) in a new layer on top of the current image and commit the results. The command(s) in a RUN instruction will always be executed when the image is being built.
Question 8— How Will you reduce the size of the Docker image?
Using the official node alpine image as a base image is a simple solution to reduce the overall size of the image, because even the base alpine image is a lot smaller compared to the base ubuntu image.
Question 9— Why and when to use Docker?
Docker is a containerization platform that allows you to package, deploy, and run applications in a container. It provides a way to isolate an application and its dependencies from the underlying host system, making it easier to deploy and run the same application consistently across different environments.
The reasons why and when to use Docker are the same as listed under Question 2: isolation, portability, scalability, versioning, efficient resource usage, and support for microservices.
Question 10— Explain the Docker components and how they interact with each other.
Docker has several components that work together to provide a platform for packaging, deploying, and running applications in containers. These components include:
1. Docker Engine: The Docker Engine is the underlying technology that runs and manages the containers. It is responsible for creating, starting, stopping, and deleting containers, as well as managing their networking and storage.
2. Docker Daemon: The Docker Daemon is the background service that communicates with the Docker Engine. It receives commands from the Docker CLI and performs the corresponding actions on the Docker Engine.
3. Docker CLI: The Docker Command Line Interface (CLI) is a command-line tool that allows users to interact with the Docker Daemon to create, start, stop, and delete containers, as well as manage images, networks, and volumes.
4. Docker Registries: A Docker Registry is a place where images are stored and can be accessed by the Docker Daemon. Docker Hub is the default public registry, but you can also use private registries like those provided by Google or AWS.
5. Docker Images: A Docker Image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
6. Docker Containers: A Docker Container is a running instance of an image. It is a lightweight, standalone, and executable software package that includes everything needed to run the software in an isolated environment.
Question 11 — Explain the terminology: Docker Compose, Docker File, Docker Image, Docker Container?
1. Docker Compose: Docker Compose is a tool for defining and running multi-container applications. It allows you to define the services that make up your application in a single docker-compose.yml file, and then start, stop, and manage those services with a single command.
2. Dockerfile: A Dockerfile is a script that contains instructions for building a Docker image. It specifies the base image to use, any additional software to install, and any configuration files or environment variables that need to be set.
3. Docker Image: A Docker image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
4. Docker Container: A Docker container is a running instance of an image. It is a lightweight, standalone, and executable software package that includes everything needed to run the software in an isolated environment. Each container runs in its own namespace and has its own set of processes and network interfaces.
Question 12 — In what real scenarios have you used Docker?
Examples of scenarios where Docker can be used:
1. Developing and testing applications: Docker can be used to set up a consistent development environment for an application, including all of the dependencies and runtime environments. This makes it easy to test an application on different operating systems or configurations.
2. Deploying applications in production: Docker allows you to package an application and its dependencies in a container, making it easy to deploy the same application consistently across different environments. This can be especially useful when deploying to cloud environments, where resources are shared among multiple tenants.
3. Building and deploying microservices: Docker can be used to implement a microservices architecture, allowing you to break down a monolithic application into smaller, loosely-coupled services. This can make it easier to scale and update individual components of an application.
4. Continuous integration and delivery: Docker can be used as part of a continuous integration and delivery (CI/CD) pipeline, allowing developers to automatically build and test their code in a containerized environment.
5. Automated testing: Docker can be used to set up automated testing environments by spinning up multiple containers for different versions of dependencies, database and other services.
6. Serverless architecture: Docker can be used to package a function and its dependencies as a container image that a serverless platform can run.
Question 13 — What is a Docker namespace?
A Docker namespace is a feature of the Linux kernel that allows for the isolation of resources, such as network, process, and file system, among different groups of users and processes. Docker uses namespaces to provide isolation between containers running on the same host.
When a container is created, the Docker daemon creates a new namespace for the container’s processes. This namespace is used to isolate the container’s resources, such as network interfaces, file systems, and process IDs, from the host and other containers.
Question 14— What is a Docker registry?
A Docker registry is a service that stores and distributes Docker images, and it allows users to upload, download, and manage their own images. It can be either public or private, and it can support different authentication and authorization mechanisms to control access to the images.
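For example, a typical push/pull round trip against Docker Hub looks like the following; myuser and myapp are placeholder names:

$ docker tag myapp:1.0 myuser/myapp:1.0   # retag the local image with a repository path
$ docker login                            # authenticate against the registry (Docker Hub by default)
$ docker push myuser/myapp:1.0            # upload the image
$ docker pull myuser/myapp:1.0            # download it on another machine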
Question 15 — What is an entry point?
An entry point in Docker is a command or script that is executed when a container is started from an image. It is defined in the image's Dockerfile using the ENTRYPOINT instruction, and it specifies the default command that should run when a container is created from the image. It can be overridden when starting a container using the --entrypoint option of docker run.
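Here is a minimal sketch of how ENTRYPOINT and CMD interact; the image name entrypoint-demo is illustrative:

# Dockerfile
FROM alpine:3.19
# the fixed executable run for every container started from this image
ENTRYPOINT ["echo"]
# default arguments; can be overridden on the docker run command line
CMD ["hello from the image"]

$ docker build -t entrypoint-demo .
$ docker run entrypoint-demo                        # prints: hello from the image
$ docker run entrypoint-demo "another message"      # CMD overridden: prints: another message
$ docker run --entrypoint date entrypoint-demo +%Y  # ENTRYPOINT overridden; +%Y replaces CMD and prints the year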
Question 16 — How to implement CI/CD in Docker?
CI/CD (Continuous Integration/Continuous Deployment) in Docker can be implemented by following these steps (a minimal script sketch follows the list):
1. Build: Use a continuous integration tool, such as Jenkins, Travis CI or CircleCI, to automatically build a Docker image from the source code whenever there are changes to the codebase. This can be done by using a Dockerfile to define the image and using the CI tool to run the docker build command.
2. Test: Run automated tests on the built image to ensure that the application is functioning correctly. This can be done by starting a container from the image and running the tests inside the container.
3. Push: Push the built image to a Docker registry, such as Docker Hub, so that it can be easily accessed by the deployment environment.
4. Deploy: Use a continuous deployment tool, such as Kubernetes or Docker Swarm, to deploy the image to the production environment. This can be done by pulling the image from the registry and starting a container from it.
5. Monitor: Monitor the deployed containers to ensure that they are running correctly and that there are no issues with the application. This can be done with tools such as Prometheus or the ELK stack (Elasticsearch, Logstash, and Kibana).
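The build, test, and push steps above often reduce to a few shell commands in the CI job. This is only a sketch: the registry path registry.example.com/myapp and the test script run-tests.sh are hypothetical and assumed to exist in the image:

$ docker build -t registry.example.com/myapp:1.0 .                # 1. build the image from the Dockerfile
$ docker run --rm registry.example.com/myapp:1.0 ./run-tests.sh   # 2. run the test suite in a throwaway container
$ docker push registry.example.com/myapp:1.0                      # 3. publish the image for the deploy stage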
Question 17 — What is a Docker swarm?
Docker Swarm is a native clustering and orchestration solution for Docker. It allows you to create and manage a cluster of Docker nodes, and deploy and scale applications on the cluster.
A swarm is a group of Docker engines that are running in swarm mode and joined together. Once the engines are joined in a swarm, they can be used to deploy services. A service is a group of containers that are running a specific image and are deployed on the swarm.
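A minimal sequence to stand up a swarm and deploy a service might look like this; the service name web and the replica counts are illustrative:

$ docker swarm init                                              # make this engine a swarm manager
$ docker service create --name web --replicas 3 -p 80:80 nginx   # deploy three nginx tasks across the swarm
$ docker service ls                                              # check services and replica status
$ docker service scale web=5                                     # scale the service to five replicas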
Question 18 — What are the docker commands for the following:
view running containers
$ docker ps
command to run the container under a specific name
$ docker run --name [name] [image]
command to export a container's filesystem to a tar archive
$ docker export [OPTIONS] CONTAINER
command to import an already existing docker image
$ docker import [file_name.tar] [image_name:tag]
commands to delete a container
$ docker rm [container_name]
command to remove all stopped containers, unused networks, build caches, and dangling images
$ docker system prune
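For instance, an export/import round trip, with illustrative names, looks like this:

$ docker export my_container > my_container.tar     # dump the container's filesystem to a tar archive
$ docker import my_container.tar myimage:restored   # create a new image from that archive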
Question 19 — What is the difference between Docker and a Hypervisor?
Docker and hypervisors are both technologies used to create and manage virtual environments, but they work in different ways and are suited for different use cases.
A hypervisor is software that allows multiple virtual machines (VMs) to share a single physical host. Each VM runs its own operating system and has its own virtualized resources, such as CPU, memory, and storage.
Docker, on the other hand, is a containerization platform that allows you to package, deploy, and run applications in containers. Containers share the host operating system's kernel instead of each booting a full OS, which makes them lighter-weight and faster to start than VMs.
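A simple way to see this difference, assuming a Linux host: containers report the host's kernel version because they do not boot a kernel of their own:

$ uname -r                          # kernel version on the host
$ docker run --rm alpine uname -r   # prints the same version from inside a container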
20 more interview questions to explore
Explain Docker architecture.
What’s the difference between CMD and ENTRYPOINT?
What is the purpose of the volume parameter in a Docker run command?
Is it a good practice to run stateful applications on Docker?
What are Docker Namespaces?
Explain the implementation method of continuous integration and continuous deployment in Docker.
What is the process for stopping and restarting a Docker container?
How do you give your Docker image an image name?
What does the docker service command do?
Can you lose data when the container exits?
How do Jenkins and Docker work together?
How far do Docker containers scale?
Describe the differences between daemon logging and container logging.
Explain the purposes of up, run, and start commands of Docker compose.
Where are Docker volumes stored?
Explain the difference between Docker image and layer.
Can a paused container be removed from Docker?
How do you use the docker save and docker load commands?
What is the default Docker network driver? How can you change it when running a Docker image?
What does the docker system prune command do?