Container - Coggle Diagram
Container
what is a container
process
process directory
namespace
definition
inode locations on disk
allows processes to share/reuse the same namespace
allowing them to view and interact with each other
types
user id
net
pid
IPC
mnt (mount)
UTS
unshare tool
a process can execute in its own namespaces
share namespace
allows processes to share/reuse the same namespace
letting them view all & interact with each other
Another tool: nsenter
attach process -> existing Namespaces
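The namespaces a process belongs to can be inspected under /proc; a minimal sketch (the unshare/nsenter invocations are illustrative and require root):

```shell
# Each entry under /proc/<pid>/ns is an inode identifying a namespace;
# two processes showing the same inode share that namespace.
ls -l /proc/$$/ns

# Illustrative only (require root):
# sudo unshare --uts --pid --fork sh    # run a shell in new UTS and PID namespaces
# sudo nsenter --target <pid> --uts sh  # attach a shell to an existing UTS namespace
```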
cgroup
limit the amount of resources a process can consume
defined in /proc
view a process's mappings: cat /proc/$pid/cgroup
the mapped files live under: /sys/fs/cgroup
configure cgroups
control memory limit
default: container -> no limit on memory:
docker stats db --no-stream
memory.limit_in_bytes
stores the memory configuration
editing it => sets the memory limit
path:
/sys/fs/cgroup/memory/docker/$PID/memory.limit_in_bytes
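A sketch of setting that limit by hand, assuming cgroup v1 and a container id in $PID as above (writing the file requires root; 256 MB is just an example figure):

```shell
# 256 MB expressed in bytes, the unit memory.limit_in_bytes expects
LIMIT=$((256 * 1024 * 1024))
echo "$LIMIT"   # 268435456

# requires root and an existing container cgroup:
# sudo sh -c "echo $LIMIT > /sys/fs/cgroup/memory/docker/$PID/memory.limit_in_bytes"
```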
chroot
gives the process a set of files independent of the host
enables many docker images across many different hosts
CPU stat for process
stored in a single file:
/sys/fs/cgroup/cpu,cpuacct/docker/$DBID/cpuacct.stat
Docker cgroups for the container memory configuration are stored at:
/sys/fs/cgroup/memory/docker
directories -> grouped by
the id assigned by docker
Seccomp/AppArmor
AppArmor
application
defines a profile
which parts of the system a process can access
path:
/proc/$PID/attr/current
default is:
docker-default (enforce)
before Docker 1.13
/etc/apparmor.d/docker-default
overwritten when Docker starts
user -> couldn't modify it
after Docker 1.13
tmpfs
apparmor_parser -> loads the default profile into the kernel -> deletes the config in tmpfs
assign to process
process
limited to a subset of the available system calls
calling a blocked system call
receives
"Operation Not Allowed"
SecComp
is defined in file
/proc/$DBPID/status
cat /proc/$DBPID/status | grep Seccomp
flag
0: disabled
1: strict
2: filtering
Capabilities
are groupings of
what a process/user has permission to do
cover
multiple system calls or actions
status file
a container's Capabilities flags
cat /proc/$DBPID/status | grep ^Cap
flag stored as a bitmask
decoded:
capsh
capsh --decode=00000000a80425fb
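If capsh is not available, the bitmask can be decoded by hand; a sketch in plain shell (bit numbers correspond to capability indices in linux/capability.h, e.g. bit 0 = CAP_CHOWN):

```shell
MASK=0x00000000a80425fb   # example CapEff value from above
COUNT=0
i=0
while [ "$i" -le 63 ]; do
  # test bit i of the mask
  if [ $(( (MASK >> i) & 1 )) -eq 1 ]; then
    printf 'capability bit %d is set\n' "$i"
    COUNT=$((COUNT + 1))
  fi
  i=$((i + 1))
done
echo "capabilities set: $COUNT"   # 14 for this mask
```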
Container Images
a tar file containing tar files
each tar file -> layer
all tar files -> extract into same location -> container's filesystem
images
include metadata
create empty image
tar cv --files-from /dev/null | docker import - empty
without dockerfile
BusyBox
foundational Linux commands
rootfs
Foundation
Deploying first docker container
What is docker?
Run docker
access docker
get port
docker ps
docker port container_id port_number
persisting data
Deploy Static HTML Website as Container
create Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
line 1: defines the base image
line 2: copies the content of the current dir to a particular location in the container
dockerfile
A Dockerfile defines all the steps required to create a Docker image with your application configured and ready to be run as a container.
allows for images to be composable, enabling users to extend existing images instead of building from scratch. By building on an existing image, you only need to define the steps to setup your application. The base images can be basic operating system installations or configured systems which simply need some additional customisations.
Build Docker image
docker build
docker build -t <name>:<tag> <build directory>
-t: defines the image name and a tag
run
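The steps above can be sketched end to end; the image name webserver-image:v1 is an arbitrary example, and the docker commands assume a running Docker Engine:

```shell
# Write the two-line Dockerfile described above
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY . /usr/share/nginx/html
EOF

# Build and run (assumes Docker is installed and running):
# docker build -t webserver-image:v1 .
# docker run -d -p 80:80 webserver-image:v1
```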
Building Container Images
Base images
same images from the Docker Registry which are used to start containers
define a base image we use the instruction
FROM <image-name>:<tag>
Exposing Ports
EXPOSE <port>
tell Docker which ports should be open and can be bound to
multiple ports can be defined in a single command
Building the Container
docker build
takes in a directory containing the Dockerfile
executes the steps and stores the image in your local Docker Engine
If one fails because of an error then the build stops.
docker build -t my-nginx-image:latest .
Running commands
we need to run various commands to configure our image
the two main commands are COPY and RUN
RUN <command>
COPY <src> <dest>
copy files from the directory containing the Dockerfile to the container's image
Launching New Image
Default Commands
The CMD line in a Dockerfile defines the default command to run when a container is launched
Ex:
CMD ["cmd", "-a", "arga value", "-b", "argb-value"]
when run, the array is combined into:
cmd -a "arga value" -b "argb-value"
can be overridden when the container starts
ENTRYPOINT
defines a command which can have arguments passed to it when the container launches.
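A sketch contrasting the two, assuming a hypothetical cmd binary; with ENTRYPOINT, arguments given to docker run are appended rather than replacing the whole command:

```dockerfile
# CMD alone: the entire command can be overridden at `docker run`
CMD ["cmd", "-a", "arga value", "-b", "argb-value"]

# ENTRYPOINT + CMD: ENTRYPOINT is fixed, CMD supplies default
# arguments that `docker run <image> <args>` can replace
ENTRYPOINT ["cmd"]
CMD ["-a", "arga value", "-b", "argb-value"]
```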
Dockerizing Node.js applications
Configuring Application
After installing our dependencies,
copy over the rest of our application's source code
If we copied our code before running npm install
it would run every time as our code would have changed
copying just package.json
can be sure that the cache is invalidated only when our package contents have changed.
Building & Launching Container
NPM Install
step: install the dependencies required to run the application - npm
To keep build times to a minimum
Docker caches the results of executing a line in the Dockerfile for use in a future build
something has changed
Docker will invalidate the current and all following lines
ensure everything is up-to-date
don't want to use the cache as part of the build
set the option
--no-cache=true
part of the docker build command
Environment Variables
Docker images
should be designed
can be transferred from one environment to another without changes or needing to be rebuilt
can be defined when you launch the container
-e
option
Base image
define a working directory
WORKDIR <directory>
to ensure that all future commands are executed from the directory relative to our application.
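Putting the section together, a minimal sketch of such a Dockerfile; the port and start script are assumptions about the application:

```dockerfile
FROM node:alpine
WORKDIR /usr/src/app

# Copy only package.json first so the npm install layer stays
# cached until the package contents change
COPY package.json .
RUN npm install

# Now copy the rest of the source; changes here don't invalidate
# the install layer above
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
```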
Optimise Builds With Docker OnBuild
Step 2 - Application Dockerfile
The advantage of creating OnBuild images
our Dockerfile
much simpler
easily re-used across multiple projects without re-running the same steps, improving build times.
Step 3 - Building & Launching Container
Step 1 - OnBuild
Dockerfiles are executed in order from top to bottom
you can trigger an instruction to be executed at a later time when the image is used as the base for another image.
can delay your execution
dependent on the application which you're building
we can build this image
application specific commands won't be executed
until the built image is used as a base image
They'll then be executed as part of the base image's build
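A sketch of the two Dockerfiles, assuming the base image is tagged node-onbuild (a hypothetical name):

```dockerfile
# Base image Dockerfile (built once as, e.g., node-onbuild);
# the ONBUILD steps are recorded, not executed here:
FROM node:alpine
WORKDIR /usr/src/app
ONBUILD COPY package.json .
ONBUILD RUN npm install
ONBUILD COPY . .

# Application Dockerfile: the ONBUILD steps fire as part of
# this build, before any of its own instructions:
# FROM node-onbuild
# EXPOSE 3000
# CMD ["npm", "start"]
```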
Ignoring Files During Build
Step 2 - Docker Build Context
.dockerignore
ensure that sensitive details are not included in a Docker Image
also be used to improve the build time of images.
Step 3 - Optimised Build
use
.dockerignore
to exclude files which we don't want sent to the Docker build context
Step 1 - Docker Ignore
add a file named
.dockerignore
to prevent
sensitive files or directories
from being included in the image
the file would be stored in source control
and shared with the team to ensure that everyone is consistent.
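A sketch of such a .dockerignore; the entries are typical examples, not from the original:

```
.git
node_modules
*.key
passwords.txt
```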
Create Data Containers
Step 2 - Copy Files
copy files into a container
docker cp
ex:
docker cp config.conf dataContainer:/config/
Step 3 - Mount Volumes From
--volumes-from <container> option
mounts the volumes from other containers inside the container being launched
ex:
docker run --volumes-from dataContainer ubuntu ls /config
If a /config directory already existed
the volumes-from mount would override it and be the directory used
can map multiple volumes to a container
Step 1 - Create Container
Data Containers
containers whose sole responsibility is to be a place to store/manage data
To create a Data Container
first create a container with a well-known name for future reference
use busybox as the base
provide a
-v option
to define where other containers will be reading/saving data.
Step 4 - Export / Import Containers
wanted to move the Data Container to another machine
we can export it to a .tar file
docker export dataContainer > dataContainer.tar
import data
docker import dataContainer.tar
Creating Networks Between Containers using Links
Step 2 - Create Link
To connect to a source container
--link <container-name|id>:<alias>
when launching a new container
setting an alias
separates how our application is configured from how the infrastructure is named
How links work
When a link is created, Docker will do two things
set some environment variables based on the linked container
output all the environment variables
docker run --link redis-server:redis alpine env
update the HOSTS file of the container
with an entry for our source container
with three names, the original, the alias and the hash-id
output the container's hosts entries
cat /etc/hosts
Step 3 - Connect To App
With a link created
applications can connect and communicate with the source container
Step 1 - Start Redis
Step 4 - Connect to Redis CLI
Launching Redis CLI
docker run -it --link redis-server:redis redis redis-cli -h redis
Creating Networks Between Containers using Networks
Step 2 - Network Communication
Explore
Step 1 - Create Network
Create network
docker network create backend-network
Connect To Network
docker run -d --name=redis --net=backend-network redis