KUBERNETES
KUBECTL
config lies in $HOME/.kube/config. Consists of:
- target cluster name
- credentials (gcloud container clusters get-credentials [cluster] --zone [zone])
View it: kubectl config view
- kubectl version
- kubectl cluster-info
- kubectl config current-context (current context)
- kubectl config get-contexts
- kubectl config use-context
- kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
- kubectl get
- kubectl get nodes
- kubectl get pods
- kubectl get service
- kubectl get service my-service --watch
- kubectl get deployment
- kubectl get rs (ReplicaSet)
- kubectl describe (kubectl describe pods | nodes | deployments)
- kubectl logs [POD]
- kubectl logs [POD] -c [container]
- kubectl exec
- kubectl exec $POD_NAME -- env (print the container's environment variables)
- kubectl exec -it $POD_NAME -- bash (launch BASH) - !!!interactive
- kubectl proxy
- kubectl label
- kubectl scale deployment/kubernetes-bootcamp --replicas=3
- kubectl autoscale deployment/kubernetes-bootcamp --min=3 --max=5 --cpu-percent=75
- kubectl rollout status deployments/kubernetes-bootcamp (confirm update status)
- kubectl rollout undo deployments/kubernetes-bootcamp
- kubectl top nodes (utilization)
- source <(kubectl completion bash)
Update deployment
- kubectl set image (update image)
- kubectl edit deployment [deployment_name]
MASTER
CONTROL PLANE
NODE
Always runs:
- KUBELET (node agent managed by the control plane - the Master - via the API). Manages Pods and the containers in them.
- KubeProxy (maintains network connectivity among PODs in a cluster)
- containerd - a container runtime (like Docker), responsible for:
- pulling the container image from a registry
- unpacking the container
- running the application
MASTER Components
- kube APIServer
- scheduler (schedules PODs on NODEs)
- etcd (config database, stores cluster config and state)
- kube ControllerManager (monitors the state of the cluster and drives it toward the desired state)
- kube CloudManager (provisions cloud provider features like load balancers)
POD
A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers:
- Shared storage, as Volumes
- Networking, as a unique cluster IP address
- Information about how to run each container, such as the container image version or specific ports to use
- the number of Pods is specified by the replica count
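A minimal Pod manifest as a sketch of the fields above (name, image and port are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo             # placeholder name
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.25          # container image version
    ports:
    - containerPort: 80        # specific port to use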
Phases:
- Pending - accepted/scheduled, but containers not created yet
- Running - the Pod is running, but some containers might still be starting
- Succeeded - all containers in the Pod have terminated successfully
- Failed - at least one container terminated with an error and will not be restarted
- Unknown - state unknown, probably no communication with the node
- CrashLoopBackOff (a container status, not a phase) - the Pod isn't configured correctly and a container keeps crashing and restarting
Deployment Controller: after a YAML file is submitted to the control plane, a deployment controller converts the desired state to reality and keeps it in that desired state (a continuous loop)
Rolling Updates
- maxUnavailable (maximum number of Pods unavailable in total across the old RS + new RS)
- maxSurge (maximum number of extra Pods created in the new RS above the desired count)
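A sketch of where these fields live in a Deployment spec (values are illustrative):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # max Pods unavailable in total (old RS + new RS)
      maxSurge: 1              # max extra Pods created in the new RS above the desired count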
Controlling POD placement:
- (in POD manifest) nodeSelector:
  disktype: ssd
- in NODE manifest:
  - labels:
    disktype: ssd
Place PODs on desired Nodes/NodePool
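A sketch of the matching pair (the label key/value disktype: ssd is only an example):
# Pod (or Pod template) spec:
spec:
  nodeSelector:
    disktype: ssd
# Node labels (normally applied with: kubectl label nodes [node] disktype=ssd):
metadata:
  labels:
    disktype: ssd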
Controller Object Types
ReplicaSet
- ensures that a population of Pods, all identical to one another, are running at the same time.
- drives the cluster back to the desired state via creation of new Pods
Deployment
Deployment instructs Kubernetes how to create and update instances of your application. Deployment creates Pods with containers inside them
- a set of PODs
- continuously monitored by the kube deployment controller to reconcile current state vs desired state
- creates ReplicaSets
- lets you create, update, roll back, and scale Pods, using ReplicaSets
YAML file
- kind: the object type, e.g. Deployment or Service
- metadata: specifies the name (and labels)
- spec: defines the desired state (e.g. replicas)
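A minimal Deployment manifest illustrating these fields, reusing the bootcamp image from the kubectl examples above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp            # metadata: name
spec:
  replicas: 3                          # desired state
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1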
Rollback
- kubectl rollout undo deployment
- kubectl rollout undo deployment --to-revision=2
- kubectl rollout history deployment --revision=2
- kubectl rollout pause | resume | status
StatefulSet
- to deploy apps that maintain local state
- have unique persistent identities with stable network identity and
persistent disk storage
- almost the same as Deployments, but with stable identity and state
it's like Deployment, except that the Pods are given unique identifiers:
- ordinal index (unique number, sequential names)
- stable hostname
- stably identified storage
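A sketch of a StatefulSet showing the stable identity pieces (names, image and sizes are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                            # Pods become web-0, web-1, ... (ordinal index)
spec:
  serviceName: web                     # headless Service that provides stable hostnames
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:                # one PVC per Pod = stably identified storage
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi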
DaemonSet
- to run certain Pods on all the nodes within the cluster or on a selection of nodes
- A Kubernetes cluster might use a DaemonSet to ensure that a logging agent like fluentd is running on all nodes in the cluster
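A sketch of such a logging DaemonSet (the fluentd image tag is assumed, for illustration only):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16    # assumed tag; one Pod per node, no replicas field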
Job
- Job controller creates one or more Pods required to run a task.
- When the task is completed, the Job controller terminates all those Pods
- If a Pod or node fails, the Job controller re-creates/reschedules the Pod (possibly on another node) until the task completes
CronJobs
- scheduled Jobs using the standard Unix cron format
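A sketch of a CronJob; the embedded jobTemplate is the same spec a standalone Job would use (name, image and schedule are illustrative):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"              # standard Unix cron syntax: every 5 minutes
  jobTemplate:                         # the Job created on each run
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "echo Hello from the cluster"]
          restartPolicy: OnFailure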
CONTAINERS
Cloud Build
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/quickstart-image .
or:
gcloud builds submit --config cloudbuild.yaml .
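A minimal cloudbuild.yaml sketch for the second form (the image name is assumed to match the --tag example above):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/quickstart-image', '.']
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'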
Resource quotas (CPU/MEM)
- Requests - minimum reserved for the container (if not available on a node, the Pod is scheduled on a different node)
- Limits - max resources container can use
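How these look on a container (values are illustrative):
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:                        # minimum reserved; the scheduler only places the Pod where this fits
        cpu: 250m
        memory: 64Mi
      limits:                          # maximum the container may use
        cpu: 500m
        memory: 128Mi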
Namespace
- object names need to be unique within a namespace
- let you implement resource quotas across your cluster (see the ResourceQuota sketch below)
Default namespaces in system
- default
- kube-system (ConfigMaps, controllers, Secrets, Deployments created by the Kubernetes system)
- kube-public
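A sketch of a per-namespace ResourceQuota (namespace name and values are hypothetical):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota                     # hypothetical name
  namespace: demo                      # hypothetical namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    pods: "10"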
MIGRATE FOR ANTHOS
- move and convert workloads into containers
- VMs into containers
Steps:
- 1) Create processing cluster
- 2) Install Migrate for Anthos components onto cluster
- 3) Add migration source (VMware, AWS, Azure, GCP)
- 4) Generate a migration object with a plan in YAML
- 5) Generate artifacts (application images and YAML files for the deployment)
- 6) Test images and deployment
- 7) Deploy to production
Installation
-
-
-
4 - Migration plan
migctl migration create test-migration --source my-ce-src --vm-id my-id --intent Image
5 - Generate artifacts
migctl migration generate-artifacts my-migration
- Dockerfile and YAML files => stored in Cloud Storage
- image => in Container Registry
migctl migration get-artifacts test-migration
NODES
Taints
Prevent the scheduler from running a Pod on the selected nodes:
kubectl taint node -l temp=true nodetype=preemptible:NoExecute
To allow application Pods to execute on these tainted nodes, you must add a tolerations key to the deployment configuration:
tolerations:
- key: "nodetype"
  operator: Equal
  value: "preemptible"
NETWORKING
IP Addressing
- SERVICES alias range /20 (~4k IPs)
- PODs alias range /14 (~250k IPs) (/24 per NODE)
KUBEDNS
Service
- KubeDNS watches KubeAPI for new services
- when discovered, A record and SRV records are created
Example: LAB service in DEMO namespace:
FQDN: lab.demo.svc.cluster.local
For named ports (HTTP + TCP) an SRV record is created:
FQDN: _http._tcp.lab.demo.svc.cluster.local, value: 80
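A sketch of the LAB Service in the DEMO namespace that would produce those records (the Pod selector is assumed):
apiVersion: v1
kind: Service
metadata:
  name: lab
  namespace: demo
spec:
  selector:
    app: lab                           # assumed Pod label
  ports:
  - name: http                         # named port -> _http._tcp.lab.demo.svc.cluster.local SRV record
    protocol: TCP
    port: 80                           # A record: lab.demo.svc.cluster.local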
PERSISTENT STORAGE
VOLUMES
- are attached to PODs (not containers)
- temporary
emptyDir
- allows containers within the Pod to read and write to it
- created when the Pod is assigned to a Node
- removed when the Pod is removed/deleted
- backed by the Node's local disk or a memory-backed filesystem
ConfigMaps
- application config data, parameters
Secret
- for sensitive information like passwords, tokens, keys, SSH files - encrypted
- stored in an in-memory filesystem (volatile)
- Generic
- TLS (public/private keys .PEM)
- docker-registry (credentials for a Docker registry, used by the Kubelet to pull images)
DownwardAPI
- publishes the Pod's environment data (name, namespace, labels) to its containers
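A combined sketch of the volume types above mounted into one Pod (names are hypothetical; the ConfigMap and Secret are assumed to exist):
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: app-config
      mountPath: /etc/app
    - name: app-secret
      mountPath: /etc/secret
    - name: pod-info
      mountPath: /etc/podinfo
  volumes:
  - name: scratch
    emptyDir: {}                       # node-local scratch space, deleted with the Pod
  - name: app-config
    configMap:
      name: app-config                 # assumed ConfigMap
  - name: app-secret
    secret:
      secretName: app-secret           # assumed generic Secret
  - name: pod-info
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels   # exposes the Pod's labels as a file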
PERSISTENT VOLUMES (PV) - storage for the K8s cluster
- independent of PODs lifecycle
- durable for cluster
- managed by K8s
- use PersistentDisks
kind: PersistentVolume
1) Persistent Disk create
gcloud compute disks create --size=100GB --zone=us-central1-a demo-disk
2) Use it in the manifest
gcePersistentDisk:
  pdName: demo-disk
  fsType: ext4
  partition: 0
PersistentVolumeClaim (PVC) - storage for PODs - defines:
- Volume size
- Storage Class
- Access type
kind: PersistentVolumeClaim
the claim must match the PV's storageClassName, accessModes, and capacity
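A sketch of a matching PV/PVC pair reusing the demo-disk from above (the storage class name is assumed):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: standard           # assumed class
  gcePersistentDisk:
    pdName: demo-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce                      # access type must match the PV
  storageClassName: standard           # storage class must match the PV
  resources:
    requests:
      storage: 100Gi                   # requested size must fit the PV capacity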