GCP: GKE and Container Registry
Kubernetes basics
Provides cluster management
each cluster can have different machine types
provides orchestration features
Load Balancer
Self healing
Service Discovery
Zero downtime deployment
Autoscaling
can scale on different parameters: CPU, memory, or metrics from other services, e.g. Pub/Sub undelivered messages (see the HPA sketch after this list)
Most popular Container Orchestrator
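To make the Pub/Sub example above concrete, here is a sketch of a HorizontalPodAutoscaler driven by an external metric; it assumes the Custom Metrics Stackdriver adapter is installed in the cluster, and the deployment and subscription names are illustrative:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-worker-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-worker            # hypothetical worker deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription   # assumed subscription
      target:
        type: AverageValue
        averageValue: "30"         # target undelivered messages per replica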
GKE basic points
Auto-upgrade - keeps the cluster on an up-to-date K8S version
provides pod and cluster autoscaling
Auto-repair - repairs failed nodes
enables Cloud Logging and Cloud Monitoring
Managed Kubernetes service on GCP
can attach Persistent Disks and local SSDs
we can configure deployments (workloads) via YAML
uses Container-Optimized OS
commands
gcloud container clusters ...
- GCP-specific command to manage clusters, nodes, connections, etc.
kubectl ...
- cloud-agnostic CLI for deployments
we can create an LB from inside the cluster - just create a Service with type "LoadBalancer"
Autopilot
Fully automated, Google is our SRE
uses the minimum of resources
provisioning is based on the workload definition (the pod spec)
best practices are applied automatically
security best practices
SLA: 99.9%
Efficient per-pod billing: per vCPU, scales up and down
In Standard GKE, worker node and node pool management is on the admin side
Glossary
Pod [SOFTWARE] - pods run as part of Deployments; a single instance; a few pods make up a Service
can contain one or more containers, but usually has one. If there are multiple containers, they share network, volumes, etc. (see the sketch below)
has an ephemeral IP
smallest deployable unit
kubectl get pods
- command to list pods
Pod statuses: Running, Pending, Succeeded (when a job is done), Failed, Unknown
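A minimal pod sketch illustrating the point above that containers in one pod share volumes (all names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data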
Service [SOFTWARE] - a set of pods with an entry point that provides load balancing. A Service is responsible for enabling network access to a set of pods. We could use a Deployment without a Service to keep a set of identical pods running in the Kubernetes cluster.
External users are not impacted by internal changes - thanks to Services
Node [Hardware]
master node - the control plane - kubectl commands are sent to this node
API server - entry point for everything, outside and inside
Scheduler - decides where to place pods
Controller manager - manages Deployments and ReplicaSets
etcd - distributed key-value store for K8S state
worker nodes - run the workloads, i.e. hold the pods; each runs a kubelet (manages communication with the master node)
node - the real instance where pods are located
node pool - group of nodes with the same config
Deployment [SOFTWARE] - A deployment is responsible for keeping a set of pods running.
a Deployment represents a microservice with all its releases
a Deployment manages release rollouts with zero downtime
Cluster [Hardware] - where we run the workloads; a group of compute instances
Service type LoadBalancer - also creates a Cloud Load Balancer, one per Service, so each Service gets its own LB. Exposes the Service via an LB (not recommended)
Service type NodePort - exposes the Service on a static port on each node; then expose it to the world via an Ingress (recommended), since a single LB can route to many microservices
Service type ClusterIP - internal service-to-service communication
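A sketch of a Service manifest - the three variants above differ only in the type field (names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: hello-world-rest-api
spec:
  type: LoadBalancer               # or NodePort, or ClusterIP (the default)
  selector:
    app: hello-world-rest-api      # pods matching this label receive the traffic
  ports:
  - port: 8080                     # port exposed by the Service
    targetPort: 8080               # port the pods listen on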
ReplicaSet - ensures that a specific number of pods is running for a specific microservice version; for V1 and V2 we will have different ReplicaSets
kubectl get replicasets
workloads - deployable units; one workload type is Deployment
Ingress - a set of rules to route traffic to Services via an LB; a collection of rules to expose Services to the world
It is the recommended approach to expose services
provides load balancing and SSL termination
recommended: use Service type NodePort and expose it via Ingress (see the sketch below)
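A minimal Ingress sketch routing a path to a NodePort Service (names and path are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-world-rest-api   # a NodePort Service
            port:
              number: 8080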
hierarchy: Deployment -> ReplicaSet -> Pods -> Containers
13 steps to play with Kubernetes
set up autoscaling at the deployment level - scales pods, the nodes stay the same
kubectl autoscale deployment hello-world-rest-api --max=4 --cpu-percent=70
also called horizontal pod autoscaling - changes the ReplicaSet's replica count
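The declarative equivalent of the kubectl autoscale command above, as a sketch:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world-rest-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world-rest-api
  minReplicas: 1                   # kubectl autoscale defaults min to 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70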
set up autoscaling at the cluster level - scales nodes, the pods stay the same
gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=X --max-nodes=Y
increase number of nodes
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes=2 --zone=us-central1-c
we don't really want to do this manually - autoscaling is the best option
add config for microservices - at the Deployment/Service level
kubectl create configmap hello-world-config --from-literal=RDS_DB_NAME=todos
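A sketch of how the Deployment's pod template could consume this ConfigMap as environment variables (fragment only; the image name is assumed):
    spec:
      containers:
      - name: hello-world-rest-api
        image: in28min/hello-world-rest-api:0.0.1.RELEASE   # assumed image
        envFrom:
        - configMapRef:
            name: hello-world-config   # injects RDS_DB_NAME=todos as an env var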
Increase the number of instances of my microservice
kubectl scale deployment hello-world-rest-api --replicas=3
Create a Secret - like a ConfigMap, but for sensitive data (stored more securely, encrypted at rest)
kubectl create secret generic hello-world-secrets-1 --from-literal=RDS_PASSWORD=dummytodos
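A sketch of referencing a single key of that Secret from the pod template (fragment only):
        env:
        - name: RDS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: hello-world-secrets-1
              key: RDS_PASSWORD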
Deploy the microservice to the cluster
kubectl expose deployment hello-world-rest-api --type=LoadBalancer --port=8080
- exposes the port and creates a Service with type LoadBalancer
also creates a Cloud Load Balancer in GCP
kubectl get services
- get services
kubectl get services --watch
- list services and watch for changes
Create Deployment and Service:
kubectl create deployment NAME --image=IMAGE_PATH
kubectl is available in Cloud Shell
Here we use the imperative style - creating everything with commands. There is also the declarative style - describe the Deployment, Service, and everything else in YAML and apply it; the objects show up under Workloads (see the sketch below)
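A sketch of the declarative style - Deployment and Service in one file separated by ---, then applied with kubectl apply -f (names and image are assumed):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-rest-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-rest-api
  template:
    metadata:
      labels:
        app: hello-world-rest-api
    spec:
      containers:
      - name: hello-world-rest-api
        image: in28min/hello-world-rest-api:0.0.1.RELEASE   # assumed image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-rest-api
spec:
  type: LoadBalancer
  selector:
    app: hello-world-rest-api
  ports:
  - port: 8080
    targetPort: 8080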
Connect to the Kubernetes cluster
gcloud container clusters get-credentials ....
- command used to connect to a Kubernetes cluster (fetches credentials for kubectl)
Deploy a microservice with a GPU attached
gcloud container node-pools create NAME --cluster=CLUSTER_NAME --accelerator type=GPU_TYPE,count=1
need to add a nodeSelector to the pod spec (see the fragment below)
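A pod template fragment targeting the GPU node pool via nodeSelector ("gpu-pool" is an assumed pool name; the node-pool label itself is the standard GKE one):
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: gpu-pool
      containers:
      - name: trainer                        # hypothetical container
        image: example.com/trainer:latest    # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1                # request one GPU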
Log in to Cloud Shell
Delete the Service, Deployment, and cluster (via gcloud)
kubectl delete service hello-world-rest-api
kubectl delete deployment hello-world-rest-api
gcloud container clusters delete my-cluster --zone us-central1-c
create a K8S cluster with a default node pool
need to enable the Kubernetes Engine API
when we create a cluster, we have two modes
Standard - I am responsible for management
Autopilot - the goal is to reduce operational costs
no need to be hands-on, the cluster infra is managed by GCP
"
gcloud container clusters create
" or via UI console. "container" - means K8S.
Once we use Standard - it created cluster with 3 nodes, total vCPU - 6, Total Mem - 12 Gb
We can create a zonal or regional cluster. Here described usage of standard mode
GKE cluster types
Zonal
Single-zone - single master, nodes in a single zone
Multi-zonal - single master, but nodes in different zones
Regional - several masters in different zones with a replication mechanism between them; the region is still the same; nodes run in the same zones as the masters
It provides HA in case a master fails
Private cluster - VPC-native private cluster; nodes have only internal IPs
Alpha cluster - used to test new K8S features
Container Registry
Integrated with Cloud Build
there are sets of rules: policy, security, and vulnerability checks
Container Registry on GCP - an alternative to the Docker Hub registry, but private by default
Naming: hostname/projectid/image:tag, e.g. gcr.io/my-project/my-image:v1
how to create images - best practices
create a Dockerfile
don't copy unnecessary files
put the steps that rarely change at the top (to reuse cached layers)
use a lightweight base image in FROM
Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on Google Kubernetes Engine (GKE) or Cloud Run. With Binary Authorization, you can require images to be signed by trusted authorities during the development process and then enforce signature validation when deploying.
Use cases and scenarios
need autoscaling and efficiency
horizontal pod autoscaling
cluster autoscaling for nodes
Execute untrusted third-party code
best option - create a separate node pool with GKE Sandbox for the untrusted code (see the sketch below)
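A sketch of running an untrusted workload on a GKE Sandbox (gVisor) node pool; assumes the pool was created with --sandbox type=gvisor, and all names are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-job
spec:
  runtimeClassName: gvisor                   # schedules onto sandbox-enabled nodes
  containers:
  - name: third-party
    image: example.com/untrusted:latest      # hypothetical image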
make it cheaper
E2 machines are cheaper than N1
Choose the right environment - e.g. set up different node pools
preemptible VMs, the right region
microservices with internal-only communication
create ClusterIP Services
pod issues
pod waiting - usually a failure to pull the image
pod pending - needs more resources; a new node is required