Kubernetes Patterns Summary
Managed Lifecycle
postStart
- exec/httpGet handler runs in parallel with the container process with at-least-once semantics; must complete successfully before the container is marked Running (a failing hook kills the container)
preStop
- exec/httpGet handler runs before SIGTERM is sent, to initiate graceful shutdown when SIGTERM alone isn't enough
Ensure the container CMD is a process that forwards signals such as SIGTERM to child processes, e.g. the Alpine Linux shell does not, bash does
terminationGracePeriodSeconds
defaults to 30s; the maximum delay between SIGTERM and SIGKILL
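The lifecycle hooks and grace period above can be sketched in a pod spec like this (pod name, image, command, and endpoint are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # max delay before SIGKILL (default 30)
  containers:
  - name: app
    image: example/app:1.0            # placeholder image
    lifecycle:
      postStart:
        exec:                         # runs in parallel with the container process
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:                      # called before SIGTERM is sent
          path: /shutdown             # hypothetical shutdown endpoint
          port: 8080
```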
Predictable Demands
Storage dependency
ConfigMap / Secret dependency
Resource dependency
cpu / memory requests and limits
pod priority
pod QoS best-effort, burstable, guaranteed
namespaced ResourceQuota
namespaced LimitRange
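A minimal sketch of declaring resource demands (names and values are illustrative): setting requests equal to limits yields Guaranteed QoS, requests below limits yields Burstable, and neither yields BestEffort.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demands-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0  # placeholder image
    resources:
      requests:             # used by the scheduler for placement
        cpu: 250m
        memory: 128Mi
      limits:               # enforced at runtime; equal to requests => Guaranteed QoS
        cpu: 250m
        memory: 128Mi
```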
Declarative Deployment
RollingUpdate
- built-in zero-downtime strategy; maxSurge and maxUnavailable control rollout speed (they cannot both be 0)
Fixed
- use the Recreate strategy; not zero downtime
Blue-Green
- not built-in: fully populate a second ReplicaSet with a different label, then switch the Service selector when ready
Canary
- not built-in: run a single-instance second ReplicaSet, serve some traffic to the canary, then fully populate the second ReplicaSet and destroy the old one
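The rolling and fixed strategies above are both expressed in the Deployment spec; a sketch with illustrative names and counts (maxUnavailable: 0 keeps full capacity, at the cost of surge pods):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo      # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate   # or Recreate for the fixed strategy
    rollingUpdate:
      maxSurge: 1         # extra pods allowed above replicas during rollout
      maxUnavailable: 0   # never drop below full capacity
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
```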
Automated Placement
Node resources
Storage dependency
HostPort
NodeSelector labels
NodeAffinity rules
podAffinity and podAntiAffinity
Node taint and toleration
Kubernetes optional descheduler Job
PodDisruptionBudget to control voluntary disruptions
Custom scheduler
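Several of the placement influences above can appear in one pod spec; a sketch where the labels, zone, and taint key are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo    # hypothetical name
spec:
  nodeSelector:
    disktype: ssd         # simple required label match
  affinity:
    nodeAffinity:         # richer rules than nodeSelector
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["eu-west-1a"]   # illustrative zone
  tolerations:            # allow scheduling onto tainted nodes
  - key: dedicated        # hypothetical taint key
    operator: Equal
    value: batch
    effect: NoSchedule
  containers:
  - name: app
    image: example/app:1.0   # placeholder image
```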
Behavioural Patterns
Job
- finite compute task that is meant to end in success
CronJob
- scheduled repeating Job
DaemonSet
- one pod per node, good for log collectors, metric exports, kube-proxy etc.
Singleton
- Deployment with replicas=1 and an appropriate update strategy, OR a strict singleton with a StatefulSet
StatefulSet
- TBD
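A CronJob wraps a Job template with a schedule; a sketch with placeholder names (batch/v1 for CronJob assumes Kubernetes v1.21+):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-demo            # hypothetical name
spec:
  schedule: "0 3 * * *"      # run the Job daily at 03:00
  jobTemplate:
    spec:
      completions: 1         # finite task meant to end in success
      backoffLimit: 3        # retries before the Job is marked failed
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: example/task:1.0   # placeholder image
```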
Service Discovery and Ingress
Inside Cluster
Service object provides a stable cluster IP and routes to backing pods via label matching
Pods automatically receive all Service addresses as environment variables
BUT that doesn't cover Services created after the pod
Services get a cluster DNS entry e.g.
my-service.default.svc.cluster.local
sessionAffinity: ClientIP for request stickiness (source-IP based, not app-layer cookie-based stickiness)
Endpoints resource tracks the pods matching a Service
A pod's cluster IP alone is not enough - it changes on restart and is hard for other pods to discover
Headless Service (clusterIP: None) gets no Service IP; cluster DNS instead returns an entry per matching pod
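A headless Service is an ordinary Service with clusterIP set to None; a sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # matches the DNS example above
spec:
  clusterIP: None    # headless: DNS resolves to the pod endpoints directly
  selector:
    app: demo        # hypothetical pod label
  ports:
  - port: 8080
```

Omitting `clusterIP: None` gives the regular stable-cluster-IP behaviour instead.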
Outside Cluster
Service NodePort - requires firewall rules, client-side load-balancing / retry, no single entrypoint etc.
Service LoadBalancer - provisions a cloud LB which takes care of load balancing and node health, with no need for complex firewall rules etc.; still uses a NodePort for the LB to reach the nodes
Ingress - app-layer functions like path-based routing, TLS termination and wildcard domains, weighted routing etc.
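An Ingress sketch showing path-based routing and TLS termination; the host, secret, and backend Service names are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo             # hypothetical name
spec:
  tls:
  - hosts: ["example.com"]
    secretName: example-tls      # hypothetical TLS secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /api               # path-based routing
        pathType: Prefix
        backend:
          service:
            name: my-service     # hypothetical backend Service
            port:
              number: 8080
```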
Initialisation
init containers
run in sequence before app containers, for cleanly separated startup tasks; should be idempotent as a pod restart reruns them
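An init-container sketch: a placeholder task copies config into a shared volume before the app container starts (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo          # hypothetical name
spec:
  initContainers:          # run in order; each must succeed before app containers start
  - name: fetch-config
    image: example/fetcher:1.0                      # placeholder image
    command: ["sh", "-c", "cp /defaults/* /work"]   # idempotent: safe to rerun on pod restart
    volumeMounts:
    - name: work
      mountPath: /work
  containers:
  - name: app
    image: example/app:1.0                          # placeholder image
    volumeMounts:
    - name: work
      mountPath: /config
  volumes:
  - name: work
    emptyDir: {}
```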
Admission controllers
built into the API server and enabled via apiserver flags; mutate/validate objects on admission
e.g. LimitRanger and ResourceQuota controllers
Admission webhooks
for flexible runtime and external admission mutate/validate
PodPreset
for injecting repeated sections of pod specs at creation time (alpha feature, removed in Kubernetes v1.20)