KUBERNETES
ARCHITECTURE

Client/Outside

kubectl / APIs / Dashboard

etcd

kube Api Server

Controller

Scheduler

Kubernetes Master
The master node manages the Kubernetes cluster, and it is the entry point for all the administrative tasks. You can talk to the master node via the CLI, GUI, or API.

Worker 1

Master and Worker Node
Example > Minikube
Minikube uses a VM driver such as VirtualBox to run the Kubernetes node in a VM

Pod 1
Containers

Pod 2
Containers

kube proxy

kubelet

The API Server performs all the administrative tasks on the master node. A user sends REST commands to the API server, which validates the requests, then processes and executes them.

The non-terminating control loops that regulate the state of the Kubernetes cluster are managed by the Controller Manager. Each of these control loops knows the desired state of the objects it manages and watches their current state through the API server.

saves the resulting state of the cluster as a distributed key-value store.

schedules the work to different worker nodes. It has the resource usage information for each worker node. Then the scheduler schedules the work in terms of pods and services.

A worker node is a virtual or physical server that runs the applications and is controlled by the master node.

The pods are scheduled onto the worker nodes, which have the necessary tools to run and connect them.
By default, a Pod is only accessible by its internal IP address within the Kubernetes cluster.

It listens to the API server for Service endpoint creation or deletion. For each Service endpoint, kube-proxy sets up the routes so that the endpoint can be reached.

The kubelet is basically an agent that runs on each worker node and communicates with the master node, making sure the containers are running in a pod.

A vulnerability in the configuration of a pod can let an attacker get into a container and probe for weaknesses in the network, processes, access controls, or file system.

Kubelets expose HTTPS endpoints that grant powerful control over the node and its containers, and by default kubelets allow unauthenticated access to this API.

Apply access control policies to your Kube API.
Define what type of permissions should be given to users through RBAC (see the sketch below).
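
For example, a minimal RBAC sketch (the role name pod-reader and user jane are hypothetical) that grants read-only access to Pods in the default namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader              # hypothetical role name
rules:
- apiGroups: [""]               # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                    # hypothetical user, authenticated by the API server
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io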

kubectl

Services

kubeDNS

Kubernetes-dashboard

running on the same address as the kubernetes master
has a proxy namespace specifier (kube-dns)
The domain name service (DNS) is used for name resolution and service discovery (e.g. at the Pod or Service level) within the cluster.

runs on the same address as the master but with a fully qualified namespace (kubernetes-dashboard) to uniquely distinguish it from the other services currently running.

Minikube creates a VM on your local machine and deploys a simple cluster containing only one node.
The minikube CLI provides basic bootstrapping operations such as: start, stop, status and delete.

commands

minikube start
minikube dashboard
minikube dashboard --url

getting information

On minikube, the LoadBalancer type (which is the type that we need to expose a container as a kubernetes service) makes the Service accessible through the minikube service command.
minikube service podName

ways of deployments
deploying pods

Deployments

Managing

Creating

Updating

One way to create a Deployment from a .yaml manifest (an example is sketched below) is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument.

LifeCycle

Complete state

Failed state

Progressing state

kubectl apply -f dep-file.yaml
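
As an illustration, a minimal Deployment manifest that could be saved as dep-file.yaml; the name, labels, image and replica count below are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name
  labels:
    app: nginx
spec:
  replicas: 2                   # desired state: two pod replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25       # illustrative image and tag
        ports:
        - containerPort: 80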

Statefulset

Daemonsets

It is a Kubernetes controller that matches the current state of your cluster to the desired state mentioned in the Deployment manifest

Persistence

Used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

It doesn’t create a ReplicaSet; rather, it creates the Pods itself with a unique naming convention. e.g. If you create a StatefulSet named counter, it will create a pod named counter-0, and for multiple replicas of a StatefulSet their names increment like counter-0, counter-1, counter-2, etc.
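
A sketch of such a StatefulSet for the counter example above (the image, command and storage size are illustrative assumptions); it creates counter-0, counter-1 and counter-2 in order, each with its own PersistentVolumeClaim:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: counter
spec:
  serviceName: counter            # headless Service (assumed to exist) giving each pod a stable DNS name
  replicas: 3                     # creates counter-0, counter-1, counter-2 in order
  selector:
    matchLabels:
      app: counter
  template:
    metadata:
      labels:
        app: counter
    spec:
      containers:
      - name: counter
        image: busybox            # illustrative image
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:           # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi            # hypothetical size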

Persistence

A DaemonSet is a controller that ensures that a copy of the pod runs on all the nodes of the cluster. If a node is added to or removed from the cluster, the DaemonSet automatically adds or deletes the pod.

typical use cases

monitoring exporter

logs collection daemon
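
A minimal DaemonSet sketch for the log-collection use case (the name, placeholder image and host path are illustrative assumptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector               # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-agent
        image: busybox              # placeholder for a real log-collection agent image
        command: ["sh", "-c", "while true; do ls /var/log; sleep 60; done"]   # stand-in for actual log shipping
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log            # node's log directory mounted into every pod of the DaemonSet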

Persistence

use case

Useful for databases, especially when we need highly available databases in production: we create a cluster of database replicas with one being the primary replica and the others being secondary replicas. The primary is responsible for read/write operations and the secondaries for read-only operations, and they keep syncing data with the primary.

use case

Usually used for stateless applications. However, you can save the state of a Deployment by attaching a Persistent Volume to it and make it stateful, but all the pods of the Deployment will share the same Volume, and the data will be the same across all of them.

Create a service
By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To make the pod/container accessible from outside the Kubernetes virtual network, we have to expose it as a Kubernetes Service.

kubectl expose deployment hello-node --type=LoadBalancer --port=8080
type=LoadBalancer indicates that we want to expose our service outside of the cluster

kubectl cluster-info


create a deployment that manages a pod
example >
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl get pods
kubectl get events
kubectl config view


retrieving yaml files from deployments/statefulsets..
example
kubectl get statefulset
kubectl get statefulset statefulsetName -o yaml > file.yaml
kubectl get configType configName -o yaml > file.yaml

ways of configuration

using Helm chart

most efficient, as the chart does the initial setup, and the operator will then manage the running Prometheus setup
out-of-the-box deployment; no extra config is needed

helm install prometheus/operator

Deps

Deployment

Daemonset

node_exporter, being a DaemonSet,
is a component which runs on every worker node of the Kubernetes cluster

using operator

using an operator (a Kubernetes operator)
we can think of it as a manager of the Prometheus components
for example, just as StatefulSets and Deployments manage their pod replicas (restarting them when they die, making them accessible, etc.), in the same way the operator manages the combination of StatefulSets and Deployments that make up Prometheus as one unit

we just need to find a Prometheus operator and deploy it as one unit

Deployment

creating all configs in yaml files ourselves

  • prometheus
  • alertmanager
  • grafana
    ConfigMaps and Secrets that we need, in the right order because of dependencies
    not so many use cases, and we need to know what we are doing

in a production cluster setup we usually have 2 master nodes and many workers
but minikube is a one-node cluster
both master and worker processes run on the same node, which represents both the master and the worker node

Worker Processes

Worker 2

Master Processes

Api Server

To interact with minikube and create Kubernetes components (resources, objects), we use kubectl, and every request goes through the main entry point, which is the API server.

The main entry point into the kubernetes cluster

Interaction

Api

UI

kubectl

Worker Nodes

Overlay Network

container 1
:6666

container 2
:7777 of the pod localhost

container 1
:5555

container 2
:8888

localhost | pod IP 10.0.30.40

localhost | pod IP 10.0.30.50

Pod Network

a container inherits the IP given to the pod it belongs to

  • containers within the same pod communicate with each other through localhost
  • containers across different pods communicate using the pod IPs they inherited, over the pod network
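
A small sketch of the first rule (pod name, images and ports are hypothetical): both containers share the pod's network namespace, so the client can reach the web server on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25             # illustrative; serves on :80 inside the pod
  - name: client
    image: busybox                # illustrative; reaches its sibling container over localhost
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]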

LoadBalancer service

Service

pod

Pod

is the standard way to expose a service to the internet. It will provision a load balancer, give us an IP address, and forward all traffic to this service
spec.type: LoadBalancer
spec.ports: - name: http / port: 80 / targetPort: 8080 / protocol: TCP
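
Expanded into a full manifest, that shorthand might look like the following sketch (the service name and selector are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-service               # hypothetical name
spec:
  type: LoadBalancer             # provisions an external load balancer
  selector:
    app: my-app                  # pods that receive the traffic
  ports:
  - name: http
    port: 80                     # port exposed by the load balancer
    targetPort: 8080             # port the container listens on
    protocol: TCP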

Ingress

/service1

/service3

/service2

Kubernetes Accounts

User Account

Service Account

To authenticate users to the given Kubernetes cluster. User accounts are authenticated by the API server.

To authenticate machine-level processes running in pods so they can access the Kubernetes cluster; service accounts are also authenticated by the API server.
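
A minimal sketch of a service account and a pod running under it (the names and image are hypothetical):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                    # hypothetical service account
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app-sa      # processes in this pod authenticate to the API server as app-sa
  containers:
  - name: app
    image: nginx:1.25             # illustrative image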


sidecar container

These are containers that run alongside the main container in the pod. The sidecar pattern extends and enhances the functionality of the existing containers without changing them.
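
A sketch of the pattern (names, images and paths are illustrative): the main container writes a log file and the sidecar tails it from a shared volume, without changing the main container:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar          # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                  # scratch volume shared by both containers
  containers:
  - name: app                     # main container writes its log file here
    image: busybox
    command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-sidecar             # sidecar reads the same file without modifying the app
    image: busybox
    command: ["sh", "-c", "touch /var/log/app.log; tail -f /var/log/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log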


The CA is the trusted root for all certificates inside the cluster.
It allows components to authenticate each other.
All cluster certificates are signed by the CA.

etcd has its own certificate.

apiserver cert

kubelet cert.

scheduler cert.

api server

Controller

Scheduler

Etcd

Node

kube-proxy

kubelet

Pod

Pod Net Ns

Veth

Host Net Ns

Pod

Pod Net Ns

Veth

Veth

Veth

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Kubelet

CNI

Kube-Proxy

CRI

Master Nodes

Worker Node

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Kubelet

Kube-Proxy

clusterIP
(iptables)

CNI

CRI

pods

Client

ClusterIP
(iptables)

pods

Client

Service

pods

service

pods

Client

LB

Gateway Proxy

Sidecar
proxy

Sidecar
proxy

Mixer

ControlPlane

Pilot

Galley

Citadel

DataPlane

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Kubelet

CNI

Kube-Proxy

clusterIP
(iptables)

pods

ClusterIP
(iptables)

pods

CRI

ClusterIP
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
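
A minimal ClusterIP sketch (name, selector and ports are hypothetical; the type line can even be omitted because ClusterIP is the default):

apiVersion: v1
kind: Service
metadata:
  name: internal-api              # hypothetical name
spec:
  type: ClusterIP                 # default type; reachable only inside the cluster
  selector:
    app: internal-api
  ports:
  - port: 80                      # cluster-internal port
    targetPort: 8080              # container port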

NodePort

Client

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Kubelet

CNI

Kube-Proxy

clusterIP
(iptables)

pods

ClusterIP
(iptables)

pods

CRI

LoadBalancer

Client

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Kubelet

CNI

Kube-Proxy

clusterIP
(iptables)

pods

ClusterIP
(iptables)

pods

CRI

Client

Ingress

  • paths

Unlike all the above examples, Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entrypoint into your cluster.
Ingress is probably the most powerful way to expose your services, but can also be the most complicated. There are many types of Ingress controllers, from the Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins for Ingress controllers, like cert-manager, that can automatically provision SSL certificates for your services.
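
A hedged sketch of an Ingress that fans traffic out by path to two backend services, matching the /service1 and /service2 paths in the diagram (the host and service names are assumptions, and an Ingress controller must already be installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # hypothetical name
spec:
  rules:
  - host: example.com             # hypothetical host
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1        # routed to the service1 backend
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2        # routed to the service2 backend
            port:
              number: 80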

A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.

A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
Basically, a NodePort service has two differences from a normal “ClusterIP” service. First, the type is “NodePort.” There is also an additional port, called the nodePort, that specifies which port to open on the nodes. If you don’t specify this port, it will pick a random port. Most of the time you should let Kubernetes choose the port.
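
A minimal NodePort sketch (name, selector, ports and the explicit nodePort value are hypothetical; leaving nodePort out lets Kubernetes pick one):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service       # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80                      # cluster-internal port
    targetPort: 8080              # container port
    nodePort: 30036               # opened on every node; must be in the 30000-32767 range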

application

application

Gateway Proxy

Sidecar
proxy

Sidecar
proxy

Service

Service

ControlPlane

Pilot

Galley

Citadel

Mixer

Mixer

Gateway
Config

Virtual
Service

Destination
Rule

istio-ingressgateway

customize how traffic entering the cluster is routed to our services. By default, Istio deploys a Gateway Proxy called istio-ingressgateway in the istio-system namespace

The Gateway Configuration configures the Gateway Proxy, specifying which ports are exposed and which protocols can be used by ingress traffic. The Gateway Configuration operates only on properties of OSI layers 4–6. You can’t configure application-layer (7) routing rules here (this is what Virtual Services are for).
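
For example, a Gateway Configuration sketch that binds the default istio-ingressgateway to HTTP port 80 (the gateway name and host are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway                # hypothetical name
spec:
  selector:
    istio: ingressgateway         # targets the default istio-ingressgateway proxy
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP              # only L4-L6 properties: port and protocol
    hosts:
    - "example.com"               # hypothetical host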

Layer 7

Layer 4 - 7
Istio

defines a set of request routing rules that can be used to distribute traffic to different destinations in the service mesh. Specifically, Virtual Services define application-layer traffic routing rules.

VirtualService resources are not standalone services running on their own set of pods, instead they are simply configuration that is applied to the proxies in the mesh that actually accept and send requests. Virtual Services can be applied either to the Gateway Proxy, or to the sidecar Envoy proxies that run alongside the services for your application that are running in the mesh.
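
A sketch of a Virtual Service attached to the hypothetical gateway above; it splits traffic between two subsets of a service (names, subsets and weights are assumptions):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-virtualservice         # hypothetical name
spec:
  hosts:
  - "example.com"
  gateways:
  - my-gateway                    # applied to the Gateway Proxy, not to pods of its own
  http:
  - route:
    - destination:
        host: my-service          # hypothetical Kubernetes Service name
        subset: v1
      weight: 90
    - destination:
        host: my-service
        subset: v2
      weight: 10                  # application-layer (L7) routing rule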

Destination Rules define routing policies applied to traffic that has already been routed to a particular service.
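
A matching Destination Rule sketch that defines the v1/v2 subsets used above and a load-balancing policy for traffic already routed to my-service (names and labels are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-destinationrule        # hypothetical name
spec:
  host: my-service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN         # policy applied after routing to my-service
  subsets:
  - name: v1
    labels:
      version: v1                 # pods labeled version=v1
  - name: v2
    labels:
      version: v2                 # pods labeled version=v2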

Ingress Proxy

LB

Client

ControlPlane

Pilot

Galley

Citadel

pod >istiod

Sidecar
proxy

Sidecar
proxy

Activator

Mixer

Autoscaler

Service Ns istio-enabled

knative-serving Ns istio-enabled

istio-system

istio-system Ns

istio-system

Sidecar
proxy

Queue

application

service Ns istio-enabled

istiod needs to get the ready pods' endpoints and then push configuration to their istio-proxy sidecars

Activator cannot access user pods by IP directly when mesh is enabled in strict mode.

istio ingress gateway

ISTIO

KNATIVE / ISTIO

KNATIVE / ISTIO
STRICT MODE

Cost
too much delay
initContainer(5s) + SidecarProbingInfo(2s)

user

istio Ingress gateway

device

Route

Virtual Service

Revision (knService)

Service

Route / Configuration
Revisions

KNATIVE

eventing

serving

Patterns

Knative event source >sink

knative service

source to sink

Channel and subscription

knative event source >sink

Channel Subscriptions >

Knative service B

Knative service A

knative service event producer

Broker < Subscriptions

Knative Eventing Trigger

Knative Eventing Trigger

Knative Service

Knative Service

Brokers And Triggers

basic pattern

K8s Dep

K8s HPA

K8s service

Basic Pattern
without kn abstraction

Revisions

Kn service

K8s Dep

Kn KPA

K8s service

K8s ingress

istio
virtual service

Kn service

revision 1

K8s Dep 1

Kn KPA 1

K8s service 1

ingress gateway

revision 2

K8s Dep 2

Kn KPA 2

K8s service 2

updated
ingress gateway

istio ingress gateway

revision 2
k8s service

revision 1
k8s service

pods

pods

kn service

kn
route

kn
configuration

kn revisions

workflow
once we create a kn service

KPA

The serverless behavior of Knative is achieved by autoscaling pods down to zero and scaling up with the traffic coming to the application, so it is worth having a clear idea about the Knative Pod Autoscaler (KPA).

serving

Autoscaling

KPA

In the early stages of Knative it had a per-revision autoscaler, but in the latest versions this was replaced by a single shared autoscaler. By default this is the Knative Pod Autoscaler (KPA), which provides more efficient request-based autoscaling capabilities.

Autoscaling is the ability of the Knative Service to scale out its pods based on inbound HTTP traffic. The autoscaling feature of Knative is managed by the HPA (the default autoscaler built into Kubernetes) and the KPA.
Knative autoscaling relies on three important metrics: concurrency, requests per second, and CPU. The KPA can be thought of as an extended version of the HPA, with a few tweaks to the default HPA algorithms to make it more suited to handle the more dynamic and load-driven Knative scaling requirements.
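
A hedged sketch of a Knative Service that opts into the KPA with a concurrency target and scale-to-zero (the image and annotation values are illustrative assumptions):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                     # hypothetical name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev   # use the KPA
        autoscaling.knative.dev/target: "10"                          # aim for ~10 concurrent requests per pod
        autoscaling.knative.dev/min-scale: "0"                        # allow scaling down to zero
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go                   # illustrative sample image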

kn service

configuration

route

ingress gateway

revision

kubectl

Api Server

Etcd

Controller Manager

Scheduler

Worker Node 1

Worker Node 2

Kubelet

Kubelet

kube-proxy

kube-proxy

pods
containers

pods
containers

advanced pattern
Kn service

K8s Dep

Kn KPA

K8s service

istio
virtual service

Camel K operator

Kamelet

Kameletbinding

Integration

Integration Kit

Build

KnativeService / Deployment

pods

System B

System A

Kafka Broker

kubectl

Api Server
Api request > Authentication > Authorization > Admission Control > etcd

Etcd

Controller Manager

Scheduler

Kubelet

CNI

Kube-Proxy

CRI