Certified Kubernetes Security Specialist (CKS)
Cluster Setup 10%
1. Network policies
try hands-on at
https://editor.cilium.io/?id=5juQcByD6JWmwMXz
default deny
front-end to back-end
backend to database
ingress/egress
from is used for ingress rules, to for egress rules
traffic types
i) podSelector (labels)
ii) namespaceSelector (labels)
iii) ipBlock (CIDR)
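A minimal sketch of a default-deny policy plus a front-end to back-end allow rule (the role labels and port 80 are assumptions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}          # selects all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  podSelector:
    matchLabels:
      role: backend        # assumed label on the back-end pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend   # assumed label on the front-end pods
    ports:
    - port: 80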
2. CIS Benchmarks
log in to the CIS website and download the benchmark documents
Kube-bench
run on master
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20
run on worker
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest node --version 1.20
3. Ingress Objects
Secure ingress
i) openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
Create new secret tls
ii) kubectl create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key
modify the Ingress object with the following tls addition (see the docs and the example below):
https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
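A minimal sketch of the tls section, assuming the host example.com and a backend Service named my-service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  tls:
  - hosts:
    - example.com            # assumed hostname
    secretName: tls-secret   # the TLS secret created above
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service # assumed backend Service
            port:
              number: 80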
K8s Ingress Docs
https://kubernetes.io/docs/concepts/services-networking/ingress
Install NGINX Ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml
4. Install GUI element and secure it
i) Installation
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
ii) insecure access
a) remove auto-generate-certificates
b) add insecure-port
c) remove/add liveness probes
d) change the Service type to NodePort
e) change targetPort to 9090
f) access via NodePort and Node public IP
iii) secure access
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md
5. Node Metadata protection
1) deny egress traffic to 169.254.169.254 for all pods
2) allow access only for pods whose labels match a podSelector
refer
https://github.com/killer-sh/cks-course-environment/tree/master/course-content/cluster-setup/protect-node-metadata
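A minimal sketch of the deny policy (applies to all pods; an allow policy for labelled pods would use a matching podSelector instead):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
spec:
  podSelector: {}            # all pods
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # cloud metadata endpoint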
6. Verify platform binaries
run shasum -a 512 (or sha512sum) on the downloaded binaries and compare against the published checksums
Cluster Hardening 15 %
1. Roles and RoleBindings
Role and RoleBinding for namespaced access restriction
ClusterRole and ClusterRoleBinding for cluster-wide access
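A minimal sketch of a namespaced Role and RoleBinding (the names, namespace and subject are assumptions):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                         # assumed user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io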
2. Service accounts
set automountServiceAccountToken: false
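A minimal sketch; the field can be set on the ServiceAccount or in the Pod spec (names are assumptions):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restricted-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: restricted-sa
  automountServiceAccountToken: false   # can also be set per pod
  containers:
  - name: app
    image: nginx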
3. Cluster upgrade
Master
drain the master node (--ignore-daemonsets)
apt-get update
kubeadm upgrade plan
kubeadm upgrade apply <version mentioned>
apt-get install kubectl and kubelet at the given version
uncordon master node
Node
drain the node (--ignore-daemonsets)
apt-get update
apt-get install kubelet and kubectl at the versions mentioned, then uncordon the node
Minimize Microservice Vulnerability 20%
container runtimes and sandboxing
what is sandbox?
Playground when implementing an API
Simulated testing environment
Development server
Security layer to reduce attack surface
gVisor (from Google)
user-space kernel, used with containerd
Another layer of separation
Not hypervisor/VM based
Simulates kernel syscalls with limited functionality:
runs in userspace, separated from the Linux kernel
runtime is called runsc
Each container gets its own gVisor instance
Kata Containers
additional isolation with a lightweight VM and individual kernel
strong separation layer
runs every container in its own private VM (hypervisor based)
crictl
CLI for CRI-compatible container runtimes
creating and using different runtimes
1) create an object of kind RuntimeClass with the runtime's name and handler, e.g. name: gvisor, handler: runsc
2) specify runtimeClassName in the Pod spec (see the example below)
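A minimal sketch of a RuntimeClass and a pod that uses it:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc             # the node must have runsc installed
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx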
Admission controller
How to enable
add the plugin to --enable-admission-plugins=<Plugin-Name> in /etc/kubernetes/manifests/kube-apiserver.yaml
create cluster resource for required admission controller
create ClusterRole/Role
Create rolebinding
commonly used are
AlwaysPullImages
ImagePolicyWebhook
PodSecurityPolicy
DefaultStorageClass / DefaultIngressClass
NamespaceAutoProvision
NodeRestriction
MutatingAdmissionWebhook
can change request
ValidatingAdmissionWebhook
validate request
Security context
add securityContext at the Pod spec or container spec level
container spec overrides pod spec
runAsUser / runAsGroup / fsGroup
runAsNonRoot: true/false
capabilities add/drop
NET_ADMIN
SYS_TIME
allowPrivilegeEscalation --> whether a process inside the container can gain more privileges than its parent
seccompProfile
privileged --> container user 0 (root) is mapped to host user 0 (root)
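A minimal sketch combining the fields above (the user/group IDs and capabilities are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:              # pod level
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:            # container level, overrides pod level
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        add: ["NET_ADMIN"]
        drop: ["SYS_TIME"]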
OPA - Open Policy Agent
extends Kubernetes with the ability to write custom policies, enforced via an admission controller
easy implementation of policies (written in Rego)
does not know the concept of Pods, Deployments, etc.
OPA gatekeeper
use OPA by creating CRDs in your cluster (example Constraint below)
ConstraintTemplate --> defines the rules (templates.gatekeeper.sh/v1beta1)
Constraints --> instantiate the rules given by a ConstraintTemplate (constraints.gatekeeper.sh/v1beta1)
kube-mgmt
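A minimal Constraint sketch, assuming a ConstraintTemplate named K8sRequiredLabels (from the Gatekeeper examples) is already installed:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["owner"]      # every Namespace must carry an "owner" label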
Mutual TLS (mTLS)
Pod security policy
cluster-level resource that controls the security conditions a pod must run under
implemented as an optional Admission controller
enabling this we can control
running privilege containers
usage of host network/namespaces/volumes/ports
usage of fsGroup
readOnlyRootFilesystem
Linux capabilities
seccomp profiles
seLinux
How to enable
add PodSecurityPolicy to --enable-admission-plugins in /etc/kubernetes/manifests/kube-apiserver.yaml
create a cluster-wide PodSecurityPolicy resource (example below)
create ClusterRole/Role
Create rolebinding
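A minimal PodSecurityPolicy sketch (the field values are illustrative):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  hostNetwork: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir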
secrets management
as plain text literal
as env file
as environment variables
creating secret volume
etcd encryption
create an EncryptionConfiguration with the given key (example below)
provide the path of the encryption config file with --encryption-provider-config inside the kube-apiserver manifest
create a volume and volumeMount for it in the kube-apiserver pod and wait for the API server to restart
existing secrets will not be encrypted, but new ones will be
to encrypt all existing secrets run: kubectl get secrets -A -o yaml | kubectl replace -f -
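A minimal EncryptionConfiguration sketch (the key is a placeholder; one can be generated with head -c 32 /dev/urandom | base64):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder
  - identity: {}                               # still allows reading unencrypted data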
Supply chain security 20 %
1. minimize base image foot print
i) create multi stage build
ii) use distroless base images/alpine
iii) create slim/minimal image
iv) find official minimal image with required package
v) remove the package manager, wget, curl, editors and shell from the final image; keep only the minimal binaries required to run the application
vi) maintain different images for different environments
production - lean
development - debug
2. Image security
i) use specific package versions
ii) don't run as root
iii) make filesystem readonly
iv) remove shell access
3. static analysis
i) OPA conftest
docker run --rm -v $(pwd):/project openpolicyagent/conftest test deploy.yaml
a) e.g. scan deploy.yaml to check that containers run as a non-root user (securityContext)
b) scan the Dockerfile for violations, e.g.
not to use ubuntu as the base image
not to use apt, shell, netstat, wget or curl commands in Dockerfile RUN instructions
ii) kubesec
a) kubectl run nginx --image nginx --dry-run=client -o yaml > pod.yaml
b) docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < pod.yaml
c) gives a score by scanning the pod definition file
d) can be run as
docker image
binary
http server
4. secure supply chain
Image policy webhook
i) add ImagePolicyWebhook inside --enable-admission-plugins
ii) create an AdmissionConfiguration containing an ImagePolicyWebhook entry and reference it with --admission-control-config-file (sketch below)
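A minimal sketch of that AdmissionConfiguration (the kubeconfig path and TTL values are assumptions):
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/kubeconfig   # points to the external image-policy webhook
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false      # reject images if the webhook is unreachable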
OPA
i) Install OPA Gatekeeper: kubectl create -f https://raw.githubusercontent.com/killer-sh/cks-course-environment/master/course-content/opa/gatekeeper.yaml
ii) add ConstraintTemplate
https://github.com/killer-sh/cks-course-environment/blob/master/course-content/supply-chain-security/secure-the-supply-chain/whitelist-registries/opa/k8strustedimages_template.yaml
5. Vulnerability scanning
Tools to be used
trivy
docker run ghcr.io/aquasecurity/trivy:latest image nginx:latest
alpine-based images give fewer issues
clair
Vulnerabilities can be inside our image and its dependencies
can be checked at build/deploy time, i.e. in the Dockerfile and Pod definition
and at runtime, i.e. via mutating webhooks and admission controllers
restrict using OPA/PSP
Best practice
add scanning job inside CI/CD pipeline
admission controller to scan image
own repository with pre-scanned images ready to go
re-scan
Monitoring, Logging and runtime security 20 %
Behavioural analytics
strace
/proc directory
contains information about processes and the kernel
contains virtual files that do not exist on disk
2. Falco as a tool
types of outputs used
i) standard output
ii) program output
iii) HTTP endpoint
iv) file output
rules files
i) default rules: /etc/falco/falco_rules.yaml
ii) override using /etc/falco/falco_rules.local.yaml
a rule contains (see the sketch below)
rules
description
condition
output
priority
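A minimal rule sketch for falco_rules.local.yaml (the rule name and condition are illustrative):
- rule: shell_in_container
  desc: detect a shell spawned inside a container
  condition: container.id != host and proc.name = bash
  output: "shell in container (user=%user.name container=%container.id)"
  priority: WARNING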
hot reload can be done using
i) find the PID of falco inside /var/run/falco.pid
ii) send SIGHUP: kill -1 <falco PID>
3. Deep analytics
4. Immutability of container at runtime
i) make the root filesystem read-only using readOnlyRootFilesystem: true inside securityContext
ii) create emptyDir: {} volume mounts only for the directories where the pod needs to write data (see the pod example below)
privileged: false
use pod security policies
seLinux
runAsUser
fsGroup
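A minimal sketch of an immutable nginx pod (the writable paths are assumptions for nginx):
apiVersion: v1
kind: Pod
metadata:
  name: immutable-pod
spec:
  containers:
  - name: web
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true
      privileged: false
    volumeMounts:
    - name: cache
      mountPath: /var/cache/nginx   # nginx needs to write here
    - name: run
      mountPath: /var/run
  volumes:
  - name: cache
    emptyDir: {}
  - name: run
    emptyDir: {}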
5. Audit logs
refer:
https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
levels
None --> don't log events that match this rule.
Metadata --> log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
Request --> log event metadata and request body but not response body. This does not apply for non-resource requests.
RequestResponse --> log event metadata, request and response bodies. This does not apply for non-resource requests.
reference the policy file inside the kube-apiserver pod definition (--audit-policy-file, --audit-log-path)
we have to add rules inside a Policy object (audit.k8s.io/v1) - example below
namespaces
resources
group --> apiGroup
resources --> pods/configmaps etc.
verbs --> actions to match
levels
Stages
RequestReceived
ResponseStarted
ResponseComplete
Panic
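A minimal audit Policy sketch (the namespace and resource selection are assumptions; rules are matched top to bottom):
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
- level: RequestResponse       # full bodies for secrets in the prod namespace
  namespaces: ["prod"]
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata              # metadata only for everything else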
System Hardening 15 %
Reduce attack surface
Application
i) update application libraries
ii) update system libraries and kernel
iii) remove unnecessary packages
IAM
i) avoid root access
a) inside /etc/ssh/sshd_config set
PermitRootLogin no
PasswordAuthentication no
b) inside /etc/passwd set the root user's shell to nologin: root:x:0:0::/root:/usr/sbin/nologin
ii) create sudo users with proper permissions
c) create a group with sudo permission if there are many users
d) add the users to that group
Network
i) check open ports
ii) firewall and iptables settings
netstat -natp | grep 9090
Avoid external access
1) Disable open ports
For a Kubernetes installation using kubeadm, the required ports are listed in the Kubernetes docs
2) Firewall restrictions
To see all listening ports: netstat -an | grep -w LISTEN
Hardware devices Eg: Cisco ASA, Juniper NGFW, Fortinet
On system ufw/firewalld/iptables
i) Allow all outgoing -> ufw default allow outgoing
ii) Deny all incoming -> ufw default deny incoming
iii) Allow ssh only from jumpserver -> ufw allow from <IP-address of jump-server> to any port 22 proto TCP
iv) Allow http only from jumpserver -> ufw allow from <IP-address of jump-server> to any port 80 proto TCP
v) Allow http only from CIDR -> ufw allow from 172.17.100.0/28 to any port 80 proto TCP
vi) deny 8080 incoming -> ufw deny 8080
To enable --> ufw enable
to see status-> ufw status
to remove --> ufw delete deny 8080, ufw delete 5
Kernel hardening
loading module manually
insmod
modprobe
blacklist modules
i) adding entry inside /etc/modprobe.d/blacklist.conf
ii) e.g. blacklist sctp and blacklist dccp (one entry per line)
iii) shutdown -r now
list all loaded modules
lsmod
list all units of service --> systemctl list-units --all | grep nginx
remove service --> rm /lib/systemd/system/nginx.service
IAM restriction
types of users
user account Eg ubuntu, centos
system account Eg: ssh,mail
super user account
service account Eg nginx, http
For Cloud Eg AWS
For end users
create IAM policy and attach to user or group
for services in cloud
create IAM role and attach to service
Linux Syscalls and restriction/ Kernel Hardening
seccomp
for Docker
specify --security-opt seccomp=<path> during docker run
for Kubernetes
i) move profile to /var/lib/kubelet/seccomp/ directory
ii) add a securityContext with a seccompProfile of type Localhost and localhostProfile: <profile-name> (example below)
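A minimal sketch, assuming a profile file profiles/audit.json has been placed under /var/lib/kubelet/seccomp/:
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json   # path relative to /var/lib/kubelet/seccomp/
  containers:
  - name: app
    image: nginx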
apparmor
aa-status --> to check loaded profile
aa-genprof <system-tool> --> to create profile
aa-logprof --> update the profile based on the syslog entries generated for it
apparmor_parser /etc/apparmor.d/<tool-name> --> to load our own profile
to use for docker we have to use
--security-opt apparmor=<profile-name> during docker run
to use for kubernetes
add annotation container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name> (example below)
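A minimal sketch, assuming an AppArmor profile named my-profile is already loaded on the node:
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile
spec:
  containers:
  - name: app              # must match the container name used in the annotation
    image: nginx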