Google Cloud Networking - Coggle Diagram
Google Cloud Networking
VM
- external IP address is unknown to the VM; it is transparently mapped to the internal IP by the VPC
- can have as many NICs as vCPUs, up to 8
- DNS is resolved only for the network attached to nic0
- a VM has routes only to networks it is connected to; the default route points to the first NIC's gateway
- Bandwidth: 2 Gbit/s per vCPU
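A multi-NIC VM can be sketched with `gcloud` as below; the instance, network, subnet and zone names are placeholders, and each NIC must attach to a different VPC network:

```shell
# Hypothetical sketch: a VM with two NICs, one per VPC network.
# multi-nic-vm, vpc-a/vpc-b, subnet-a/subnet-b and the zone are placeholders.
gcloud compute instances create multi-nic-vm \
  --zone=us-central1-a \
  --network-interface=network=vpc-a,subnet=subnet-a \
  --network-interface=network=vpc-b,subnet=subnet-b
```

Note that the number of NICs is fixed at instance creation time.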
Hybrid Connectivity
INTERCONNECT - via private IPs
All within RFC1918 address space - to VPC
Partner Interconnect
SLA 99.9% or 99.99% - between Google and the partner
- connection to VPC via a service provider
- L2 or L3 connection via a supported partner
- BGP to the Cloud Router is established by the customer (L2) or by the partner (L3)
- from 50 Mbit/s up to 10 Gbit/s per connection
Dedicated Interconnect
SLA 99.9% or 99.99%
- L2 physical connection at a Google edge location (colocation facility)
- direct connection to the VPC network
- BGP has to be established with a Cloud Router
- 10 Gbit/s (max 8x10) or 100 Gbit/s (max 2x100) per link
- RFC 1918 addressing
- LACP link setup required even for 1 link - for future extension
- traffic is NOT encrypted
CDN Interconnect
- direct peering links from a third-party CDN provider to GCP (edge)
- optimize your CDN cache-population costs and use direct connectivity to selected CDN providers
- additional option alongside Partner or Dedicated Interconnect
- good for cloud workloads that frequently update content in a third-party CDN
Cloud VPN
Cloud VPN gateway is regional
- ESP, UDP/500, UDP/4500
- can be used for Private Google Access for on-premises hosts
- up to 3 Gbit/s per tunnel
- MTU 1460
- use route priorities (lower number = higher priority) to load-balance traffic (equal priorities) or run active/standby (primary and secondary)
- scale bandwidth by adding more VPN gateways or tunnels
- remote-access VPN is NOT supported
- SSL VPN NOT supported
IPSEC VPN
- 1.5 - 3 Gbit/s per tunnel
-- 1.5 Gbit/s - via public internet
-- 3 Gbit/s - via direct peering link
- RFC 1918 to VPC
- SLA
- Classic or HA
Classic VPN - target VPN gateway
- 99.9% SLA
- public IP and forwarding rules must be created
- static (policy-based or route-based) or dynamic routing is supported
- can have more than 1 tunnel
Static routing:
- Policy-based routing - IKEv1 (traffic selectors): define local (left-side) and remote (right-side) subnets/IP ranges. To change them, destroy the VPN and recreate it.
- Route-based routing - IKEv2: define only the remote IP range (used to create routes). To change it, destroy the VPN and recreate it.
HA VPN - VPN gateway
- 99.99% SLA
- public IPs from Google's pool; no forwarding rules required
- dynamic routing only - BGP
- 2 links
- Active/Active or Active/Passive options
Cloud router is required to exchange routes
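The gateway-plus-router pair can be sketched as below; the gateway, router, network, region and ASN values are all placeholders (tunnels and BGP sessions would be added afterwards):

```shell
# Sketch: HA VPN gateway plus the Cloud Router required for BGP.
# ha-vpn-gw, my-router, my-vpc, the region and the ASN are placeholders.
gcloud compute vpn-gateways create ha-vpn-gw \
  --network=my-vpc --region=us-central1
gcloud compute routers create my-router \
  --network=my-vpc --region=us-central1 --asn=65001
```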
Topologies
HA VPN gateway can peer to:
- 2 remote peer devices with 2 IPs
- 1 remote peer device with 2 IPs
- 1 remote peer device with 1 IP
- Transport Mode: only payload is encrypted
- Tunnel Mode: entire IP packet is encrypted and authenticated
IKEv1 - Policy-Based
- traffic selector required
- no dynamic routing
IKEv2 - Route-Based
- BGP and active routing
- ECMP - equal cost multipathing (LB)
- tunnel interface created - VPN always UP
PEERING - via public IPs
- REACH GOOGLE's EDGE SERVICES (GCP public IPs and GSuite/YT/APIs)
- no SLA
- reach services via public IP address
- via internet or Cloud VPN
- Layer 3
- to cut egress fees
- access only to Google public IPs
- GSuite / YouTube / Google Cloud API's
Carrier Peering
- Connection through Partner
- Capacity depends on partner
- no SLA
- cost of ISP
- BGP or static
Direct Peering
- 10 Gbit/s per link
- via physical router and BGP in co-location
- no SLA
- free of charge
- requires physical presence near a Google peering location
- connection to Google's POPs
Pricing and Billing
- Interconnect / Peering / VPN pricing is not covered here
- Ingress traffic is free of charge
We pay for
- Egress between zones in the same region - $0.01/GB
- Egress in the same zone via external IP - $0.01/GB
- External IP assigned but unused
- External static / ephemeral IP used by standard or preemptible VMs
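A back-of-envelope estimate from the rates above (the traffic volume is an assumed example, not a quote):

```shell
# Example: 500 GB of cross-zone egress within one region at $0.01/GB.
GB=500
COST=$(awk "BEGIN { printf \"%.2f\", $GB * 0.01 }")
echo "Estimated cross-zone egress cost: \$$COST"   # -> $5.00
```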
Service Tiers
Premium (cold potato routing)
- High performance routing
- Global SLA
- Global LoadBalancing / Cloud CDN / Cloud VPN/Router
- Traffic to / from GCP to the end user goes via Google's internal network
Standard (hot potato routing)
- Lower performance
- No global SLA
- only regional Load Balancers
- Traffic to / from GCP to the end user goes via the public internet
Security
Cloud NAT (ANDROMEDA)
- regional NAT gateway
- manual or auto public IP allocation
- associated with a single VPC
- only outbound NAT
- Cloud NAT is managed by Cloud Router but doesn't use BGP
- doesn't affect VMs with external IP addresses
- firewall rules and NAT - remember:
- FW rules are applied at instance level
- EGRESS - first FW, then NAT
- INGRESS - first NAT, then FW
Cloud NAT
- Outbound service for instances with internal IPs
- No proxy in path - an external IP with a slice of ports is assigned to the VM; the VM performs the NAT by itself
Automatic
- IP addresses are assigned dynamically as usage scales
Manual
- we control the IPs assigned by NAT
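A minimal Cloud NAT setup with automatic IP allocation can be sketched like this; the router, NAT, network and region names are placeholders:

```shell
# Sketch: regional Cloud NAT with automatic external IP allocation.
# nat-router, my-nat, my-vpc and the region are placeholders.
gcloud compute routers create nat-router \
  --network=my-vpc --region=us-central1
gcloud compute routers nats create my-nat \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```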
GKE
- NODES - use primary IP range
- PODS / Services - use secondary IP ranges
- Private Google Access (communication within GCP)
- Private DNS
- VPC flow logs (for SIEM)
Private Google Access
- creates VPC peering between your VPC and the Google VPC where the services are located
- allows instances and services to access Google APIs via internal IPs
Private Google Access for on-premises hosts
- to Google APIs for on premise hosts (requires Cloud VPN or Cloud Interconnect)
Private Google Access (can also be Restricted with FW)
VMs with only internal IPs can access public IPs of Google APIs and services
- enabled on a subnet-by-subnet basis
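The per-subnet toggle can be sketched as below; the subnet and region names are placeholders:

```shell
# Sketch: enable Private Google Access on one subnet.
# my-subnet and the region are placeholders.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 --enable-private-ip-google-access
```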
Security layers:
- HW: lowest level on prem or at DC
- Service deployment: apps are deployed on infrastructure
- Data storage: encryption at rest and secure deletion
- Internet communication: GFE, DDoS, Identity
- Operational security: safe software development and defense against social engineering
IAP access src: 35.235.240.0/20 - allow SSH
IAP is an alternative to bastion hosts
requires: IAP Secured Tunnel User role
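Pairing the IAP source range with an SSH firewall rule can be sketched like this; the rule, network, VM and zone names are placeholders:

```shell
# Sketch: allow SSH from the IAP range, then tunnel SSH through IAP.
# allow-iap-ssh, my-vpc, my-vm and the zone are placeholders.
gcloud compute firewall-rules create allow-iap-ssh \
  --network=my-vpc --direction=INGRESS --action=allow \
  --rules=tcp:22 --source-ranges=35.235.240.0/20
gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap
```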
Shared VPC
- Host project
- multiple Service projects
- projects must belong to the same organization
- central network resource and policy administration
- VPC to VPC communication via internal IPs
- consistent policy across projects
- a Shared VPC admin has to enable Shared VPC on the host project; existing and new subnets are then shared by default
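The host/service project wiring can be sketched as below; both project IDs are placeholders:

```shell
# Sketch: enable Shared VPC on the host project, then attach a service project.
# host-project-id and service-project-id are placeholders.
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
  --host-project=host-project-id
```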
VPC Flow Logs
- enabled per subnet
- flow of VMs or GKE clusters
- export via Pub/Sub into SIEM
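The per-subnet switch can be sketched as below; the subnet and region names are placeholders:

```shell
# Sketch: turn on VPC Flow Logs for one subnet.
# my-subnet and the region are placeholders.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 --enable-flow-logs
```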
Rules
- Create a single VPC per project to map VPC resource quotas to projects
- create a VPC network for each autonomous team, with central resources in a common host VPC
- create VPCs in different projects to have autonomous IAM policies and administration
- IAM controls are global for a project; IAM roles apply to all VPCs in that project
- VPC Service Controls
- allow/deny on a project level to API's based on origin of the requests
- BRIDGE: a perimeter bridge spanning a few projects
- ACCESS LEVELS - to allow some external access to perimeters (based on IP, user/service account, region, device policy, access-level dependency) - defined in Access Context Manager
IP Addressing
- 4 IP addresses reserved per subnet: network, default GW, second-to-last (reserved for future use) and broadcast
- IP aliases are often assigned for Pod subnets
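The reservation above can be checked with simple shell arithmetic:

```shell
# A subnet with prefix length P has 2^(32-P) addresses; GCP reserves 4 of them.
PREFIX=24
TOTAL=$(( 1 << (32 - PREFIX) ))
USABLE=$(( TOTAL - 4 ))
echo "/$PREFIX subnet: $TOTAL addresses, $USABLE usable"   # -> 256 addresses, 252 usable
```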
Internet access for VM
- external ip address
or
- cloud NAT/Proxy
- route to 0.0.0.0/0 in VPC
Legacy networks
- a single IP range stretches across regions - the entire network uses one subnet IP range (instances in different regions draw from the same range)
- we should avoid them
- instance IPs are not grouped by region
- internet traffic goes via a virtual switch
- many features (Cloud NAT / VPC peering / Shared VPC) are not supported for legacy networks
Quotas - soft limits
Limits - hard limits, cannot be exceeded
Load Balancers
REGIONAL
EXTERNAL TCP/UDP Network LB (L3/4) - MAGLEV
- VIP
- regional (HA with multiple zones)
- external
- client IP preserved
- IP based session affinity
- HTTP healthchecks
- high performance
- firewall can control traffic (we see the client IP)
- hashing mechanisms (src-dst IP/port/proto)
- no anycast - each region has its own client-facing VIP
INTERNAL TCP/UDP LB (L3/4) - ANDROMEDA
- VIP
- SDN
- client IP preserved
- no middle proxy
- no box inside, SDN defined - control plane config
- no Single Point of Failure
- for internal services, in front of VMs (MIG)- for VM - to VM traffic
- Forwarding rule -> regional Backends
- healthchecks
- premium only
INTERNAL HTTP(S) - ANDROMEDA/ENVOY
- client IP preserved
- access from VPC Peering, Cloud VPN, Cloud Interconnect (internally - RFC1918)
Limitations:
- components and backend must be in the host project
- client VMs can be in host or any other connected service projects
- premium only
GLOBAL
- anycast IPv4/6
- HTTPS/HTTP, TCP,SSL
EXTERNAL HTTP/S LB (L7 ) - implemented at GFE
- VIP anycast IP4/6 address (same IP announced by BGP routers, leading to closest path)
- premium only
- proxy based
- Global forwarding rule -> URL proxy -> backends
- client session is terminated (a new session goes to the backends). A normal FW can't be used; Cloud Armor is used instead
- cross regional failover (worldwide capacity)
- easy autoscaling (DDOS protection)
- establishes 2 sessions:
1: ext client -> LB public IP
2: LB -> backend
EXTERNAL SSL Proxy - implemented at GFE
- IPv4 and IPv6
- global (premium) or regional (standard)
- proxy based
- certificate management (self or by Google)
- VIP
- for TCP with SSL offload (strips HTTPS -> HTTP)
EXTERNAL TCP Proxy - implemented at GFE
- for non http traffic
- global (premium) or regional (standard)
- ipv4 and ipv6
- client IP might be preserved (PROXY protocol)
- proxy based
- forwarding rule
- backend services
- establishes 2 sessions
1: ext client -> LB public IP
2: LB -> to backend
- VIP
- for TCP without SSL offload
GFE - Google Front Ends are located at Google's edge POPs and act as target proxies; they have a global control plane
NEG - Network Endpoint Groups
[ object that specifies a group of backend endpoints or services ]
- can recognize PODs (container native LBs)
- can healthcheck PODs directly
- knows what's in the backends
- backends for some LBs
- Zonal (ip:port): VMs or GKE Pods
- Internet (FQDN: port or ip:port)
Hint: use alias IP for NEGs
Healthchecks sources:
- 35.191.0.0/16
- 130.211.0.0/22
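Backends must allow ingress from these ranges, which can be sketched as a firewall rule; the rule and network names are placeholders:

```shell
# Sketch: allow Google health-check sources to reach the backends.
# allow-health-checks and my-vpc are placeholders.
gcloud compute firewall-rules create allow-health-checks \
  --network=my-vpc --direction=INGRESS --action=allow \
  --rules=tcp --source-ranges=35.191.0.0/16,130.211.0.0/22
```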
Cloud CDN
Backends (called origin servers) can be:
- instance groups
- NEGs
- buckets
Cache operations:
- cache hits (object is in cache)
- cache miss (no data in cache)
- cache egress (sending cache data to client)
- cache fill (filling the cache with the data served to clients)
- cache hit ratio (share of requests served from cache)
- cache is fed when a client requests data
- cached objects are removed from the CDN by expiration or eviction (out of space)
- we can disable caching for files via headers (no-cache or private)
- Signed URLs are supported
Limits:
- caches only GET requests with status 200, 203, 206
- max file size: 5 TB (with byte-range requests) or 10 MB (by default)
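Cache behaviour can be observed from the client side; the URL below is a placeholder, and the presence of an `Age` header suggests the response came from cache:

```shell
# Sketch: inspect cache-related response headers (the URL is a placeholder).
curl -sI https://cdn.example.com/static/app.js | grep -iE '^(age|cache-control):'
```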
Cloud DNS
DNS peering (one-way relation)
- resolves a zone's records between different VPCs' DNS
Internal DNS
- works only within project or VPC
Cloud Armor
- layer 3, 4 and 7
- IPv4 and IPv6
- DDOS protection
- works with HTTP(s) LoadBalancers
- WAF (top 10 OWASP risks)
- XSS and SQL Injection protection
- security policies - filter ingress traffic
- in premium network tier: policy simulations (--preview)
Protects only backends behind an external HTTP(S) load balancer.
- Works at POP - edge of Google's network
- lower number - higher priority of rule
gcloud compute security-policies create siege-protect
gcloud compute security-policies rules create 1000 --security-policy=siege-protect --src-ip-ranges="34.120.77.179" --action="deny-403"
gcloud compute backend-services update my-backend --security-policy=siege-protect
Cloud Armor vs GCP Firewall
- Cloud Armor works at the edge (client traffic)
- GCP firewall works at the VPC level and controls proxied traffic (behind the load balancer)
Limitations
- doesn't work with CDN!
- does not protect Cloud Storage buckets with IP filtering
- doesn't work with Internal HTTP(s) LB
- limited number of rules for Kubernetes (only IP whitelist/blacklist)
KMS and Encryption
Options
- Default Google encryption
- Customer-managed Encryption keys (CMEK)
using Cloud KMS - the customer manages keys but keeps them in GCP Cloud KMS
- Customer-Supplied Encryption Keys (CSEK) (keys on premises), available for Cloud Storage and Compute Engine - the customer manages and keeps keys on premises
Encryption in Google Storage
1) data is stored in the cloud
2) data is chunked into pieces
3) each piece is encrypted with its own key
4) data chunks are distributed across Google's infrastructure
KEYS [KMS]
- Keyring contains many keys
- Keys might have versions
- Keyrings / keys / versions cannot be deleted, to avoid name conflicts
- Only value of key version can be deleted
Key states:
1) pending generation
2) enabled
3) disabled (can be enabled again)
4) scheduled for destruction
5) destroyed
Key types:
- Symmetric encryption (enc/dec)
- Asymmetric sign - private key used to sign, public to verify
- Asymmetric encryption (public to encrypt, private to decrypt)
Envelope encryption - KEK encrypts DEK
- DEK - data encryption key (encrypted DEK in data chunk metadata)
- KEK - key encryption key (stored in KMS)
Enable KMS service
gcloud services enable cloudkms.googleapis.com
gcloud kms keyrings create mykeyring --location global
gcloud kms keys create mykey --location global --keyring mykeyring --purpose encryption
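Using the key to wrap data can be sketched as below; the file names are placeholders:

```shell
# Sketch: encrypt and decrypt a file with the mykeyring/mykey key.
# secret.txt, secret.enc and secret.dec are placeholder file names.
gcloud kms encrypt --location=global --keyring=mykeyring --key=mykey \
  --plaintext-file=secret.txt --ciphertext-file=secret.enc
gcloud kms decrypt --location=global --keyring=mykeyring --key=mykey \
  --ciphertext-file=secret.enc --plaintext-file=secret.dec
```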
LOGGING and MONITORING
Logging
Export to: Pub/Sub, BigQuery, Cloud Storage