GCP_Caching
Memorystore
can be monitored
Redis
create cluster: up to 20 nodes
low latency and HA
it makes sense to create many nodes with smaller amounts of memory
fully managed by Google, based on the configuration
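A minimal sketch, assuming a Memorystore for Redis instance reachable at a private IP from a Compute Engine VM, of using it as a lookaside cache for database results (the IP, key names, and the fetch_user_from_db stub are illustrative assumptions, not from the diagram):

    import json
    import redis  # standard Redis client; Memorystore for Redis speaks the open Redis protocol

    # Assumed private IP of the Memorystore instance, reachable from the VM's VPC
    r = redis.Redis(host="10.0.0.3", port=6379)

    def fetch_user_from_db(user_id):
        # stand-in for a real database query
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        """Lookaside cache: try Redis first, fall back to the database on a miss."""
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        row = fetch_user_from_db(user_id)
        r.setex(key, 300, json.dumps(row))  # keep the result for 5 minutes
        return row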
Memcached
sessions
cache DB results
in-memory storage used as a cache, up to 300 GB
can be accessed from most compute services
HA mode is available: create a failover replica
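A minimal sketch, assuming a Memorystore for Memcached node reachable at a private IP, of storing session data with the pymemcache client (the IP, key format, and session payload are illustrative assumptions):

    from pymemcache.client.base import Client

    # Assumed endpoint of a node of the Memorystore for Memcached instance
    mc = Client(("10.0.0.4", 11211))

    def save_session(session_id, payload):
        # expire=1800 keeps the session in the cache for 30 minutes
        mc.set(f"session:{session_id}", payload, expire=1800)

    def load_session(session_id):
        return mc.get(f"session:{session_id}")  # returns None on a cache miss

    save_session("abc123", b"user=42;role=admin")
    print(load_session("abc123"))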
Cloud CDN
integrates with the HTTP(S) Load Balancer to deliver static content from edge servers
on the backend service we need to configure both the backend itself and the CDN
if the CDN has the result (cache hit), it serves it; if not, it forwards the request to the backend and inserts the response into the cache
a TTL can be configured
can cache: static content only, content based on origin headers, or all content
we can also use the CDN to distribute content from our own origin servers
decreases latency
best practices
set TTL
if caching dynamic content, be aware it needs a small TTL
cache static content
get the cache key right to avoid duplicate entries
we can use a content version in the URI to control the TTL (see the sketch after this list)
distribute content to edge servers around the world
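A minimal sketch of how a backend behind Cloud CDN can apply these practices: emit a Cache-Control header to set the TTL, and embed a content version in the URI so each release changes the cache key (Flask, the route layout, and the version string are illustrative assumptions, not Cloud CDN requirements):

    from flask import Flask, Response

    app = Flask(__name__)

    ASSET_VERSION = "v42"  # assumed: bumped on each deploy so old cached copies are bypassed

    @app.route("/static/<version>/app.css")
    def stylesheet(version):
        css = "body { font-family: sans-serif; }"
        # "public, max-age=3600" lets Cloud CDN cache the response at edge
        # locations for one hour when the origin-headers cache mode is used
        return Response(css, mimetype="text/css",
                        headers={"Cache-Control": "public, max-age=3600"})

    @app.route("/")
    def index():
        # reference the asset through a versioned URI; changing ASSET_VERSION
        # changes the URL (and thus the cache key), so clients get the new file
        return f'<link rel="stylesheet" href="/static/{ASSET_VERSION}/app.css">'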
App Engine Memcache service
shared tier is free; dedicated tier is paid
it is a legacy service