Infrastructure
Mail Server Protocols
POP3 (Port 110):
allows an email client to download email from the server and then delete it from the server.
IMAP (Port 143):
This protocol allows email clients to download emails from the server. Unlike POP3, it does not delete the email from the server.
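A short sketch of the difference between the two protocols using Python's standard-library poplib and imaplib; the host and credentials below are placeholders, not a real server.

```python
# Sketch only: HOST, USER and PASSWORD are placeholders.
import poplib
import imaplib

HOST, USER, PASSWORD = "mail.example.com", "user", "secret"

# POP3 (port 110): download a message, then delete it from the server.
pop = poplib.POP3(HOST, 110)
pop.user(USER)
pop.pass_(PASSWORD)
if len(pop.list()[1]) > 0:
    response, lines, octets = pop.retr(1)   # download message 1
    pop.dele(1)                             # mark it for deletion on the server
pop.quit()                                  # QUIT commits the deletion

# IMAP (port 143): fetch a message but leave it on the server.
imap = imaplib.IMAP4(HOST, 143)
imap.login(USER, PASSWORD)
imap.select("INBOX", readonly=True)         # read-only: nothing is deleted
status, data = imap.search(None, "ALL")
if data[0]:
    first_id = data[0].split()[0].decode()
    status, msg_data = imap.fetch(first_id, "(RFC822)")  # message stays on server
imap.logout()
```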
Reverse Proxy:
sits in front of the actual server (the one the requests are intended for). A reverse proxy accepts requests from clients and forwards them to the actual server. Typically, the reverse proxy resides in a DMZ.
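As a rough sketch (not a production setup), a reverse proxy reduces to "accept the request, forward it to the actual server, relay the response". The backend address below is an assumption; real reverse proxies such as Nginx or HAProxy handle headers, methods, pooling and much more.

```python
# Toy reverse proxy: listens on port 8080 and forwards GET requests to an
# assumed internal backend server. For illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://10.0.0.5:8000"   # the actual server inside the trusted network

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the incoming request path to the backend and relay the reply.
        with urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The proxy itself would typically run on a host in the DMZ.
    HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()
```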
DMZ (Demilitarized Zone) refers to the part of the network between the internet (an untrusted network) and the corporate network (a trusted network). The servers or resources in the DMZ are not as secure as those in the LAN (Local Area Network), but not as insecure as those on the internet.
The resources in the DMZ are exposed to the internet, since they have to interact with the web. However, they also need to interact with the resources in the LAN.
Generally, there is a firewall between the DMZ and the internet, and a different firewall with stricter access rules between the DMZ and the intranet.
The resources in the DMZ cannot initiate requests into the trusted network; they can only forward requests.
Why do we need a DMZ?
It ensures that we do not expose all our resources to the internet; otherwise the entire corporate network would be at risk.
We expose only a small part of our resources to the web by placing them in the DMZ. This reduces the overall attack surface.
Tunneling (Port Forwarding)
Tunneling is a process in which data is sent over the network using an otherwise unsupported (or blocked) protocol by encapsulating it inside another protocol.
The most common form of tunneling is the Virtual Private Network (VPN).
Tunneling can also be used to get across firewalls.
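A bare-bones sketch of port forwarding, assuming a placeholder remote host: every connection made to a local port is relayed to another machine and port. A real tunnel (a VPN or SSH tunnel) would additionally encrypt and encapsulate the relayed bytes; this only shows the forwarding part.

```python
# Relay every connection made to localhost:2222 to REMOTE_HOST:REMOTE_PORT.
import socket
import threading

LISTEN_PORT = 2222
REMOTE_HOST, REMOTE_PORT = "internal.example.com", 22   # placeholders

def pipe(src, dst):
    # Copy bytes from one socket to the other until the connection closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client):
    remote = socket.create_connection((REMOTE_HOST, REMOTE_PORT))
    threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
    threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", LISTEN_PORT))
listener.listen()
while True:
    conn, _ = listener.accept()
    handle(conn)
```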
Load Balancers
- Distributes traffic or incoming requests across multiple resources.
- Prevents a scenario wherein one single resource or node is overloaded with excess work while the other nodes sit idle.
- Assists in failover (it will not redirect traffic to failed nodes).
- Maintains a registry of all nodes and sends heartbeats to each node regularly.
- Dynamically updates the registry when a node is added or removed.
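A rough sketch of the registry/heartbeat idea: the load balancer keeps a set of healthy nodes and periodically probes each registered node. The node addresses and the "/health" endpoint are assumptions.

```python
import time
import urllib.request

nodes = ["http://10.0.0.11:8000", "http://10.0.0.12:8000"]   # registered nodes
healthy = set(nodes)

def heartbeat_loop(interval_seconds=5):
    while True:
        for node in nodes:
            try:
                with urllib.request.urlopen(node + "/health", timeout=2):
                    healthy.add(node)        # node answered: (re)mark it healthy
            except OSError:
                healthy.discard(node)        # failover: stop routing to this node
        time.sleep(interval_seconds)
```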
Routing Algorithms:
Round robin – In this mode, the nodes are selected one after another on a round-robin basis.
Priority – In this mode, some nodes are given preference over others.
Least connections – In this mode, the request is routed to the node with the fewest active connections.
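The three modes can be sketched in a few lines of Python; the node names, priorities, and connection counts below are purely illustrative.

```python
import itertools

nodes = ["node-a", "node-b", "node-c"]

# Round robin: pick nodes one after another, cycling through the list.
rr = itertools.cycle(nodes)
def pick_round_robin():
    return next(rr)

# Priority: always prefer the most-preferred node that is currently available.
priority = {"node-a": 1, "node-b": 2, "node-c": 3}   # lower number = preferred
def pick_priority(available):
    return min(available, key=lambda n: priority[n])

# Least connections: route to the node with the fewest active connections.
active_connections = {"node-a": 12, "node-b": 3, "node-c": 7}
def pick_least_connections():
    return min(active_connections, key=active_connections.get)
```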
Load Balancing types
Category 1
Hardware Load Balancer
Preconfigured appliance with its own processor, memory and OS. ex: APIC, F5
Category 2
Server-side Load Balancer
Routing happens on the server side; the client does not know that it is hitting a load balancer (it is transparent to the client).
ex: Consul
Disadvantage: could be a SPOF (single point of failure).
Client-side Load Balancer
Load balancing and routing happen on the client side. The client must be aware of all the nodes and their addresses.
ex: Eureka
Advantage:
- No SPOF, since all load balancing happens on the client side.
Disadvantages:
- The client needs to know about every node that is added or removed.
- The client has to be aware of the distribution of load and the overall load per node.
- Additional burden on the client to perform load balancing.
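A small sketch of the client-side approach, assuming placeholder node URLs: the client itself holds the node list (in practice refreshed from a registry such as Eureka), rotates through it, and skips nodes that fail.

```python
import itertools
import urllib.request

# The client must know every node; in practice this list comes from a registry.
nodes = ["http://10.0.0.11:8000", "http://10.0.0.12:8000", "http://10.0.0.13:8000"]
rotation = itertools.cycle(nodes)

def call_service(path):
    # Try each node at most once; skip nodes that are down (client-side failover).
    for _ in range(len(nodes)):
        node = next(rotation)
        try:
            with urllib.request.urlopen(node + path, timeout=2) as resp:
                return resp.read()
        except OSError:
            continue
    raise RuntimeError("no healthy node available")
```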
SSL Termination
is a process wherein a server consumes the encrypted traffic (HTTPS), decrypts it, and then forwards the unencrypted request (HTTP) to other servers in the network.
The other servers are assumed to be on a secure network and are not exposed to the internet.
Generally, a load balancer performs the task of SSL Termination.
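A minimal sketch of SSL termination, assuming placeholder certificate files and backend address: the front end accepts HTTPS from clients, decrypts it, and forwards plain HTTP to the internal server.

```python
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://10.0.0.5:8000"          # internal server reached over plain HTTP

class TerminatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with urlopen(BACKEND + self.path) as upstream:   # unencrypted hop
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("0.0.0.0", 8443), TerminatingHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("lb-cert.pem", "lb-key.pem")      # cert lives on the LB
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()                                    # clients connect via HTTPS
```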
SSL Pass Through
in contrast to termination, is a process wherein the encrypted traffic (HTTPS) is forwarded as is by the server to the other servers in the network.
The individual servers should be capable of decrypting the SSL request.
- With termination, the certificate is maintained in a centralized place (the LB).
- With pass-through, the traffic is more secure since it stays encrypted end to end.