Microservices Architecture
Problems
Monolithic
problems
Technology, deployment, cost
fixed platform
(e.g. Node.js, even when .NET is better at a task such as document handling)
upgrade
of components (requires whole-system upgrade)
inflexible deployment (whole app deployed), lots of testing (long development cycle)
BUG testing
(strong coupling)
inefficient compute resources
PROCESS needs more/less resources (but the same resources are allocated)
Large and Complex codebase
no component isolation
bugs not detected by testing
difficult to maintain
system becomes obsolete
SOA
Polyglot
platform independent
Problem
complicated/expensive ESB (Enterprise Service Bus)
smaller companies avoided it (vendors: Oracle, IBM)
ESB did everything (routing, authentication, aggregation)
difficult to manage (not lightweight)
no tooling to support SOA; no reduction in time to develop services (led to its demise)
microservices
9 characteristics of a good microservices architecture
1. componentisation via services
:check:
modular (only a small piece of code changes)
components responsible for a specific part
Libraries (implementation): import/using
called directly - fast
Web API, RPC (SERVICES)
PREFERRED METHOD FOR USING SERVICES
2. ORGANISED AROUND BIZ CAPABILITIES
:check:
UI, API, Biz logic, DB etc (each team)
single team handles all aspects
PRODUCTS not PROJECTS
FOCUS ON PRODUCTS NOT PROJECTS
TEAM responsible for development, also support
increase customer engagement
smart endpoints and dumb pipes
problem (SOA)
ESB
WS-* Protocol
WS-Discovery
WS-Security
difficult to track. Made inter-service communication complicated and difficult to maintain
use REST APIs (over HTTP)
direct connection is not a good idea (coupling between services) - use a gateway/discovery service
GraphQL, gRPC (NEW PROTOCOLS) - complex
DECENTRALISED GOVERNANCE
:check:
POLYGLOT
SERVICES DECIDE TECHNOLOGIES
DECENTRALISED DATA MANAGEMENT
:check: (when possible)
DB PER SERVICE
NOT ALWAYS POSSIBLE: DISTRIBUTED TRANSACTIONS, DATA DUPLICATION,
TWO-PHASE COMMIT
INFRASTRUCTURE AUTOMATION
:check:
TESTING AND DEPLOYMENT: SHORT DEPLOYMENT CYCLE
DESIGN FOR FAILURE
expect failures, logging, monitoring
catch and log exceptions... retry (service mesh)
monitor services (CPU, RAM etc.) + alerts
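A minimal sketch of the "expect failures" idea: catch, log, and retry. The helper and the flaky dependency below are illustrative only (not from any library); a service mesh would apply the same policy transparently outside the service's code.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

def call_with_retry(operation, attempts=3, delay=0.1):
    """Call `operation`, logging each failure with a stack trace and
    retrying before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            log.exception("attempt %d/%d failed", attempt, attempts)
            if attempt == attempts:
                raise  # retries exhausted: surface the error for monitoring
            time.sleep(delay)

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_retry(flaky))  # "ok", after two logged failures
```

The key point from the notes: failures are expected, so every failure is logged (for monitoring and alerting) rather than silently retried.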
evolutionary design
monolith to microservices
new APIs, monitoring, cloud services
PROBLEMS SOLVED BY MICROSERVICES
cannot use multiple development platforms
platform not best for the TASK
all .NET or Node.js
decentralised governance allows different development platforms
INFLEXIBLE DEPLOYMENT
DEPLOY WHOLE APP
TEST WHOLE, FIND BUGS (LONG DEVELOPMENT CYCLES)
SOLUTION: COMPONENTISATION OF SERVICES
SOLUTION : LOOSE COUPLING BETWEEN SERVICES
SOLUTION : + DECENTRALISED DATA MANAGEMENT
EACH SERVICE HAS ITS OWN DB
INEFFICIENT COMPUTE RESOURCES
CPU , RAM DIVIDED ACROSS ALL PROCESSES
CAN'T ALLOCATE MORE CPU, RAM TO ONE SERVICE THAT NEEDS IT
SOLUTION: COMPONENTISATION VIA SERVICES
EACH SERVICE RUNS AS A PROCESS
LARGE AND COMPLEX
MONOLITH: CODE BASE - LARGE AND COMPLEX (DEPENDENCIES, COUPLING)
EVERY CHANGE - AFFECTS OTHER COMPONENTS
SO LOTS OF TESTING
STILL BUGS ESCAPE
DIFFICULT TO MAINTAIN
SOLUTION: COMPONENTISATION VIA SERVICES
DECENTRALISED DATA MANAGEMENT
ORGANISED AROUND BUSINESS CAPABILITIES
COMPLICATED AND EXPENSIVE ESB
ESB MAIN COMPONENT WITH SOA
BLOATED, EXPENSIVE
SINGLE PIECE OF CODE DOES EVERYTHING
DIFFICULT TO MAINTAIN
SOLUTION: SMART ENDPOINTS AND DUMB PIPES
SERVICES SHOULD HANDLE ALL ASPECTS OF COMMUNICATION, AND COMMUNICATION ITSELF SHOULD BE DEAD SIMPLE (USE REST OVER HTTP FOR COMMUNICATION)
USE THE APPLICATION GATEWAY & SERVICE DISCOVERY PATTERNS (DON'T CONNECT DIRECTLY) + OTHER APIS (GraphQL, gRPC)
LACK OF TOOLING
FOR SOA TO BE EFFECTIVE (SHORT DEVELOPMENT CYCLES WERE NEEDED)
ALLOW FOR QUICK TESTING AND DEPLOYMENT
SO IT NEEDED TOOLING TO ALLOW FOR THIS
NO TOOLING EXISTED (NO TIME SAVING ACHIEVED)
SOLUTION: INFRASTRUCTURE AUTOMATION ATTRIBUTE
AUTOMATE TESTING, DEPLOYMENT
PROVIDES SHORT DEPLOYMENT CYCLE
MAKES ARCHITECTURE EFFICIENT AND EFFECTIVE
ch 6: DESIGNING MICROSERVICES ARCHITECTURE
MAPPING THE COMPONENTS
CRITICAL STEPS
ONCE SET CAN'T BE CHANGED
DEFINING SYSTEM COMPONENTS
COMPONENTS = SERVICES (AND NOT LIBRARIES)
DESIGNING
BUSINESS REQUIREMENTS
AROUND SPECIFIC BIZ CAPABILITY
BIZ CAPABILITY AS FRAME OF COMPONENT
REQUIREMENTS ARE ACTIONS COMPONENT CAN DO
FUNCTIONAL AUTONOMY
THAT DOES NOT REQUIRE OTHER BIZ CAPABILITIES
DATA ENTITIES
DATA CAN BE RELATED TO OTHER ENTITIES BY IDS (ORDER WITH CUSTOMER ID)
RELATIONSHIPS BETWEEN ENTITIES REMAIN
DATA AUTONOMY
UNDERLYING DATA - ATOMIC UNIT
EDGE CASES
LARGE DATA (REPORT ENGINE)
DATA DUPLICATION
SERVICE QUERY / AGGREGATION SERVICE (WHICH ONE TO USE WHEN?)
CROSS CUTTING SERVICES
PROVIDE SYSTEM WIDE UTILITIES
LOGGING, CACHING, USER MANAGEMENT ETC
MUST BE PART OF MAPPING
HOW TO DESIGN? :
DEFINING COMMUNICATION PATTERN (INTER SERVICE COMMUNICATION)
ELSE POOR ERROR HANDLING
1-1 ASYNC / FIRE AND FORGET
PASS A MESSAGE TO THE OTHER SERVICE; THE OTHER SERVICE HANDLES THE MESSAGE AND TAKES CARE OF ERRORS
PROS
PERFORMANCE,
CONS
SETUP COMPLICATED (THIRD PARTY TOOLS USED), ERROR HANDLING COMPLICATED (WHERE DID IT HAPPEN)
RabbitMQ (FREE, EASY TO SET UP)
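The fire-and-forget shape can be sketched in one file. Here a stdlib queue plus a consumer thread stand in for the broker (in production RabbitMQ would hold the queue between processes); the point is that the producer returns immediately and the consuming service owns both the handling and the errors.

```python
import queue
import threading

# Stand-in for the broker (RabbitMQ in production): FIFO queue of messages.
messages = queue.Queue()
processed = []

def consumer():
    # The receiving service handles the message AND takes care of errors.
    while True:
        msg = messages.get()
        if msg is None:  # sentinel for shutdown
            break
        try:
            processed.append(msg.upper())
        except Exception as exc:
            print("consumer error:", exc)  # the sender never sees this
        finally:
            messages.task_done()

threading.Thread(target=consumer, daemon=True).start()

# Producer: enqueue and move on - no waiting for a response.
messages.put("order-created")
messages.put("order-shipped")
messages.join()  # demo only: wait until the queue is drained
print(processed)  # ['ORDER-CREATED', 'ORDER-SHIPPED']
```

This also shows the stated con: when the consumer fails, the error surfaces far from the producer, which is why error handling is complicated in this pattern.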
1-1 SYNC
WAITS FOR RESPONSE
EG ORDER AND INVENTORY SERVICE (ORDER NEEDS INVENTORY)
PROS
IMMEDIATE RESPONSE, ERROR HANDLING, EASY TO IMPLEMENT
CONS
PERFORMANCE/BLOCKING
TWO WAYS
SERVICE DISCOVERY (CONSUL)
GATEWAY
PROVIDES MONITORING, AUTHORISATION, & MORE
PUB-SUB/EVENT DRIVEN
SERVICE DOESN'T KNOW HOW MANY WILL LISTEN
FIRE AND FORGET (SAME AS ASYNC)
PROS
PERFORMANCE, MULTIPLE SERVICES NOTIFIED
CONS
ERROR HANDLING, NEEDS MORE SERVICES, MIGHT CAUSE LOAD ON SYSTEM (EG HIGH CPU, RAM, NETWORK USAGE)
RabbitMQ / AZURE EVENT GRID
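The defining property of pub-sub - the publisher does not know how many services will listen - can be shown with a tiny in-process dispatcher (a stand-in for a broker such as RabbitMQ or Azure Event Grid; the topic and handler names are illustrative):

```python
from collections import defaultdict

# topic -> list of subscriber handlers
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    # Zero, one, or many listeners - the publisher neither knows nor cares.
    for handler in subscribers[topic]:
        handler(event)

received = []
subscribe("order.created", lambda e: received.append(("email", e)))
subscribe("order.created", lambda e: received.append(("inventory", e)))

publish("order.created", {"id": 42})
print(received)  # both subscribed services got the same event
```

Adding a third subscriber requires no change to the publisher, which is the flexibility (and the hidden-load risk) the notes describe.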
designing architecture
layers
CH 7: DEPLOYING MICROSERVICES (39 TO 44)
DEPLOYING MICROSERVICES
ELSE WE WILL HAVE
SLOW AND COMPLICATED DEPLOYMENT : SYSTEM RENDERED INEFFECTIVE
ARCHITECT NEEDED TO BE AWARE
HAVE IT, NOT MAINTAIN IT
40 CI/CD
CD: staging test, production movement
integration : build, unit test, integration tests
on cloud / on premises (Jenkins)
41 (deployments) - CONTAINERS
TRADITIONAL DEPLOYMENT
WORKS ON MY MACHINE
SOFTWARE, DEPENDENCIES, CONFIGURATION FILES COPIED (AS ATOMIC UNIT)
CONTAINER VS VM
ADVANTAGES
PREDICTABILITY - SAME PACKAGE FROM DEV-TEST-PRODUCTION
imp for developers
PERFORMANCE: STARTS UP IN SECONDS VS MINUTES FOR A VM
DENSITY: THOUSANDS OF CONTAINERS VS DOZENS OF VM
why not: ISOLATION is weaker than a VM's
42: CONTAINER IMPLEMENTATION (DOCKER)
REGISTRY (HOLDS IMAGES), CONTAINERS (RUNNING IMAGES), CLI (CONFIGURES THE DOCKER DAEMON), DOCKER ENGINE/DAEMON (PROVIDES LOGGING AND OTHER CONTAINERISATION FUNCTIONALITY)
DOCKERFILE
HAS INSTRUCTIONS TO BUILD CUSTOM IMAGE
COPY, CHANGE WORKING DIRECTORY, INSTALL, COPY SOME MORE FILES
USUALLY SMALL
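The instruction sequence described above (copy, change working directory, install, copy more files) looks like this in practice - a hypothetical Dockerfile for a small Node.js service, with illustrative file names:

```dockerfile
# Base image pulled from the registry
FROM node:20-alpine
# Change working directory
WORKDIR /app
# Copy the dependency manifest first (better layer caching)
COPY package*.json ./
# Install dependencies
RUN npm ci --omit=dev
# Copy the rest of the source files
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

As the notes say, such files are usually small: a handful of instructions describing the whole deployable unit.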
SUPPORT
OS
CLOUD
ECR (ELASTIC CONTAINER REGISTRY): AMAZON
AZURE ACR: AZURE CONTAINER REGISTRY
43:CONTAINER MANAGEMENT
DEPLOYMENT
SCALABILITY (ADD MORE CONTAINERS TO DISTRIBUTE LOAD)
MONITORING (IF SOMETHING GOES DOWN)
ROUTING (ROUTE REQUESTS TO THE RIGHT CONTAINER)
HIGH AVAILABILITY
tooling came into the picture - K8s (Kubernetes)
routing
SCALING
HA
AUTOMATED DEPLOYMENT
CONFIGURATION MANAGEMENT
IMP CONCEPTS
PODS = WRAPPER AROUND CONTAINERS
SERVICE (EXPOSES A STABLE IP FOR A SET OF PODS): HA, MONITORING ETC
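A minimal sketch of those two concepts in Kubernetes manifests (all names and the image URL are illustrative): a Deployment keeps three pod replicas running, and a Service exposes them behind one stable address.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                 # scaling: run three pods of the same container
  selector:
    matchLabels: { app: orders }
  template:
    metadata:
      labels: { app: orders }
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector: { app: orders }   # routes to any healthy 'orders' pod
  ports:
    - port: 80
      targetPort: 3000
```

Failed pods are replaced automatically (HA), and callers only ever see the Service address, never individual containers.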
sec 8 : TESTING MICROSERVICES (45 TO 50)
45:INTRO
ADDITIONAL CHALLENGES
UNIT, INTEGRATION, END TO END TESTS
46: CHALLENGES WITH MICROSERVICES TESTING
LOT OF MOVING PARTS
TESTING STATE ACROSS SERVICES
DEPENDENT SERVICES MAY BE NON-FUNCTIONAL
47: UNIT TESTS
INDIVIDUAL CODE, IN-PROCESS ONLY
AUTOMATED: NUnit, JUnit (.NET and Java)
same frameworks and methodologies as MONOLITHS
48 INTEGRATION TESTS
TEST SERVICES FUNCTIONALITY
COVER ALL CODE PATHS IN SERVICE (CODE COVERAGE) OR COVER CRITICAL PATHS
NEEDS ACCESS TO DB/OTHER SERVICES (WE SHOULD CROSS BOUNDARY)
TEST DOUBLE
FAKE
IN-PROCESS STORE
NEEDS CHANGE IN CODE
STUB
HOLDS HARD-CODED DATA
REPLACES DATA IN DB
QUICK SIMULATION
NO CODE CHANGE REQUIRED IN TESTING
MOCK
VERIFIES ACCESS WAS MADE. NO DATA HELD, NO CODE CHANGE
QA team (not by developer), uses SERVICE API, should be automated
unit testing frameworks support integration testing
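The stub and mock doubles above can be shown with `unittest.mock`. The `place_order`/`in_stock` names are hypothetical, invented for the example:

```python
from unittest.mock import Mock

# Hypothetical code under test: it depends on an inventory service.
def place_order(inventory, order_id):
    if inventory.in_stock(order_id):  # the boundary we replace in tests
        return "accepted"
    return "rejected"

# STUB: returns hard-coded data instead of hitting the real inventory DB,
# giving a quick simulation with no change to the code under test.
stub = Mock()
stub.in_stock.return_value = True
assert place_order(stub, 42) == "accepted"

# MOCK: additionally verifies that the access was made - no data involved.
stub.in_stock.assert_called_once_with(42)
```

A fake would go one step further and hold real behaviour in-process (e.g. an in-memory store), which is why it is the only double that needs code changes.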
49: END TO END TESTS
TOUCH ALL SERVICES IN SYSTEM
TEST FOR END STATE
challenges
EXTREMELY FRAGILE (MAINTAINING TEST ENVIRONMENT)
REQUIRES CODE
USED FOR MAIN SCENARIOS ONLY (AND NOT EXTREME ONES)
50 summary
ARCHITECT - ENSURE
TEST AUTOMATION FRAMEWORK EXISTS
TEST RESULT ANALYSIS (ANY ARCHITECTURAL CHANGE)
SEC 9: SERVICE MESH (51 TO 56)
SERVICE TO SERVICE COMMUNICATION
PROVIDES ADDITIONAL SERVICES
PLATFORM AGNOSTIC
PROBLEM SOLVED
TIMEOUTS, SECURITY (DATA PASSED IS SECURE BETWEEN SERVICES), RETRIES, MONITORING
SERVICE MESH: SOFTWARE COMPONENTS THAT SIT NEAR THE SERVICE AND MANAGE ALL SERVICE TO SERVICE COMMUNICATION
PROVIDES ALL COMMUNICATION SERVICES
service mesh services
PROTOCOL CONVERSION
COMMUNICATION SECURITY (ENCRYPTION MECHANISM)
AUTHENTICATION
RELIABILITY (TIMEOUTS, RETRIES, HEALTH CHECKS, CIRCUIT BREAKING)
PREVENT CASCADING FAILURES IF SERVICE FAILS
MONITORING
SERVICE DISCOVERY
TESTING (A/B, TRAFFIC SPLITTING)
LOAD BALANCING
ETC
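The circuit-breaking item above is worth unpacking: after repeated failures the breaker "opens" and callers fail fast instead of piling requests onto a dead dependency, which is how cascading failures are prevented. A toy sketch (not any mesh's real implementation - a mesh does this outside the service's code):

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors; while open,
    fail fast until `reset_after` seconds pass, then allow one retry."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow a single probe
            self.failures = 0
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60)
def down():
    raise ConnectionError("upstream dead")

for _ in range(2):  # two real failures trip the breaker
    try:
        breaker.call(down)
    except ConnectionError:
        pass
try:
    breaker.call(down)  # now fails fast: no network call is attempted
except RuntimeError as exc:
    print(exc)
```

The fail-fast error returns immediately, so the failing service gets breathing room and its callers do not hang on timeouts.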
53: SERVICE MESH ARCHITECTURE
DATA PLANE (CONVERSION ETC.) AND CONTROL PLANE (CONTROLS THE DATA PLANE) - INITIALLY PRODUCTS CAME WITH PROPRIETARY CONTROL PLANES
TWO TYPES OF SERVICE MESH
IN PROCESS (MESH PART OF SERVICE)
SIDE CAR (OUT OF PROCESS COMPONENT)
IN-PROCESS (PERFORMANCE)
SIDE CAR
(PLATFORM AGNOSTIC, CODE AGNOSTIC)
SIDE CAR PREFERRED
PRODUCTS AND IMPLEMENTATIONS
SIDECAR (ISTIO, LINKERD, MAESH)
ISTIO MANAGED BY CONTAINERS
DDS
MILITARY INDUSTRY
NOT FREE
TO USE/NOT TO USE?
LOTS OF SERVICES COMMUNICATING A LOT
COMPLEX COMMUNICATION REQUIREMENTS WITH VARIOUS PROTOCOLS OR BRITTLE NETWORK
SEC 10: LOGGING AND MONITORING (57-60)
CRITICAL - BECAUSE FLOW GOES THROUGH MULTIPLE PROCESSESS
DIFFICULT TO GET A HOLISTIC VIEW
58: LOGGING VS MONITORING
LOGGING
USED FOR AUDITING
LOGGING: RECORDING SYSTEM'S ACTIVITY
DOCUMENTING ERRORS
TOOLS
SPLUNK, ELK STACK
CAN ACCESS LOT OF DATA SOURCES (ELK PREFERRED)
MONITORING
LOOK AT METRICS, DETECT ANOMALIES: INFRASTRUCTURE SIDE: CPU, RAM, disk
DATA source
: AGENT ON THE MACHINE
APPLICATION SPECIFIC: requests per minute, orders per day
DATA SOURCE:
APPLICATION LOG/ EVENT LOG
ALERTS : WHEN OUTSIDE RANGE
TOOL
KIBANA
MOST MONITORING TOOLS PROVIDE BOTH
DON'T DEVELOP
CLOUD: NEW RELIC, AZURE APPLICATION INSIGHTS
ON PREMISE: NAGIOS, ELK STACK,
AS ARCHITECT: HAVE IT, ENSURE IT CAN CONNECT TO THE SYSTEM'S LOGS. LET THE IT TEAM CONFIGURE IT
INTENTION - RELIABLE AND STABLE SYSTEM
LOGGING
TRACE END TO END FLOW (INCLUDE AS MUCH INFO AS POSSIBLE): PORT, IP, MACHINE ETC.
CAN BE FILTERED
TRADITIONAL - SINGLE LOG FILE OR RESPECTIVE LOG FILES
SEPARATE LOGS (END TO END FLOW DIFFICULT)
DIFFERENT LOG FORMATS (JSON, REGULAR TEXT) - GOING THROUGH LOGS SLOW, CONFUSING
LOGS ARE NOT AGGREGATED (EG ERRORS IN LAST DAY, DB ACCESSES) = NEED DATA IN SINGLE PLACE
LOGS TO BE IN CENTRALISED PLACE
MICROSERVICES
SINGLE LOGGING SERVICE (UNIFIED, AGGREGATED, CAN BE ANALYSED)
IMPLEMENTATION
LOGGING LIBRARY
use one for all services / one per platform (Winston for Node.js, Serilog for .NET)
use severity wisely
do not overuse info messages (don't put everything in the log)
timestamp, user, severity, service, message (or event we want to log), full stack trace, correlation ID (unique value in system)
don't log errors as info (which is invisible to monitoring tools)
IMPLEMENTATION
CORRELATION ID (UNIQUE ID)
TRANSPORT
USING QUEUE
logging service
receives and stores
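The field list above (timestamp, user, severity, service, message, correlation ID) can be sketched as a JSON log formatter. The service name, user, and formatter class are illustrative; the one essential idea is that the same correlation ID is minted at the edge and attached to every log line a request produces, in every service:

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, with the fields the notes list."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "severity": record.levelname,
            "service": record.name,
            "user": getattr(record, "user", None),
            "correlation_id": getattr(record, "correlation_id", None),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders-service")
log.addHandler(handler)
log.setLevel(logging.INFO)

# One correlation ID per request, passed on to every downstream call.
corr_id = str(uuid.uuid4())
log.info("order accepted", extra={"user": "alice", "correlation_id": corr_id})
log.error("payment failed", extra={"user": "alice", "correlation_id": corr_id})
```

Because every line is structured JSON with the same correlation ID, a central logging service can reassemble the end-to-end flow by filtering on that one value.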
SEC 11: WHEN NOT TO USE MICROSERVICES
SMALL SYSTEMS WITH LOW COMPLEXITY (ELSE MICROSERVICES ADD COMPLEXITY) - a system of 2-3 services is not worth splitting further
REAL TIME (MILITARY, GAMING) (N/W HOPS) (SLA - milli second or micro seconds )
IF SERVICES/DATA CAN'T BE SEGREGATED
QUICK AND DIRTY SYSTEMS: POC (demonstrate an idea)
have short life span
NO PLANNED FUTURE UPDATES
EMBEDDED SYSTEMS
MICROSERVICES' STRENGTH: SHORT UPDATE CYCLES (NO UPDATES, NO MICROSERVICES)
SEC 12: MICROSERVICES AND THE ORGANISATION
CONWAY'S LAW
MELVIN CONWAY (1967)
Any organisation that designs a system will produce a design whose structure is a copy of the organisation's communication structure
COROLLARY: DESIGN TEAMS THAT REFLECT SERVICES
TRADITIONAL TEAMS:
IT SERVICES (VM, DEPLOYMENTS, SOURCE CONTROL ETC),
BACKEND DEVELOPERS: JAVA/.NET/NODE
FRONTEND/UI: REACT, ANGULAR, VUE
DBAS:SETTING UP DBs
EACH TEAM LOOKS AT PROJECT / SOME TASK THEY DID. NO ONE FOCUSSES ON PRODUCT
ATTRIBUTE#3 : PRODUCTS NOT PROJECTS
IDEAL TEAM: RESPONSIBLE FOR ALL ASPECTS (3-7)
CHANGING MINDSET:
TRAINING ON MICROSERVICES, BASIC PRINCIPLES, POC (SMALL), WORK CLOSELY WITH DESIGN AND DEVELOPMENT
SEC 13: ANTI PATTERNS
NO WELL-DEFINED SERVICES (MAP COMPONENTS, SELECT TECHNOLOGY STACK, DESIGN ARCHITECTURE)
NEGLECTED SERVICES => BLOATED SERVICES
NO WELL DEFINED API
API
VERSIONED
CONSISTENT
PLATFORM AGNOSTIC (SERIALISABLE)
API DESIGN IS PART OF THE DESIGN PHASE
3: IMPLEMENTING CROSS-CUTTING (SYSTEM-WIDE) SERVICES LAST
EXAMPLES
USER MANAGEMENT
AUTHORISATION AND AUTHENTICATION
CACHING
LOGGING
IMPLEMENT FIRST (OTHERS WILL USE THEM)
4: EXPANDING SERVICE BOUNDARIES
BLOATING; TEMPTING TO ADD FEATURES OUTSIDE THE BOUNDARY => DESIGN A NEW SERVICE INSTEAD
improve current system
MUST BE THOROUGHLY PLANNED; HIGH RATE OF FAILURES
GET MOTIVATION (FOR BREAKING MONOLITHS)
SHORTEN UPDATE CYCLE
MODULARISE THE SYSTEM (COMPONENTISATION)
SAVE COST
OUTDATED TECHNOLOGY (NOW OPENSOURCE)
SMALLER EFFICIENT TEAMS
MODERNISE SYSTEM
REMOVE OUT OF SUPPORT SYSTEMS
BEING ATTRACTIVE
THREE STRATEGIES
NEW MODULES AS SERVICES
EASY TO IMPLEMENT, MIN CODE CHANGES || TAKES TIME, END RESULT NOT PURE MICROSERVICES ARCHITECTURE
SEPARATE EXISTING MODULES TO SERVICES
END WITH PURE MICROSERVICES SYSTEM || TAKES TIME (ONE COMPONENT AT A TIME), A LOT OF CODE CHANGE REQUIRED (LEGACY TO SERVICE) - CODE REVIEW, REGRESSION TESTING REQUIRED
COMPLETE REWRITE
SYSTEM OLD, TOO MANY DEPENDENCIES
PURE MICROSERVICES ARCHITECTURE, OPPORTUNITY FOR MODERNISATION || TAKES TIME (DESIGN AND DEVELOPMENT), RIGOROUS TESTING
MORE COMPLEX => MORE REWRITE
NEED CODE REVIEW
RabbitMQ