PPDM: PowerProtect Data Manager
Workflow:
FSA agent protection will be involved.
The BPM engine executes predefined actions with dynamic attributes.
Ex: one action creates a storage unit in DD, and another performs the protection and pushes the copy to the DD entry that was created.
Actions are orchestrated by the workflow engine.
Queue and retry are new additions to the workflow design.
How do we interact with the workflow?
It is configured to call REST APIs; each action is a REST API call.
They are all bootstrapped when the appliance is deployed.
The workflow engine is not exposed to external users currently.
How many requests does it support? - Up to 1000 workflows in 5 minutes.
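The two-action example above (create a DD storage unit, then protect and push the copy) combined with the queue-and-retry behavior can be sketched as follows. The endpoint paths and function names here are hypothetical stand-ins, not real PPDM APIs:

```python
import time

def run_action(invoke, max_retries=3, delay=0.01):
    """Invoke a workflow action, retrying on failure (queue-and-retry sketch).

    `invoke` is any callable standing in for a REST call to an action
    endpoint; real PPDM endpoints are not shown here.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return invoke()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(delay)  # back off before retrying the queued action

# Hypothetical two-action workflow: create a DD storage unit, then push a copy.
calls = []

def create_storage_unit():
    calls.append("POST /api/v2/storage-units")   # illustrative route only
    return {"id": "su-1"}

def protect_and_push(unit_id):
    calls.append(f"POST /api/v2/protection?target={unit_id}")
    return {"status": "ok"}

unit = run_action(create_storage_unit)
result = run_action(lambda: protect_and_push(unit["id"]))
```

The workflow engine orchestrates the two actions in order, passing the output of the first (the storage unit id) into the second.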
Types of Workflow:
Protection - configure a workflow to run once for asset configuration.
Scheduled to run for protection of assets.
Unconfigure the assets.
Compliance - validate assets for compliance with objectives on the discovered copies.
Restore - perform a restore operation on a copy selected by the user.
eCDM core components
System Manager
Startup and shutdown coordination.
Component runtime monitoring and restart.
DR
Leverages open-source components.
Auth Service
Provides authentication, authorization, RBAC, and tenant policies; stores all sensitive data in the CST lockbox.
eCDM Business Components
VMDM
VMware asset protection using vProxy and VRPA.
Also handles protection engine management.
ADM
Application asset protection for MSSQL and Oracle; also supports copy discovery.
DDBEA (Data Domain Boost for Enterprise Applications) agent management
Whitelists the agents that register with eCDM.
SDM
Storage protection for VMAX and XIO.
Creation of MTrees, user credentials, etc.
CBS (Common Business Services)
Services common to all three pillars above come under this.
Like inventory sources.
PLC (Protection Life Cycle): management of assets and copies.
eCDM: Enterprise Copy Data Management
It is designed to do 3 things:
- Automate the configuration of the RMAN agent.
- Create backups of the recovery catalog and monitor them to determine whether the retention policies are being followed.
- Manage the backup lifecycle, i.e., ensure that backups are marked as garbage per the retention policies.
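The retention step above can be sketched as a simple sweep that flags copies older than the retention window. The field names are illustrative, not eCDM's actual schema:

```python
from datetime import datetime, timedelta

def mark_expired(copies, retention_days, now):
    """Flag backup copies older than the retention window as garbage.

    Simplified sketch of retention enforcement: anything created before
    (now - retention_days) is marked for cleanup.
    """
    cutoff = now - timedelta(days=retention_days)
    for copy in copies:
        copy["garbage"] = copy["created"] < cutoff
    return copies

now = datetime(2020, 1, 31)
copies = [
    {"id": "c1", "created": datetime(2020, 1, 1)},   # outside 14-day window
    {"id": "c2", "created": datetime(2020, 1, 30)},  # inside the window
]
tagged = mark_expired(copies, retention_days=14, now=now)
```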
eCDM is a collection of components. Each component is standalone, ships as an installable package, and provides services.
Services provide operations and are interconnected via REST APIs and a message bus.
Zuul for service routing (the edge server).
Eureka for service discovery.
Nginx/HAProxy is used to redirect between the three UIs: Install UI, Management UI, and Upgrade UI (for upgrading the eCDM appliance).
ADM, VMDM, CBS, and CIS are the business services.
RabbitMQ is used for the message bus.
Core components are the bottom ones (in the architecture).
For persistence, Elasticsearch (NoSQL). For the workflow engine, PostgreSQL.
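How the services talk over the message bus can be sketched with an in-process stand-in for RabbitMQ; the topic name is made up for illustration:

```python
from collections import defaultdict

class Bus:
    """In-process stand-in for the RabbitMQ message bus that connects
    eCDM services (topic names here are illustrative)."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A service registers interest in a topic (cf. a RabbitMQ queue binding).
        self.handlers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for handler in self.handlers[topic]:
            handler(message)

bus = Bus()
received = []
bus.subscribe("plc.configured", received.append)   # e.g. VMDM listening
bus.publish("plc.configured", {"asset": "vm-42"})  # e.g. CBS notifying
```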
Service startup in eCDM:
Services are managed as /etc/systemd units.
The puppet_init service brings up the necessary services during startup; the infrastructure (RabbitMQ, etc.) is started first.
Once the infrastructure is up, the system manager (an eCDM component) detects the state of the appliance.
All other services are started by puppet_init.
The system manager keeps the status of the appliance:
pending: core services are available,
operational: all the services are started,
maintenance: the services are in upgrade mode.
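The three appliance states can be sketched as a small state machine. The transition table is an assumption based on these notes (initial config → operational, upgrade → maintenance and back), not a product specification:

```python
from enum import Enum

class ApplianceState(Enum):
    PENDING = "pending"          # core services available, initial config
    OPERATIONAL = "operational"  # all services started
    MAINTENANCE = "maintenance"  # services in upgrade mode

# Assumed legal transitions, inferred from the notes above.
ALLOWED = {
    ApplianceState.PENDING: {ApplianceState.OPERATIONAL},
    ApplianceState.OPERATIONAL: {ApplianceState.MAINTENANCE},
    ApplianceState.MAINTENANCE: {ApplianceState.OPERATIONAL},
}

def transition(current, target):
    """Move the appliance to a new state, rejecting illegal transitions."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = transition(ApplianceState.PENDING, ApplianceState.OPERATIONAL)
```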
Jenkins is used to build the OVA; eCDM is delivered as an OVA package.
The build process uses Packer (an open-source tool) to create a VM, install the RPMs, and convert it to an OVA using the OVF tool.
The build process includes acceptance tests.
Packages are versioned; snapshot builds are used during the dev cycle.
These microservices communicate through the API gateway; they run on specific ports and are not accessible from outside.
Component On-boarding
All the above components have to be packaged as RPMs (RPM Package Manager) and installed by the puppet services,
bundled during the Jenkins build process.
Components can run in Leader mode or Active mode.
Components have to first register with the system manager in order to start their services.
Component Startup
The components establish connections using RabbitMQ and the Eureka discovery service.
Sync log levels with the log manager.
Register resource handlers via the message bus.
Configure ZooKeeper with either Active or Leader mode.
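The four startup steps above can be sketched as an ordered sequence. Every name and call shape here is illustrative, not a real eCDM API:

```python
def start_component(name, bus, log_manager, mode="Active"):
    """Sketch of the component startup sequence from the notes:
    connect to the bus, sync log levels, register handlers, set mode."""
    steps = []
    steps.append(f"{name}: connect to message bus")        # RabbitMQ + Eureka
    level = log_manager.get("level", "INFO")
    steps.append(f"{name}: sync log level {level}")        # from log manager
    bus.setdefault(name, []).append("resource-handlers")   # via message bus
    steps.append(f"{name}: register resource handlers")
    steps.append(f"{name}: zookeeper mode {mode}")         # Active or Leader
    return steps

steps = start_component("VMDM", bus={}, log_manager={"level": "DEBUG"})
```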
Zuul service routing: /diag is for optimus.
When a login request is triggered, it goes to the Node server, which routes it to the API gateway.
Based on the route, the gateway redirects to the right component.
Routing is done through the API gateway.
Why the Node server? It is still a legacy system; it is just a pass-through.
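The gateway's route-based dispatch can be sketched as a prefix lookup. The route table is invented for illustration, except /diag, which the notes tie to optimus:

```python
# Illustrative route table; the actual Zuul routes in the product may differ.
ROUTES = {
    "/vmdm": "VMDM",
    "/adm": "ADM",
    "/cbs": "CBS",
    "/diag": "optimus",   # per the notes, /diag is for optimus
}

def route(path):
    """Pick the backend component by the longest matching route prefix."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    raise LookupError(f"no route for {path}")

backend = route("/diag/health")
```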
For Jazz: CIS is a temporary component that binds the messages through the message bus; it will be removed in Jazz.
The message bus will be used for notifications.
REST requests will go via the API gateway.
PLC:
The user has one place in the UI to configure an asset for protection or self-service, and to manage the copies of the data.
WHAT: assets like VM, MSSQL, Oracle, etc.
WHEN: the schedule, weekly/daily/hourly, etc.
HOW LONG: retention/expiry.
HOW: the protection engine.
WHERE: across multiple Jazz instances.
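The five questions a PLC answers map naturally onto a configuration record; the field names below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class PLC:
    """The five questions a PLC answers, as fields (names are illustrative)."""
    asset_type: str         # WHAT: VM, MSSQL, Oracle, ...
    schedule: str           # WHEN: hourly / daily / weekly
    retention_days: int     # HOW LONG: retention / expiry
    protection_engine: str  # HOW: e.g. vProxy
    targets: list           # WHERE: across multiple Jazz instances

plc = PLC("VM", "hourly", 30, "vProxy", ["jazz-1", "jazz-2"])
```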
Dynamic Protection Library (DPS):
It is a library added to all of our PLC components.
When you create a PLC for protection every 6 hours with the copy to be stored in DD: as soon as the PLC is configured, CBS does the validation and sends a notification through the message bus to VMDM, ADM, and SDM.
The associated component then schedules the workflow based on that information.
All of the above work is done by the DPS library.
Data Model:
The management interface or the host is the entry point.
The user can add a DDMC or vCenter; the system performs discovery on the management interface and populates the hosts.
Second-level discovery then finds all the VMs related to the data storage.
Discovery jobs persist the discovered storage under the DD.
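The two-level discovery flow can be sketched as follows; the topology and the probe callables are invented for illustration:

```python
def discover(source, fetch_hosts, fetch_children):
    """Two-level discovery sketch: first-level discovery on the management
    interface populates the hosts; second-level discovery finds the assets
    (e.g. VMs) under each host. The fetch_* callables stand in for real probes.
    """
    inventory = {}
    for host in fetch_hosts(source):            # first-level discovery
        inventory[host] = fetch_children(host)  # second-level discovery
    return inventory

# Tiny illustrative topology for a vCenter-like management interface.
topology = {"vcenter-1": ["esx-1", "esx-2"],
            "esx-1": ["vm-a"], "esx-2": ["vm-b", "vm-c"]}

inv = discover("vcenter-1",
               fetch_hosts=lambda s: topology[s],
               fetch_children=lambda h: topology.get(h, []))
```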