Cross-Region Backup (all supported services except Redshift and FSx for ONTAP)
Cross-Account Backup (all supported services except Redshift and FSx for ONTAP)
Lifecycle Cold Storage (S3, EFS, DynamoDB and Timestream)
S3 EC2 EBS EFS FSx for Windows / Lustre / NetApp ONTAP / OpenZFS Aurora RDS Redshift DynamoDB Timestream Neptune VMware on AWS Storage Gateway AWS CloudFormation stacks on-premises VMware
Create a Backup Plan (how, when, retention) based on Backup Rules:
- Backup Schedule (frequency, RPO and backup window). For some services you can enable continuous backups for point-in-time recovery (PITR): you can restore your AWS Backup-supported resource by rewinding it to a specific time that you choose, with 1-second precision, going back a maximum of 35 days. Available for RDS, S3, and SAP HANA on Amazon EC2 resources
- Lifecycle Rule (move to cold storage, retention, deletion). If you transition to cold storage, retention must be at least 90 days after the transition
- Backup Vault (where backups are stored; a default vault is created for you). You can optionally create a copy into a different Region
- Tags (added to each backup, e.g. Monthly Backup, Daily Backup, or whatever makes sense to you)
Assign Resources to the Backup Plan:
- Specify an IAM Role to be assumed by AWS Backup: Default vs Choose IAM Role
- Resource selection: All Resources vs Selected Resources (resource types, include/exclude IDs, refine using tags)
AWS Backup does the rest (creates and retains backups based on the plan). Once the backup job is completed, the Recovery Point (the backup artefact) appears in the specified Backup Vault
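The cold-storage rule above (retention at least 90 days beyond the transition) can be sketched as a local check. This is a minimal illustration: the field names mirror AWS Backup's Lifecycle object (MoveToColdStorageAfterDays, DeleteAfterDays), but no API call is made and the function name is made up here.

```python
def validate_lifecycle(move_to_cold_after_days, delete_after_days):
    """Retention must be at least 90 days after the cold-storage transition."""
    if move_to_cold_after_days is None:
        return True  # no cold-storage transition: any retention is acceptable
    return delete_after_days >= move_to_cold_after_days + 90

# A rule that transitions after 30 days must retain for at least 120 days total
print(validate_lifecycle(30, 120))  # True
print(validate_lifecycle(30, 100))  # False
```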
Select Resource type and Resource IDs
Backup window: Create backup now (starts within 1 hour) or Customize backup window (start-within and complete-within times)
Transition to cold storage, Retention, Backup vault, IAM Role, Tags
Max I/O
- not supported for EFS One Zone or Elastic throughput mode
- scales to higher levels of aggregate throughput and operations per second, with a tradeoff of slightly higher latency for file operations
- for highly parallelized workloads that can tolerate higher latencies
- Use cases: big data, media processing
General Purpose (recommended)
- up to 35,000 IOPS and the lowest per-operation latency
- EFS One Zone always uses General Purpose
- Use cases: CMS, web servers
- Throughput scales with the amount of storage in your file system and supports bursting to higher levels for up to 12 hours per day (based on burst credits)
- Minimum: every file system can burst to at least 100 MiB/s (1 MiB = 1.048576 MB) of metered throughput
- File systems with more than 1 TiB of Standard storage can burst to 100 MiB/s per TiB (example: a 10 TiB file system can burst to 1,000 MiB/s of metered throughput)
- Burst duration: the larger the file system, the greater the bursting throughput and the longer the duration
- Metered throughput is a blend of read and write requests, with reads metered at a 1:3 ratio of writes: you get up to 3 MiB/s of read throughput and 1 MiB/s of write throughput for every 1 MiB/s of throughput provisioned
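The bursting figures above (100 MiB/s minimum, 100 MiB/s per TiB above 1 TiB) reduce to a one-line formula. A minimal sketch, using only the numbers from these notes; the function name is illustrative.

```python
def efs_burst_throughput_mibs(storage_tib):
    """Bursting mode: every file system can burst to at least 100 MiB/s;
    above 1 TiB of Standard storage it can burst to 100 MiB/s per TiB."""
    return max(100, 100 * storage_tib)

print(efs_burst_throughput_mibs(0.1))  # 100 (minimum burst level)
print(efs_burst_throughput_mibs(10))   # 1000, matching the 10 TiB example
```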
Throughput performance automatically scales up or down to meet the needs of your workload activity. For spiky or unpredictable workloads, performance requirements that are difficult to forecast, or when your app's average-to-peak throughput ratio is 5% or less
- Specify a level of throughput that the file system can drive, independent of its size or burst credit balance
- Charged for the throughput provisioned over and above what you get based on the data you have stored
- Start with the default Bursting option and switch to Provisioned if you need it
- You can always go back to Bursting throughput, as long as more than 24 hours have passed since the last throughput mode change
- Increase anytime, but decrease throughput only 24 hours after the most recent decrease
- Use Provisioned Throughput if: you know your workload's performance requirements; your app needs throughput at an average-to-peak ratio of 5% or more
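The 5% average-to-peak rule of thumb above is easy to express as a helper. A sketch under the assumption that you can measure average and peak throughput for your workload; the function name is made up.

```python
def should_use_provisioned(avg_throughput_mibs, peak_throughput_mibs):
    """Provision throughput when the average-to-peak ratio is 5% or more;
    below that, a spiky workload fits Bursting/Elastic better."""
    ratio = avg_throughput_mibs / peak_throughput_mibs
    return ratio >= 0.05

print(should_use_provisioned(10, 100))  # True  (10% average-to-peak)
print(should_use_provisioned(1, 100))   # False (1%: spiky, use Elastic)
```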
File system performance is measured by using: Latency Throughput IOPS
Storage class – EFS One Zone or EFS Standard Performance mode – General Purpose or Max I/O Throughput mode – Elastic, Provisioned, or Bursting
highly scalable storage using NFS CMS web servers (single folder structure for your website) big data media processing
None On first access
Replicate your file system to one additional AWS Region or the same AWS Region Enable this after the File System is created (no need to re-create) Provides RPO and RTO of minutes
Application-specific entry points into an EFS file system Make it easier to manage application access to shared datasets Can enforce a user identity, including the user's POSIX groups, for all file system requests that are made through the access point Can also enforce a different root directory for the file system so that clients can only access data in the specified directory or its subdirectories You need to create at least one mount target on your EFS file system to use access points
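The two enforcement features above (a fixed POSIX identity and a restricted root directory) map onto two parameters of access point creation. A sketch assuming the boto3 `create_access_point` parameter shape (PosixUser, RootDirectory); it is built as a plain dict here, so no AWS call is made, and the helper name is illustrative.

```python
def access_point_request(file_system_id, uid, gid, root_path):
    """Build an EFS access point request enforcing a POSIX identity
    and a root directory (shape assumed from the boto3 API)."""
    return {
        "FileSystemId": file_system_id,
        # Every request through the access point acts as this user/group
        "PosixUser": {"Uid": uid, "Gid": gid},
        # Clients can only access this directory and its subdirectories
        "RootDirectory": {
            "Path": root_path,
            "CreationInfo": {"OwnerUid": uid, "OwnerGid": gid,
                             "Permissions": "750"},
        },
    }

req = access_point_request("fs-12345678", 1001, 1001, "/app/data")
print(req["RootDirectory"]["Path"])  # /app/data
```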
Measures the amount of data read or written per second (MB/s) Important for large datasets, large I/O sizes, complex queries Choose Throughput Optimized HDD (st1)
Measures the number of read and write operations per second Important for transactional and low-latency apps Choose Provisioned IOPS SSD (io1 or io2)
Create a file system Run a database Run an OS Store data Install apps
AES-256 algorithm, using an AWS managed key or a KMS key (CMK); symmetric keys only
Like the OS suspend-to-disk, hibernation saves the contents of RAM to the EBS root volume The EBS root volume and any attached EBS data volumes are persisted
EBS root volume is restored to its previous state RAM content is reloaded Processes are resumed Previously attached EBS data volumes are reattached and the instance retains its instance ID
Long-running processes Services that take time to initialise
EC2 Hibernation preserves the in-memory RAM on persistent storage (EBS) Much faster to boot up because you do not need to reload the OS Available for On-Demand and Reserved Instances
- It is not possible to enable or disable hibernation for an instance after it has been launched
- Instance RAM must be less than 150 GB
- Instances can't be hibernated for more than 60 days
- Available for Windows, Amazon Linux 2, Ubuntu
- Instance families: C3, C4, C5, M3, M4, M5, R3, R4, R5
- The root volume must be encrypted
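The prerequisites above can be collected into a simple eligibility check. This is illustrative only: the real service enforces more conditions (e.g. supported AMIs), and the function name is made up for this sketch.

```python
def hibernation_eligible(ram_gib, root_volume_encrypted, os_name, family):
    """Check the hibernation prerequisites listed in these notes."""
    supported_os = {"Windows", "Amazon Linux 2", "Ubuntu"}
    supported_families = {"C3", "C4", "C5", "M3", "M4", "M5",
                          "R3", "R4", "R5"}
    return (ram_gib < 150
            and root_volume_encrypted
            and os_name in supported_os
            and family in supported_families)

print(hibernation_eligible(64, True, "Amazon Linux 2", "M5"))   # True
print(hibernation_eligible(200, True, "Amazon Linux 2", "M5"))  # False (RAM >= 150 GB)
```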
Migrate Windows-based apps requiring centralised file storage: SharePoint, MS SQL Server, WorkSpaces, IIS, any other native MS applications
tens of GB/s, millions of IOPS up to 64 TB per file system automatically encrypted at rest and in transit single-AZ and multi-AZ (active/standby) deployment options data is backed up daily to S3
Supports auditing end-user access to your files, folders, and file shares using Windows event logs Logs are published to Amazon CloudWatch Logs or streamed to Kinesis Data Firehose
HDD storage is designed for a broad spectrum of workloads SSD storage is designed for the highest-performance and most latency-sensitive workloads (e.g. DB)
Can store data directly on S3 and seamlessly integrates with S3 Can “read S3” as a file system (through FSx) Can write the output of the computations back to S3 (through FSx)
HPC ML Media Data Processing Electronic Design Automation Financial Modeling
Scratch File System
- Temporary storage, SSD only
- Data is not replicated (doesn't persist if a file server fails)
- Usage: short-term processing, cost optimization
Persistent File System
- Long-term storage, SSD or HDD with SSD cache
- Data is replicated within the same AZ; failed files are replaced within minutes
- Usage: long-term processing, sensitive data
A data processing job on Lustre with S3 as an input data source can start without downloading the full dataset first Only the data that is actually processed is loaded (reducing cost and latency) Data is loaded once, reducing requests to S3
Compatible with NFS, SMB, iSCSI protocols Single- and multi-AZ Use case: move workloads running on ONTAP or NAS to AWS Supports: Linux, Windows, macOS, VMware Cloud on AWS, WorkSpaces & AppStream, EC2, ECS, EKS Tech: storage shrinks or grows automatically; snapshots, replication, low cost, compression and data de-duplication; point-in-time instantaneous cloning (helpful for testing new workloads)
Compatible with NFS v3, v4, v4.1, v4.2 Single- and multi-AZ Use case: move workloads running on ZFS to AWS Supports: Linux, Windows, macOS, VMware Cloud on AWS, WorkSpaces & AppStream, EC2, ECS, EKS Tech: snapshots, low cost, compression; point-in-time instantaneous cloning (helpful for testing new workloads)
Migration from Single-AZ to Multi-AZ:
- Create a new multi-AZ FSx file system
- Migrate with: DataSync (slow, but no downtime) or backup/restore (faster, but some downtime)
Decrease FSx Volume Size:
- You can only increase capacity
- If you take a backup, you can only restore to the same size
- Create a new, smaller FSx file system and migrate the data with DataSync
Create an AMI from an instance: Amazon EC2 powers down the instance before creating the AMI, to ensure that everything on the instance is stopped and in a consistent state during the creation process. You can tell Amazon EC2 not to power down and reboot the instance, as some file systems (such as XFS) can freeze and unfreeze activity, making it safe to create the image without rebooting. This is the No reboot parameter (true/false)
Share an AMI with specific AWS accounts without making the AMI public
- Sharing an AMI makes it available in that Region; to share the AMI in a different Region, copy it there and then share it
- To share an encrypted AMI, it must be encrypted with a KMS key, and you must allow the target AWS accounts to use that key
- Users can only launch instances from the AMI; they can't delete, share, or modify it
- EC2 Console > AMIs > Actions > Edit AMI permissions > Private / Shared accounts / Add account ID
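Behind the console's "Edit AMI permissions" action is a launch-permission change. A sketch assuming the EC2 ModifyImageAttribute LaunchPermission shape; built here as a plain dict so no API call is made, with a made-up helper name and placeholder IDs.

```python
def share_ami_params(image_id, account_ids):
    """Build a launch-permission grant for specific AWS accounts
    (shape assumed from the EC2 ModifyImageAttribute API)."""
    return {
        "ImageId": image_id,
        "LaunchPermission": {
            # Grantees can launch instances from the AMI, but they
            # cannot delete, share, or modify it
            "Add": [{"UserId": acct} for acct in account_ids],
        },
    }

params = share_ami_params("ami-0abcdef1234567890", ["111122223333"])
print(params["LaunchPermission"]["Add"])  # [{'UserId': '111122223333'}]
```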
Stripes multiple volumes together for greater I/O performance Some instance types can drive more I/O throughput than a single EBS volume provides
Not recommended for Amazon EBS, because the parity write operations of these RAID modes consume some of the IOPS available to your volumes They provide 20-30% fewer usable IOPS than a RAID 0 configuration With identical volume sizes and speeds, a 2-volume RAID 0 array can outperform a 4-volume RAID 6 array
Mirrors two volumes together, which also offers fault tolerance Not recommended, as it requires more EC2-to-EBS bandwidth than non-RAID configurations because data is written to multiple volumes simultaneously, and it does not provide any write performance improvement
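The RAID 0 vs RAID 6 comparison above can be made concrete with rough arithmetic. A sketch under stated assumptions: RAID 6 dedicates two volumes' worth of capacity to parity, and the 20-30% parity penalty from the notes is taken at its midpoint; the per-volume IOPS figure is arbitrary.

```python
def raid0_iops(per_volume_iops, n):
    """RAID 0 stripes with no parity: usable IOPS scale with volume count."""
    return per_volume_iops * n

def raid6_iops(per_volume_iops, n, parity_penalty=0.25):
    """RAID 6 loses two volumes to parity, and parity writes consume
    roughly 20-30% of the remaining IOPS (25% assumed here)."""
    return per_volume_iops * (n - 2) * (1 - parity_penalty)

# The note's comparison: 2-volume RAID 0 vs 4-volume RAID 6, equal volumes
print(raid0_iops(4000, 2))  # 8000
print(raid6_iops(4000, 4))  # 6000.0 -- RAID 0 wins with half the volumes
```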
without support for EBS-optimized throughput, network traffic can contend with traffic between your instance and your EBS volumes with EBS-optimized instances, the two types of traffic are kept separate
use EBS multi-volume snapshots to ensure that the snapshots are consistent do not need to stop your instance to coordinate between volumes to ensure consistency because snapshots are automatically taken across multiple EBS volumes
Ideal for large, sequential I/O workloads such as big data, EMR, ETL, DWH and log processing
- For small random I/O use gp2, not st1
- Up to 16 TiB
- Baseline throughput 40 MB/s per TiB, bursting up to 250 MB/s per TiB; max throughput 500 MB/s per volume
- Cannot be a boot volume
- Up to 99.9% durability
- No support for Multi-Attach
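The st1 throughput figures above (40 MB/s per TiB baseline, 250 MB/s per TiB burst, 500 MB/s per-volume cap) combine into a small calculator. A sketch using only the numbers from these notes; the function name is illustrative.

```python
def st1_throughput_mbs(size_tib):
    """st1: baseline 40 MB/s per TiB, burst 250 MB/s per TiB,
    both capped at 500 MB/s per volume."""
    baseline = min(40 * size_tib, 500)
    burst = min(250 * size_tib, 500)
    return baseline, burst

print(st1_throughput_mbs(1))  # (40, 250)
print(st1_throughput_mbs(4))  # (160, 500) -- burst hits the per-volume cap
```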
50 IOPS/GiB Up to 99.9% Durability
500 IOPS/GiB Up to 99.999% Durability
Suitable for OLTP and latency-sensitive apps (sub-millisecond) Up to 64,000 IOPS per volume Up to 16 TiB (io1 and io2) Supports Multi-Attach Most expensive
Highest-performance SSD volume, designed for business-critical, latency-sensitive transactional workloads Up to 256,000 IOPS per volume Up to 64 TiB Up to 99.999% Durability
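The IOPS-per-GiB ratios and per-volume caps above reduce to simple formulas. A sketch using the figures from these notes (50 IOPS/GiB for io1, 500 IOPS/GiB for io2, 64,000 IOPS per-volume cap, raised to 256,000 for io2 Block Express); function names are made up.

```python
def io1_max_iops(size_gib):
    """io1: up to 50 IOPS per GiB, capped at 64,000 IOPS per volume."""
    return min(50 * size_gib, 64_000)

def io2_max_iops(size_gib, block_express=False):
    """io2: up to 500 IOPS per GiB; the per-volume cap is 64,000,
    or 256,000 with io2 Block Express."""
    cap = 256_000 if block_express else 64_000
    return min(500 * size_gib, cap)

print(io1_max_iops(100))   # 5000
print(io2_max_iops(100))   # 50000
print(io2_max_iops(1000))  # 64000 (per-volume cap reached)
```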
High-performance apps Predictable 3,000 IOPS baseline and 125 MiB/s regardless of volume size Lets you increase IOPS independently of storage size
Suitable for boot disks Up to 16,000 IOPS per volume 99.9% Durability Up to 16 TiB Does not support Multi-Attach
boot disks general app
Suitable for less frequently accessed data; large, sequential, cold-data workloads
- Lowest cost
- Cannot be a boot volume
- Baseline 12 MB/s per TiB, bursting up to 80 MB/s per TiB; max throughput 250 MB/s per volume
- Up to 99.9% durability
- No support for Multi-Attach
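The same calculation works for sc1, just with its lower figures (12 MB/s per TiB baseline, 80 MB/s per TiB burst, 250 MB/s cap). A sketch using only the numbers from these notes; the function name is illustrative.

```python
def sc1_throughput_mbs(size_tib):
    """sc1: baseline 12 MB/s per TiB, burst 80 MB/s per TiB,
    both capped at 250 MB/s per volume."""
    baseline = min(12 * size_tib, 250)
    burst = min(80 * size_tib, 250)
    return baseline, burst

print(sc1_throughput_mbs(1))  # (12, 80)
print(sc1_throughput_mbs(4))  # (48, 250) -- burst hits the per-volume cap
```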
Available for io1/io2 only The file system needs to be cluster-aware (not XFS, EXT4, ...)
You can share snapshots, but only in the same Region
- Modify the permissions to share publicly (with all other AWS accounts, unencrypted only) or privately (with specified AWS accounts)
- To share to other Regions, copy the snapshots to the destination Region first
- Users that you have authorized can use the shared snapshots to create their own EBS volumes, while your original snapshot remains unaffected
- Can't share snapshots encrypted with the default AWS managed key
- To share an encrypted snapshot, you must also share the customer managed key
Up to 16 TiB, up to 64,000 IOPS, or 1,000 MiB/s throughput
A data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs When resources are deleted, they are retained in the Recycle Bin for a time period that you specify before being permanently deleted To enable and use Recycle Bin, you must create retention rules in the AWS Regions in which you want to protect your resources
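A retention rule as described above can be sketched locally. This assumes the shape of the Recycle Bin CreateRule API (ResourceType, RetentionPeriod); it is built as a plain dict, no AWS call is made, and the helper name is made up. Note rules are regional, so you create one per Region you want to protect.

```python
def retention_rule(resource_type, days):
    """Build a Recycle Bin retention rule (shape assumed from the
    rbin CreateRule API)."""
    assert resource_type in {"EBS_SNAPSHOT", "EC2_IMAGE"}
    return {
        "ResourceType": resource_type,
        # Deleted resources stay recoverable for this long,
        # then are permanently deleted
        "RetentionPeriod": {"RetentionPeriodValue": days,
                            "RetentionPeriodUnit": "DAYS"},
    }

rule = retention_rule("EBS_SNAPSHOT", 14)
print(rule["RetentionPeriod"]["RetentionPeriodValue"])  # 14
```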