AWS - Certified Security Specialty

Logging and Monitoring

Identity and Access Management

Data Protection

Further materials

Basic AWS security

Amazon.com is completely separated from the AWS network

Shared Responsibility Model

Amazon is responsible for security OF the cloud, while customers are responsible for security IN the cloud

Responsibilities of Amazon:

  • hardware
  • software
  • networking
  • facilities
  • global infrastructure
  • managed services

The Shared Responsibility Model CHANGES for different service types

Infrastructure Services

e.g. compute services (VPC, EC2, Auto Scaling)

You control the OS and any identity management system that provides access

Container services

e.g. RDS, EMR, Elastic Beanstalk

You are responsible for setting up and managing network controls (e.g. firewall rules) and for managing platform-level identity and access management separately from IAM

Abstracted services

e.g. high-level storage, database and messaging services like S3, Glacier, DynamoDB, SQS

You access the endpoints of these abstracted services using AWS APIs; AWS manages the underlying service components and the operating system on which they reside

Security IN the cloud

Visibility

e.g. AWS Config

Auditability

e.g. AWS CloudTrail

Controllability

e.g. AWS KMS (Multi Tenant)

e.g. CloudHSM (dedicated)

Agility

e.g. AWS CloudFormation

Automation

e.g. AWS OpsWorks, CodeDeploy

Services that span all controls

AWS IAM

AWS CloudWatch

AWS Trusted Advisor

Why should you trust AWS?

multiple compliance programs: https://aws.amazon.com/compliance

Policies

3 types of policies

AWS Managed Policies

created and administered by AWS

Customer Managed Policies

created and administered by a user

Inline Policies

Helpful if you want to make a one-to-one relationship between a policy and the principal entity

Can be used only for one principal entity

Power users can do everything Admins can, except manage IAM users and groups

S3

S3 bucket policy

applicable only to s3

can be broken down to user level, e.g. Alice can PUT but not DELETE objects

When to use?

Simple way to grant cross-account access to S3 without using IAM roles

When you want to keep access control in your S3 environment

Example: You have lots of employees in various groups and subgroups and you have one bucket to which only 2 accounts should have access. It's much easier to do it via Bucket Policy rather than denying access in all IAM policies

When your IAM policies bump up against the size limit

Limits for IAM policies: 2 kb for users, 5 kb for groups, 10 kb for roles. S3 bucket policies can be up to 20 kb.

If you use the policy generator, you have to add /* at the end of the bucket's ARN - without it you can still perform actions on any object (only actions against the bucket itself, e.g. listing all objects, will be denied)!!! On the other hand, if you put only /* then everyone else can still list the bucket's contents.
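
For example, a minimal deny statement scoped to objects (bucket name is hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyObjectDelete",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::examplebucket/*"
  }]
}

Note the /*: with it the statement matches object-level actions; "arn:aws:s3:::examplebucket" (without /*) would match only bucket-level actions like s3:ListBucket.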

Explicit DENY always overrides ALLOW!!!

ACLs

ACLs are as old as S3 and predate IAM.

Amazon recommends using IAM and bucket policy instead of ACLs.

BUT they can be useful for setting up access control mechanisms for individual objects

Bucket policies are limited to 20 kb in size so ACLs can become useful if your bucket policy grows too large

You cannot specify ACLs for IAM users from the browser console, but you can do it using the AWS CLI

You need account number and owner canonical user ID

The account number can be taken from: click your user name -> My Security Credentials -> Account Identifiers

The command aws s3api list-buckets will give you the canonical user ID for your IAM
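
For example (the ID in the output below is a made-up placeholder):

$ aws s3api list-buckets --query Owner.ID --output text
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be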

Policies are CASE SENSITIVE!!! e.g. you'll get an error when trying to assign a "DENY" rule, but "Deny" will be accepted


Conflicting policies

Explicit DENY always overrides ALLOW. So if you explicitly DENY access in a bucket policy and allow access in IAM, the DENY rule applies

By default, least privilege is followed: the default decision is always DENY, i.e. if there's no explicit ALLOW rule, the result is DENY.


Forcing encryption

You can force encrypted (HTTPS-only) downloads of S3 objects using a bucket policy.

First allow the action, then deny it with a condition:
"Condition": {
  "Bool": { "aws:SecureTransport": "false" }
}

Example:

{"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": {
"AWS": ""
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::yourbucketnamehere/
"
},
{
"Sid": "PublicReadGetObject",
"Effect": "Deny",
"Principal": {
"AWS": ""
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::yourbucketnamehere/
",
"Condition":{
"Bool":
{ "aws:SecureTransport": false }
}
}
]
}

Cross Region Replication

By default replication traffic uses SSL, so there is no need to specify additional policies to turn it on (no need for the aws:SecureTransport condition).

Objects are replicated only once; after being replicated, an object will not be replicated again

you can specify only one destination

Requirements

Source and destination buckets must have versioning enabled

Source and destination have to be in different regions

Amazon S3 must have permissions to replicate objects. When you set up replication for the first time, a suitable policy is generated for you

If the source bucket owner also owns the object, the bucket owner has full permissions to replicate the object. If not, the object owner must grant the bucket owner the READ and READ_ACP permissions via the object ACL

The best practice is to replicate CloudTrail logs to a bucket owned by a totally different account

If you delete an object without specifying a version ID, S3 adds a DELETE marker in the source bucket.

What is NOT replicated?

anything before CRR is turned on

Objects created with server-side encryption using customer provided (SSE-C) encryption keys

Objects created with server-side encryption using AWS KMS managed (SSE-KMS) encryption keys - UNLESS YOU EXPLICITLY ENABLE THIS OPTION

Objects to which the bucket owner doesn't have permissions (e.g. when the object owner is different from bucket owner)

Deletes to a particular VERSION of an object

If you specify Filter element in a replication configuration rule, S3 does not replicate the delete marker.
More: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr-what-is-isnot-replicated.html

If you don't specify the Filter element, Amazon S3 assumes the replication configuration is the prior version, V1. In the earlier version, Amazon S3 handled replication of delete markers differently.


If you specify an object version ID to delete in a DELETE request, Amazon S3 deletes that object version in the source bucket, but it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.

Forcing S3 to use CloudFront

SSL

You need a separate certificate for your ELB and separate certificate for CloudFront distribution

To use custom SSL certificate (for your custom domain name, e.g. example.com) you have to import it using ACM (AWS Certificate Manager)

To force S3 to use CloudFront you have to specify the Origin Access Identity and enable an option Restrict Bucket Access

you can grant read permissions when configuring CloudFront distribution or you can do it by yourself (by updating the Origin Access Identity permissions)

It takes a lot of time to distribute this change (from several hours up to 24 hours)

CloudFront is a global service!

AWS managed policies can change without any notification and can grant unnecessary permissions, e.g. AmazonEC2RoleforSSM also allows reading from and writing to S3 (details: https://cloudonaut.io/aws-ssm-is-a-trojan-horse-fix-it-now/)

If you give public access to an object via an ACL (so it can be accessed anonymously), a Deny in the Bucket Policy won't block it (the Deny applies only to authenticated users)


Infrastructure Security

S3 pre-signed URLs are typically generated via the SDK. The URLs expire after a defined time


They can also be generated via the CLI, e.g.
$ aws s3 presign s3://rzepsky/hello.txt --expires-in 300

AWS Security Token Service (STS)

grants temporary access to AWS

users can come from different sources

Federation (e.g. AD)

Cross-account users (from a different AWS account)

Federation with Mobile Apps (e.g. Facebook, Google, OpenID providers)

STS token lifetime is 1-36 hours
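
A minimal CLI sketch of getting temporary credentials (the role ARN and session name below are hypothetical):

# assume a role in another account for 1 hour
$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/CrossAccountRole \
    --role-session-name example-session \
    --duration-seconds 3600

The response contains temporary AccessKeyId, SecretAccessKey and SessionToken values.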

Web Identity Federation

Allows you to log in using your Facebook, Google or Amazon credentials

Cognito service

Cognito maps tokens from a Web ID provider (e.g. Facebook) to IAM roles to access AWS resources

data is synced across multiple devices

Behaves as Identity Broker between your app and Web ID providers

No need to store AWS credentials locally

recommended for mobile apps

uses OAuth 2.0

Definitions

User pools - user directories; users can login directly to user pools or indirectly via ID providers (e.g. FB)

Identity pools - create unique identities, can give temporary credentials to AWS resources

OAuth scope - options to verify identity, e.g. phone, email.

Implicit grant - you'll get your JWT token

Authorization code grant - Cognito will give you authorization code back to process it further on the backend side

Glacier

Low-cost cloud storage ($0.004 per GB/month)

data is stored in archives (zip or tar)

Archives are stored in containers called vaults

Vault Lock Policy

Used for:

configuring WORM (Write Once Read Many)

creating data retention policy (e.g. 5 years)

Once you attach the policy, the lock is in the in-progress state for 24 hours

after you accept it, the policy becomes immutable. In other words, once accepted and applied it cannot be changed or removed!
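
A sketch of the two-step locking flow (vault name and policy file are hypothetical):

# step 1: attach the policy, returns a lock ID (in-progress state starts)
$ aws glacier initiate-vault-lock --account-id - \
    --vault-name example-vault --policy file://lock-policy.json
# step 2: confirm within 24 hours to make the policy immutable
$ aws glacier complete-vault-lock --account-id - \
    --vault-name example-vault --lock-id <lock-id-from-step-1>

During the 24-hour in-progress window you can call abort-vault-lock instead to start over.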

CloudTrail

It doesn't support ALL AWS services

It records AWS API calls for your account and delivers log files

for example RDP/SSH sessions are NOT logged

Logs are delivered every 5 minutes of activity (with up to a 15-minute delay)

Log file integrity is validated by default (checks whether a log file was modified)

Every hour log files are delivered with 'digest' file to validate the log's integrity

It uses SHA-256 hashing and SHA-256 with RSA for digital signing
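
You can verify integrity from the CLI, e.g. (the trail ARN below is hypothetical):

# validate log files delivered since the given time against their digest files
$ aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/example-trail \
    --start-time 2019-01-01T00:00:00Z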

AWS Organizations

Allows for setting up Service Control Policy (policy applied to Organization Units to block access for certain services)

Allows for specifying permission boundary even for root account

Overrides any local policy

you can use SSE-S3 or SSE-KMS for encrypting CloudTrail logs

prevent logs from being deleted by configuring S3 MFA Delete

you can move/delete logs after every X days by using S3 Lifecycle management

CloudWatch

key components

CloudWatch - for metrics, alarms and notifications

CloudWatch Events - allows for configuring rules based on events

CloudWatch Logs - pushed from your systems/apps as well as other AWS services; stored indefinitely (not in S3)

Certificates for CloudFront have to be stored in the US East (N. Virginia) region or in IAM (certs can be imported to IAM only via the CLI)

AWS Config

to monitor access, AWS Config uses CloudTrail

to enable it in all regions you have to do it manually, region by region

Allows for

compliance auditing

security analysis

resource tracking

key terms

Config Items

Point-in-time attributes of resources

Configuration Recorder

configuration of Config that records and stores Config Items

records configuration change

Configuration snapshots

collection of Configuration Items

Configuration Stream

stream of changed Config Items

Configuration History

Collection of Config Items for a resource over time

Stores everything in S3 Bucket

requires an IAM role (with read-only permissions to all resources, write access to the S3 logging bucket, and publish access to SNS)

Cloud HSM

dedicated Hardware Security Module

provides secure key storage and cryptographic operations

you can, for example, generate your keys for EC2 here

you control keys (Amazon doesn't have access to your keys)

Compliant with FIPS 140-2 and Common Criteria EAL4

AWS Inspector

automated security assessment service

requires an agent installed on the EC2 instance

requires a role with the ec2:DescribeInstances permission

uses tags to determine which instances should be scanned

Rules can be evaluated periodically or when configuration change happened

Trusted Advisor

will advise you on Cost Optimization, Performance, Security and Fault Tolerance

For more than basic checks you have to upgrade your support plan to Business or Enterprise

AWS KMS (Key Management Service)

Key rotation options

A CMK with your own imported key material (in other words, a CMK not generated in AWS) does NOT allow automatic rotation (only manual). A CMK with AWS-generated key material can be rotated automatically (once a year)

AWS Managed Keys are rotated every 3 years automatically

AWS Managed Keys cannot be rotated manually!

encryption in EC2

once you remove the public key in the console, you can still log in to the instance using this key (the key remains accessible in the instance metadata)

CMK automatic rotation is by default disabled (but it's possible to enable it)

you can import your own SSH public key

a root EBS volume cannot be encrypted in place if it is already used unencrypted

to encrypt it: detach the volume, create a snapshot, copy the AMI and enable encryption on the copy

you can have multiple keys attached to the instance (e.g. for different users and each has different keys)

you cannot use KMS with SSH for EC2, but you can with CloudHSM

KMS Grants

used for temporary, granular permission

you should use Key Policies for static permissions and for explicit deny

a generated Grant Token can be passed to KMS API

CLI - important commands

create-grant

list-grants

revoke-grant

programmatically delegates permissions

Grants allow access, not deny
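
A minimal sketch of creating a grant (the key ID and role ARN below are hypothetical):

# allow a role to use the CMK for Encrypt/Decrypt via a grant
$ aws kms create-grant \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::123456789012:role/ExampleRole \
    --operations Encrypt Decrypt

The response contains a GrantId (used with revoke-grant) and a GrantToken that can be passed to KMS API calls immediately, before the grant has propagated.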

general characteristics

regional service

if you want to encrypt objects in the bucket, the key has to be in the same region as the bucket

requires choosing key administrator (who can manage the key) and key users (who can use the key)

service for controlling the encryption keys

keys are generated using Amazon's multi-tenancy HSM

the AWS CloudHSM uses dedicated hardware

if you encrypt an S3 object with KMS and then make that object public, the encrypted content cannot be displayed either anonymously or by a user without permissions to the KMS key:

<Error>
<Code>InvalidArgument</Code>
<Message>
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>7E8FBBAA6A8A9A6C</RequestId>
<HostId>
I8n34c6XU03lbq1VFs6dWF3JYV5NPTxxnZzzwV+zHOf+yCPcoR0UbONCBNqLrfekvYPkgoxNotM=
</HostId>
</Error>


BUT if you use SSE-S3 (Amazon stores the keys, not you), then when the object is public you can display it anonymously

CMK - Customer Master Key

1 set of key material per CMK (you cannot use someone else's key material if it's in use)

to do it manually

create a new CMK with no key material -> import new encrypted key material into that CMK -> change the CMK identifier in your app to the key ID of the new CMK

key deletion

if you imported your own key material you don't have to wait 7-30 days for deletion - you can delete it immediately

administrators can create, delete and manage keys

an administrator cannot use the key - you have to explicitly add the same user as a key user (not only as an administrator)

you can import your own (external key)

1) first download the wrapping key and import token
2) then import your key material (encrypted with the wrapping key) together with the token

uses the wrapping algorithm RSAES_OAEP_SHA_1

but CMK can never be exported!

KMS Key Policy

you can specify a condition, e.g. a policy condition to disable access after a given date

AWS KMS Condition Keys are predefined conditions you can use

for example, kms:ViaService limits use of a CMK to requests coming via a particular AWS service, e.g. only requests made through Lambda in a given region (see the sketch below)
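
A minimal key policy statement sketch using kms:ViaService (account, user and region are hypothetical):

{
  "Sid": "AllowUseOnlyViaS3",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::123456789012:user/ExampleUser" },
  "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "*",
  "Condition": {
    "StringEquals": { "kms:ViaService": "s3.us-east-1.amazonaws.com" }
  }
}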

to configure access for an external account you have to allow it in the Key Policy of the owner account, AND you have to grant the access in an IAM policy in the external account

if rotation is enabled (opt-in), the CMK is rotated automatically once a year; it can also be rotated manually on demand

KMS best practices

separate keys per business unit and data classification

CMK admins separated from users

limit KMS actions (no kms:*)

encryption context

a key-value pair, logged in clear text in CloudTrail, that is passed during encryption and must be passed again during decryption to ensure integrity

videos

AWS re:invent 2017: Best Practices for Implementing AWS Key Management Service (SID330)

AWS re:Invent 2017: A Deep Dive into AWS Encryption Services (SID329)

Best Practices for DDoS Mitigation on AWS

AWS re:Invent 2018: [REPEAT 1] Become an IAM Policy Master in 60 Minutes or Less (SEC316-R1)

AWS re:Invent 2018: Your Virtual Data Center: VPC Fundamentals and Connectivity Options (NET201)

Advanced Security Best Practices Masterclass

DDoS

AWS Shield protects against SYN/UDP Floods, Reflection and other layer 3 and 4 attacks

additional resources

re:Invent Video: DDOS Best Practices:

Shield should be enabled when you use ELB, CloudFront or Route53

mitigation

minimize the attack surface (e.g. by using Bastion Host with whitelisted IPs, the attack surface is limited to exposed, few hardened entry points)

Safeguard Exposed Resources (e.g. by using geo-restriction, CloudFront, Route53, WAFs)

in Route53 using alias Record Sets you can redirect traffic to CloudFront distribution, Private DNS

WAF

integrates with both ALB and CloudFront

ALB WAF is regional

only a few regions allow for integrating WAF with ALB

CloudFront distributions are global

you can configure the following:

whitelisting

blacklisting

counting requests that match your criteria

verifies the following:

IP

length of request

headers

strings that appear in requests

query string parameters

EC2 dedicated instances vs dedicated hosts

dedicated instances = EC2 instances that run on hardware dedicated to a single customer

Amazon may share this hardware with other instances from the same AWS account (if those instances are not dedicated instances)

charged by instance

dedicated hosts = you have a control on the physical server

some 3rd-party software licenses require you to run the software on a dedicated host

charged by host

AWS Certificate Manager

allows for automatic certificate renewal, unless the certificate was imported or is associated with a Route53 private hosted zone

you can use the cert in ALB or CloudFront

but you cannot export the certificate

Load Balancer

Forward secrecy - compromising long-term keys doesn't compromise past session keys (more: https://en.wikipedia.org/wiki/Forward_secrecy)

to have Perfect Forward Secrecy use ECDHE-... ciphers

It is recommended to use ELBSecurityPolicy-2016-08 security policy (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html)
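
A CLI sketch of attaching that security policy to an HTTPS listener (the ARNs are placeholders):

# create an HTTPS listener on an ALB with the recommended security policy
$ aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-cert-arn> \
    --ssl-policy ELBSecurityPolicy-2016-08 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>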

Network LB should be used if you need ultra high performance or you need to terminate TLS/SSL on the EC2 instance. Usually you should use ALB

At least 2 subnets have to be specified for ALB provisioning (1 subnet = 1 availability zone)

in Elastic Load Balancer you can terminate TLS/SSL connection either on the LB or on your EC2 instances

ALB supports TLS/SSL termination ONLY on the LB itself, and only supports HTTP/S

API gateway

Throttling - if there are too many requests (above the limit) the API Gateway replies with "429 Too Many Requests"

throttled by default

by default the steady-state limit is 10 000 rps (requests per second)

Burst limit is up to 5 000 requests across all APIs

you can enable caching (NOT enabled by default; the default TTL is 5 minutes and it can be raised up to 1 hour)

AWS Systems Manager

allows you to execute any command on the EC2 instance without SSH

requires the SSM agent installed

you can use this service with EC2, CloudFormation, Lambda etc.

common use case: automating common admin tasks on thousands of instances (e.g. based on tags)

Parameter Store (under EC2)

to securely store sensitive values (e.g. license key for installation)

you can reference them using their names

you can store them as plain text or you can encrypt them using KMS
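
A minimal sketch of storing and reading an encrypted parameter (the parameter name and value are hypothetical):

# store a value encrypted with KMS, then read it back decrypted
$ aws ssm put-parameter --name /prod/app/license-key \
    --value "ABC-123" --type SecureString
$ aws ssm get-parameter --name /prod/app/license-key --with-decryption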

FIPS 140-2 is a US government computer security standard used to approve cryptographic modules. Level 4 is the highest; AWS CloudHSM reaches Level 3.

AWS Hypervisor

the hypervisor automatically scrubs (sets to 0) unallocated EBS memory, so there is no risk of accessing someone else's data

EC2 instances are run on Xen Hypervisor

Windows EC2 instances can only be HVM (Hardware Virtual Machine) whereas Linux can be PV (paravirtualized) or HVM

Amazon recommends using HVM over PV

in PV, the CPU supports 4 privilege modes (rings): Ring 0 is used by the host OS, and the guest OS uses only Rings 1-3

NACLs

1 subnet = 1 NACL

VPC automatically comes with a default NACL allowing all inbound and outbound traffic

but a custom NACL by default denies all inbound and outbound traffic

a NACL can cover multiple subnets, but a subnet can have just 1 NACL (associating a new one replaces the old one)

NACLs are stateless

in a custom NACL the default outbound rule is to DENY all traffic

ensure you ALLOW ephemeral ports (1024-65535) for outbound traffic
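
For example, an outbound allow rule for ephemeral ports (the NACL ID is a placeholder):

# allow outbound TCP return traffic on ephemeral ports
$ aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --rule-number 120 --protocol tcp \
    --port-range From=1024,To=65535 \
    --egress --rule-action allow --cidr-block 0.0.0.0/0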

NACLs are assessed before SGs

NACLs are usually used to blacklist IPs

VPC

SG sits only in 1 VPC (you cannot assign SG between different VPCs)

you can have multiple VPCs in one region

there's no transitive peering (if VPC A is peered with B and B with C, then A cannot communicate with C unless you explicitly configure a peering between A and C)

1 Internet gateway for 1 VPC

creating a new VPC means creating new default route table, security group and NACL

subnets

1 subnet = 1 availability zone

by default there's no auto-assign public IP

the first 4 and the last IP address of each subnet are reserved by Amazon for: the network address, VPC router, DNS server, one reserved for future use, and the broadcast address
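
Example for subnet 10.0.0.0/24: 10.0.0.0 (network address), 10.0.0.1 (VPC router), 10.0.0.2 (DNS server), 10.0.0.3 (reserved for future use), 10.0.0.255 (broadcast address, reserved but not supported)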

VPC endpoints

2 types

Interface endpoint = Elastic Network Interface (ENI)

Gateway endpoint = similar to a Network Gateway (used for S3 and DynamoDB)

a VPC endpoint is an internal gateway for accessing other AWS services from private subnets

when a VPC endpoint is used, the source IP address of requests to the service is a private address, not a public one

VPC Flow Logs

Flow log is stored using CloudWatch Logs

in CloudWatch you have to specify a dedicated Log Group

requires a role to write logs to CloudWatch

you cannot tag a Flow log

not all traffic is recorded; e.g. DHCP, Windows license activation, DNS and instance metadata traffic are excluded

you cannot enable Flow Logs for VPCs that are peered outside your account

you cannot change a configuration after creating a Flow log
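
A minimal sketch of enabling Flow Logs for a VPC (the IDs and role ARN are placeholders):

# send all traffic metadata for the VPC to a CloudWatch Logs group
$ aws ec2 create-flow-logs \
    --resource-type VPC --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name example-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/FlowLogsRole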

Athena

interactive service to query data in S3 using standard SQL

pay per query ($5 per TB of data scanned)

serverless service

you have to create a database and a table pointing at the data before you can run queries against it

good for querying logs
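
A sketch of running a query from the CLI (the database, table and results bucket are hypothetical):

# count requests per source IP from a table over CloudTrail logs
$ aws athena start-query-execution \
    --query-string "SELECT sourceipaddress, count(*) FROM cloudtrail_logs GROUP BY sourceipaddress" \
    --query-execution-context Database=security_logs \
    --result-configuration OutputLocation=s3://example-athena-results/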

GuardDuty

needs 7-14 days to set a baseline (it learns what normal behaviour looks like)

charged based on the volume of CloudTrail events analyzed and the volume of DNS and VPC Flow Logs

NAT Instances vs Gateways

NAT Instances

NAT Instances can be found in AWS Marketplace

you must disable source/destination checks (enabled by default for all EC2 instances) for NAT Instances

you have to increase the instance size in case of bottlenecking

you can use a NAT Instance as a Bastion Host

NAT Gateways

NAT Gateways are recommended over NAT Instances

Bandwidth up to 10 Gbps

no need to patch NAT Gateways = more secure than NAT Instances

NAT Gateways are not associated with SG, they have automatically assigned public IP

NAT Gateways are highly available

NAT Gateways should be enabled in every availability zone

no need to disable source/destination checks like in NAT Instances

AWS Secrets Manager

similar to Parameter Store, but SM has built-in integration with RDS (Aurora, MySQL and PostgreSQL)

uses encryption in-transit and at rest using KMS

automatically rotates credentials

once rotation is enabled, SM immediately rotates the secret once to test the configuration

so don't enable rotation if your apps still use embedded credentials

CloudWatch Logs requires an agent installed and running on the EC2 instance, and the instance needs a role with permissions to send logs to CloudWatch

AD Federation (ADFS)

ADFS is Trusted ID provider

AWS is Trusted Relying Party

you have to configure Relying Party Trust with AWS as the Trusted Relying Party

provides SSO for users

gives temporary creds to AWS Console (STS API AssumeRoleWithSAML)

AWS Lambda

Function Policy - defines which AWS resources are allowed to invoke your function

Execution Role - defines to which actions and resources Lambda should have access

Basic Log permissions are given by default to Lambda (but only basic; the detailed logging with data events has to be enabled explicitly)

S3 and Lambda Data Events are NOT enabled by default in CloudTrail - you have to enable logging them explicitly, and an additional charge applies ($0.10 per 100,000 events)

Others

AWS doesn't provide a solution for Deep Packet Inspection. You can use 3rd party solutions like Alert Logic, Trend Micro, McAfee

Simple Email Service - by default EC2 throttles traffic over port 25. Better to use port 587 or 2587

Use cases

You want to set up guardrails across accounts

use SCP

You want to control creation of resources to specific regions

use IAM policies (use the aws:RequestedRegion condition; see the sketch below)
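
A minimal sketch of such a policy (the regions listed are just examples; in practice global services like IAM may need to be excluded from the deny):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideAllowedRegions",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": { "aws:RequestedRegion": ["eu-west-1", "eu-central-1"] }
    }
  }]
}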

You want to enable your developers to create roles safely


use Permission Boundaries (the PermissionsBoundary condition pointing to a policy with, for example, region restrictions; a user can then create a role, but only with that restricting policy attached)

You want to use Tags to scale permissions management

use IAM policies (force a user to create each resource with a specific tag using the aws:RequestTag condition; then control access using a combination of aws:RequestTag (the tag the user sets) and aws:ResourceTag (the existing tag to verify)). You can restrict the allowed tag keys by using ForAllValues:StringEquals.

You can also give each role a project tag and create a general policy that allows operations only on resources carrying your project tag, by using the ${aws:PrincipalTag/project} variable in the condition (see the sketch below)
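
A minimal sketch of the principal-tag pattern (the actions and tag key are just examples):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowOnlyOwnProjectInstances",
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": "*",
    "Condition": {
      "StringEquals": { "ec2:ResourceTag/project": "${aws:PrincipalTag/project}" }
    }
  }]
}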

If you select the Enable Private DNS Name option, the standard AWS KMS DNS hostname (https://kms.<region>.amazonaws.com) resolves to your VPC endpoint. Thanks to this, communication between the VPC and KMS does not go through the public service endpoints.

When Cognito receives a SAML assertion, it needs to be able to map SAML attributes to user pool attributes. When configuring Cognito to receive SAML assertions from an identity provider, you need to ensure that the IdP is configured to have Cognito as a relying party. API Gateway needs to understand the authorization being passed from Cognito, so you should update API Gateway to use an Amazon Cognito User Pools authorizer.

Basic Lambda permissions required to log to CloudWatch Logs include: CreateLogGroup,
CreateLogStream, and PutLogEvents.

if you want to change an in-progress Glacier Vault Lock policy, call the abort-vault-lock operation, fix the typo, and call initiate-vault-lock again.

sample questions

free tests

Amazon provides encryption client which is embedded into the AWS SDK and CLI

Client-side encryption workflow

Customer creates a CMK in KMS associated with Key ID

File/Object and CMK Key ID is passed to the AWS encryption client using SDK or CLI

The encryption client requests a data key from KMS using a specified CMK key ID

KMS uses CMK Key ID to generate unique data encryption key, which client uses to encrypt the object data

Minimum set of permissions that should be applied in the Key policies to allow users encrypt and decrypt data using CMK keys

"kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt", "kms:GenerateDataKey", "kms:DescribeKey"

you can create key ALIASes

each CMK can have multiple aliases pointing to it

an alias must be unique within an AWS account and region

If you're using the domain name that CloudFront assigned to your distribution, such as abc.cloudfront.net, you change the Viewer Protocol Policy for one or more cache behaviours to require HTTPS communication. In that configuration, CloudFront provides the SSL/TLS certificate.

If you're using your own domain name, such as example.com you need to change several CloudFront settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), import a certificate from a third-party certificate authority into ACM or the IAM certificate store, or create and import a self-signed certificate.

When you enable logging for a distribution, you specify the Amazon S3 bucket that you want CloudFront to store log files in. If you're using Amazon S3 as your origin, we recommend that you do not use the same bucket for your log files; using a separate bucket simplifies maintenance.

Use signed cookies in the following cases:

you want to provide access to multiple restricted files

you don't want to change your current URLs.

If you are subject to regulatory compliance like PCI or HIPAA you might be able to use AWS Marketplace rule groups to satisfy web application firewall requirements

When kms:GrantIsForAWSResource is true, only integrated AWS services can create grants

The AWS CLI command 'aws kms encrypt' is suitable for encrypting a file smaller than 4 KB

Users can reimport key material; however, it must be the same key material.

By default CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE)

Use Systems Manager Patch Manager to generate the report and also install the missing patches

After account B has uploaded objects to a bucket in account A, the objects are still owned by account B and account A doesn't have access to them. To fix this, add the option --acl "bucket-owner-full-control" when the object is uploaded via aws s3api put-object.
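
For example (the bucket and key names are hypothetical):

# upload from account B into account A's bucket, granting the bucket owner full control
$ aws s3api put-object --bucket examplebucket-account-a \
    --key data/report.csv --body report.csv \
    --acl bucket-owner-full-control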

Service Control Policies (SCPs) are guardrails to disable service access. They DO NOT grant access


In order to use your own DNS server you need to ensure that you create a new custom DHCP options set with the IP of the custom DNS server. You cannot modify the existing set so you have to create a new one.

Data key caching stores data keys and related cryptographic material in cache. When you encrypt or decrypt data the AWS Encryption SDK looks for a matching data key in the cache. Data key caching can improve performance, reduce costs, and help you stay within service limits as your application scales.

CMKs can directly encrypt data of at most 4 KB in size.

Redshift

Amazon Redshift uses a 4-tier, key-based architecture for encryption: the master key encrypts the cluster key, the cluster key encrypts the database key, and the database key encrypts the data encryption keys.

API Gateway Lambda authorizer (formerly custom authorizer) is a Lambda function that you provide to control access to your API methods.

WAF Sandwich = the EC2 instances running your WAF are included in an Auto Scaling group and placed between 2 ELBs