AWS Security Specialty
Identity & Access Management
Policies
Types
Identity-Based policies
Customer Managed Policies
created and administered by a user
AWS Managed Policies
Can be changed by AWS without any notification and may grant broader permissions than necessary
Those which are created and administered by AWS
Inline Policies
Helpful if you want to make a one-to-one relationship between a policy and the principal entity
Can be used only for one principal entity
Permissions to an Identity
Access control lists (ACLs)
cross-account permissions policies that grant permissions to the specified principal entity
cannot grant permissions to entities within the same account
Resource-based policies
grant permissions to a principal entity
Cross-account, must also use an identity-based policy to grant the principal entity access to the resource
inline policies to resources
Organizations SCPs
define the maximum permissions for account members of an organization
Session policies
programmatically create a temporary session for a role or federated user
limit the permissions that the role or user's identity-based policies grant to the session
"AssumeRole*" API operations
Permissions boundaries
defines the maximum permissions that the identity-based policies can grant to an entity
permissions boundaries do not reduce the permissions granted by resource-based policies
Power users - Admin except managing IAM users and groups
Case sensitive - lower case
Evaluation Logic Link
Explicit DENY > Explicit ALLOW > Default Implicit DENY
If there's no ALLOW rule then the default is DENY
All policies checked at once
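The evaluation order above can be sketched as a tiny pure-Python model (conceptual only — the real evaluator also handles permissions boundaries, SCPs, and session policies):

```python
def evaluate(statements):
    """Conceptual model of IAM policy evaluation.

    All applicable statements are considered together: an explicit Deny
    always wins, an explicit Allow wins over the default, and with no
    matching Allow the result is the implicit Deny.
    """
    effects = [s["Effect"] for s in statements]
    if "Deny" in effects:
        return "ExplicitDeny"
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"


# Explicit DENY > Explicit ALLOW > Default Implicit DENY
print(evaluate([{"Effect": "Allow"}, {"Effect": "Deny"}]))  # ExplicitDeny
print(evaluate([]))                                         # ImplicitDeny
```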
IAM Role
Switch Role
Role Chaining
Max 1 hour
"AssumeRole" action
Revoke Security Credentials
attach "AWSRevokeOlderSessions" inline policy
Any user who assumes the role after you revoked sessions is not affected
Maximum IAM role duration is 12 hours
Default 1 hour
must attach both a trust policy and an identity-based policy
Trust policies
define which principal entities (accounts, users, roles, and federated users) can assume the role
Request Components
Action
Principal
Resource
Use cases
Use Tags to scale permissions management
use IAM policies: create each resource with a specific tag using the RequestTag condition, then control access using a combination of RequestTag (tag set at creation) and ResourceTag (existing tag to verify). You can restrict the allowed tags by using ForAllValues:StringEquals.
Give each role a project tag and create a general policy that allows operations only on resources carrying your project tag, using the condition variable ${aws:PrincipalTag/project}
Controlling Access with Tags
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html
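As a sketch, a policy implementing the project-tag pattern above might look like this (the actions and the tag key are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
      }
    }
  }]
}
```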
Enable your developers to create roles safely
Permissions boundaries (use the iam:PermissionsBoundary condition pointing to a boundary policy; a user can create a role, but only with that boundary policy attached)
Set up guardrails across accounts
use SCP
Control creation of resources to specific regions
use IAM policies (use the aws:RequestedRegion condition)
conditions
Require MFA
"Condition": { "Null": { "aws:MultiFactorAuthAge": true }}
StringEquals "aws:RequestedRegion"
Allow with "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" }}
"aws:SourceIp":
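A hypothetical identity-based policy combining the conditions above (the action, region, and source CIDR are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:*",
    "Resource": "*",
    "Condition": {
      "Bool": {"aws:MultiFactorAuthPresent": "true"},
      "StringEquals": {"aws:RequestedRegion": "eu-west-1"},
      "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
    }
  }]
}
```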
Security Token Service (STS)
grants temporary credentials for access to AWS resources
users can come from different sources
Cross Account
Use with Roles for Cross-account access to users
Federation with Mobile Apps
Federation
SAML based SSO
GetFederationToken
STS token lifetime is 15 minutes to 36 hours
not stored with the user
generated dynamically when requested
supports SAML 2.0
Roles for EC2 running applications
AWS Organizations
Create Service Control Policies (OU permissions boundaries)
Allows limiting permissions even for the root user of member accounts
Overrides any local policy
SCPs only disable service access. They DO NOT grant access
IAM access advisor
displays a list of services and service last-accessed information
Enable SCPs on org root
AWS Organization Service Access Report
Cognito
Supports OAuth 2.0
Implicit grant - you'll get your JWT token
Authorization code grant - Cognito will give you authorization code back to process it further on the backend side
OAuth scopes - options to verify identity, e.g. phone, email.
Mobile Apps
data is synced across multiple devices
Supports AWS Mobile SDKs
Web Identity Federation (FB, Google, Amazon)
Cognito maps token from Web ID provider to IAM roles to access AWS resources
Behaves as Identity Broker between your app and Web ID providers
Identity Stores(Amazon, FB)
Must call "AssumeRoleWithWebIdentity" API
Setup requires client ID, client secret and scopes to authorize.
No need to store locally AWS credentials
User pools - user directories; users can login directly to user pools or indirectly via ID providers (e.g. FB)
Cognito Groups
groups to create collections of users to manage their permissions or to represent different types of users
Social IdP registration
Register with IdP, Add IdP to User Pool, Test IdP configuration
user sign-up and sign-in
User Directory
Social sign in
App for hosted Web UI
SAML provider
User pool authentication, generates JWTs (JSON Web Tokens)
Identity pools - create unique identities, can give temporary credentials to AWS resources
maps identities for users authenticated with providers
allow for unauthenticated identities (guest users)
authenticated identities
cognito user pool, external social Idp, SAML based or custom existing
temporary credentials are associated with a specific IAM role.
To accept a SAML assertion from an identity provider, you need to map SAML attributes to user pool attributes.
Need to ensure that the IdP is configured to have Cognito as a relying party.
API Gateway will need to be able to understand the authorization being passed from Cognito, so you should update API Gateway to use an Amazon Cognito User Pools authorizer.
Control Access to a REST API Using Cognito User Pools as Authorizer
must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer
Create list of OAuth scopes on the API method
corporate identity federation
SAML
Microsoft AD
fine-grained Role-Based Access Control (RBAC)
assign different IAM roles to different authenticated users
OpenID Connect support
Enterprise Identity Federation
gives temporary creds to AWS Console (STS API "AssumeRoleWithSAML")
Custom Broker or SAML 2.0
requires Trusted ID provider
eg. Microsoft ADFS
SAML based SSO
SAML to enable SSO from AWS to LDAP
AWS is Trusted Relying Party
have to configure Relying Party Trust with AWS as the Trusted Relying Party
Federated AD Groups to IAM Roles
General
IAM Credential Reports
List all users and status of their credentials (passwords, access keys, and MFA devices)
user, arn, user_creation_time, password_enabled, last_used, last_changed, next_rotation, mfa_active, access_key_active, last_rotated, last_used_date, last_used_region, last_used_service
Global service
Delegated/Partner external account access
ARN for Role
Create Role for account
External ID
Data Protection
KMS
Permissions
KMS Key Policy
AWS KMS Condition Keys are predefined conditions you can use
kms:ViaService
limits use of CMK requested from particular service, e.g. only allows requests which come from particular Lambda
to specify a condition, e.g. policy condition to disable access after a date
aws:MultiFactorAuthAge
Minimum set of permissions that should be applied in the key policy to allow users to encrypt and decrypt data using CMKs:
"kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey"
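A key-policy statement granting that minimum set might look like this (the account ID and role name are placeholders):

```json
{
  "Sid": "AllowUseOfTheKey",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```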
Default key Policy
Console
Allows Key Users to Use the CMK
Allows Key Users to Use a CMK for Cryptographic Operations
Allows Key Users to Use the CMK with AWS Services
Enables IAM policies to allow access to the CMK
Allows Key Administrators to Administer the CMK
Grants full permission to root
KMS API, SDK or CLI
Grants full permission to root
Enables IAM policies to allow access to the CMK
If no explicit rights are applied via a key policy then even the root account has no access
KMS Grants
CLI - important commands
list-grants
revoke-grant
create-grant
The AWS CLI command 'aws kms encrypt' is suitable for encrypting a file which is less than 4 KB
used for temporary, granular permission
programmatically delegates permissions
Grants allow access, not deny
to other AWS principals
a generated Grant Token can be passed to KMS API
the "kms:GrantIsForAWSResource" condition key set to true
allows a user to create grants on this CMK only when the grant is created on the user's behalf by one of the AWS services integrated with KMS
does not allow the user to create grants directly
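In a key policy this condition typically appears in a statement like the following sketch (the principal ARN is a placeholder):

```json
{
  "Sid": "AllowAttachmentOfPersistentResources",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
  "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"],
  "Resource": "*",
  "Condition": {"Bool": {"kms:GrantIsForAWSResource": "true"}}
}
```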
you should use Key Policies for static permissions and for explicit deny
IAM Key Admin
an administrator cannot use the key; you have to explicitly add the same user as a key user (not only as an administrator)
administrators can create, describe, enable/disable and delete keys (but not use them for cryptographic operations)
Key users (who can use the key)
users/roles that can use the key to encrypt and decrypt data
Delegate permissions
include the root principal of a trusted account within the CMK key policy
trusted account must delegate to users/roles in their account
CMK - Customer Master Key
Symmetric CMK or private Asymmetric CMK can never be exported
symmetric data keys can be exported
"GenerateDataKey" API or "GenerateDataKeyWithoutPlaintext"
the public portion of an asymmetric CMK can be exported
“GetPublicKey” API
Types
Customer managed CMK
import your own key material
Create CMK with no Key material
Import token expires in 24 hours
must be 256-bit symmetric key
Download public/wrapping key and import token
Generate Key material (OpenSSL)
manually rotate
remap key alias or change CMK identifier in your app to the key ID for the new CMK
create new CMK with no key material -> import new encrypted key material into that CMK
can re-import your copy of the key material to reuse key
the re-imported key material must be identical to the original
can set an expiration period
does NOT allow for automatic rotation
can delete imported key material on demand
store outside of AWS for security
key material generated by AWS
do not have an expiration time
can rotate automatically every year
must enable
or on-demand manually
cannot be deleted immediately
have to wait 7-30 days for deletion
can be manually disabled or scheduled for deletion
CMK itself is deleted, not just the underlying key material
cannot be used if it is scheduled for deletion
AWS managed CMK
generated by AWS for service use
rotated every 3 years automatically
cannot be rotated manually
ex. aws/service-name
cannot be deleted
1 key per CMK (you cannot use someone else's key if it's in use)
ALIAS - interchangeable with ARN/keyID
must be unique in the AWS account and region
each CMK can have multiple aliases
Customer Data Keys (CDK)
Envelope encryption
no size limit
potentially reduce the number of API calls to KMS
Data key caching stores data keys and related cryptographic material in cache. When you encrypt or decrypt data the AWS Encryption SDK looks for a matching data key in the cache.
Data key caching can improve performance, reduce costs, and help you stay within service limits as your application scales.
encryption key that is used to protect data.
CMKs can directly encrypt data up to 4 KB in size.
Unique ARN contains Key ID
KMS best practices
limit KMS actions (no kms:*)
separate keys per business unit and data classification
CMK admins separated from users
If you select the Enable Private DNS Name option, the standard AWS KMS DNS hostname (https://kms.<region>.amazonaws.com) resolves to your VPC endpoint. Thanks to this, the communication between the VPC and KMS will not go through the public service endpoints.
supports VPC endpoint
to monitor attempts to use the key while it is disabled, create an Amazon CloudWatch alarm
Full key rotation integrated with RDS, coded integration through Lambda for other services
regional service
Key must be in same region as object
keys are generated using Amazon's multi-tenant HSMs
Client Encryption
embedded in AWS SDK and CLI
Client-side encryption workflow
Customer creates a CMK in KMS associated with Key ID
File/Object and CMK Key ID is passed to the AWS encryption client using SDK or CLI
The encryption client requests a data key from KMS using a specified CMK key ID
KMS uses CMK Key ID to generate unique data encryption key, which client uses to encrypt the object data
SDK supports data key caching
AWS SDKs, AWS Encryption SDK, the Amazon DynamoDB Client-side Encryption, and the Amazon S3 Encryption Client
For data outside of AWS services
AWS Encryption SDK
generates a unique data key for each data object that it encrypts
encryption context
logged in clear text within CloudTrail
key-value pair
provide additional authenticated information
passed during encryption and then during decryption to ensure the integrity
Commands
kms:ReEncrypt
allow KMS to re-encrypt the data keys, without revealing any plaintext
change the customer master key (CMK) under which data is encrypted
GenerateDataKeyWithoutPlaintext
useful for systems that need to encrypt data at some point, but not immediately.
kms:DescribeKey
allow your app to retrieve information about the CMK's
encrypt
--key-id, --plaintext, --encryption-context, --grant-tokens, --encryption-algorithm
KMS update alias
aws kms update-alias --alias-name <value> --target-key-id <value>
decrypt
If the ciphertext was encrypted under a symmetric CMK, you do not need to specify the CMK or the encryption algorithm.
--ciphertext-blob, --encryption-context, --grant-tokens,--key-id
To access an encrypted resource, the principal needs to have permissions to use the resource, as well as to use the encryption key that protects the resource
GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or signed with SigV4
Cloud HSM
dedicated Hardware Security Module
EAL-4
provides secure key storage and cryptographic operations
you control keys (Amazon doesn't have access to your keys)
you can for example generate here your keys to EC2
FIPS 140-2 level 3
key_mgmt_util
CLI to manage keys
Sign command
AWS KMS custom key store
configure your own CloudHSM cluster and authorize AWS KMS to use it as a dedicated key store for your keys
provides Java Cryptography Extensions (JCE) API's
S3 cannot integrate directly with an HSM
Glacier
data is stored in archives (zip or tar)
Archives are stored in containers called vaults
Vault Lock Policy
Used for:
creating data retention policy (e.g. 5 years)
configuring WORM (Write Once Read Many)
Once you initiate the lock, the policy stays in the in-progress state for 24 hours
after you complete the lock, the policy becomes immutable. In other words, once it is accepted and applied it cannot be changed or removed!
if you want to change it while it is still in progress, call the abort-vault-lock operation, fix the policy, and call initiate-vault-lock again.
Low cost cloud storage ($0.004 per GB/month)
Parameter Store
to securely store sensitive values (e.g. license key for installation)
you can store them as plain text or you can encrypt them using KMS
you can reference them using their names
Systems Manager
uses KMS customer master keys to encrypt the parameter values when you create or change them
S3
Encryption
if you encrypt an S3 object with KMS and then make that object public, the encrypted content cannot be displayed either anonymously or as a user without permissions to the KMS key
the returned error: "Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4."
BUT if you use SSE-S3 (Amazon manages the keys, not you), then when the object is public it can be displayed anonymously
Client-Side Encryption
Master Key stored in application
Use AWS SDK
CMK from KMS
Forcing encryption
You can force encrypted transport when downloading S3 objects using a bucket policy.
First allow the needed actions, then add a Deny with "Condition": {"Bool": {"aws:SecureTransport": "false"}}
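A full Deny statement for this pattern could look like the following sketch (the bucket name is a placeholder; an Allow statement for the desired actions would accompany it):

```json
{
  "Sid": "DenyInsecureTransport",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::example-bucket",
    "arn:aws:s3:::example-bucket/*"
  ],
  "Condition": {"Bool": {"aws:SecureTransport": "false"}}
}
```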
Server-Side Encryption
object metadata is not encrypted
store sensitive metadata separately, e.g. in an encrypted database
SSE - S3
"s3:x-amz-server-side-encryption": "AES256"
SSE - KMS
"s3:x-amz-server-side-encryption":"aws:kms"
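To force SSE-KMS on uploads, a bucket policy can deny PutObject requests that lack the header — a sketch with a placeholder bucket name:

```json
{
  "Sid": "DenyUnencryptedUploads",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::example-bucket/*",
  "Condition": {
    "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
  }
}
```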
SSE - C
Cross Region Replication
Uses SSL by default
you can specify only one destination
Source and destination have to be in different regions
If you delete object without specifying version ID the S3 will add the DELETE marker in the source bucket.
If you specify Filter element in a replication configuration rule, S3 does not replicate the delete marker.
More:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr-what-is-isnot-replicated.html
If you specify an object version ID to delete in a DELETE request, Amazon S3 deletes that object version in the source bucket, but it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.
If you don't specify the Filter element, Amazon S3 assumes the replication configuration is a prior version, V1. In the earlier version, Amazon S3 handled replication of delete markers differently.
Cross Account
By default, an S3 object is owned by the AWS account that uploaded it.
object owner can update the ACL either during a put or copy operation
add option "--acl bucket-owner-full-control" to aws s3api put-object
Require that objects grant the bucket owner full control
Add a bucket policy that grants users access to put objects in your bucket only when they grant you (the bucket owner) full control of the object.
"s3:x-amz-acl": "bucket-owner-full-control"
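The bucket policy condition above can be sketched as follows (the bucket name and account ID are placeholders):

```json
{
  "Sid": "RequireBucketOwnerFullControl",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::example-bucket/*",
  "Condition": {
    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
  }
}
```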
Replication configuration
Add "AccessControlTranslation" element
grants full permissions to the owner of the destination bucket
Object owner must grant the bucket owner the READ and READ ACP permissions via the object ACL
Source and destination buckets must have versioning enabled
NOT replicated?
Objects created with server-side encryption using AWS KMS managed (SSE-KMS) encryption keys - UNLESS YOU EXPLICITLY ENABLE THIS OPTION
anything before CRR is turned on
Objects created with server-side encryption using customer provided (SSE-C) encryption keys
Deletes to a particular VERSION of an object
Objects to which the bucket owner doesn't have permissions (e.g. when the object owner is different from bucket owner)
Amazon S3 must have permissions to replicate objects. When you're doing it for the first time then a custom policy is generated
objects are replicated only once; after replication you cannot replicate them again
Support SSE-S3 and SSE-KMS
No SSE-C support
Bucket Policy
Use?
When your IAM policies bump up against the size limit
Limits for IAM: 2 kb for users, 5 kb for groups, 10 kb for roles. S3 bucket policies have limits up to 20 kb.
When you want to keep access control in your S3 environment
Example: You have lots of employees in various groups and subgroups and you have one bucket to which only 2 accounts should have access. It's much easier to do it via Bucket Policy rather than denying access in all IAM policies
Simple way to grant cross-account access to S3 without using IAM roles
have to add /* at the end of the bucket's ARN for the policy to match objects
without it you can still perform actions on any object
only actions against the bucket itself are covered
S3 Resource policy
can be broken to user level, e.g. Alice can PUT but not DELETE objects
Explicit DENY always overrides ALLOW!!!
if you explicitly DENY access in bucket policy and allow access in IAM then a DENY rule will be applicable
allow control of unauthenticated access and allow conditions
ACLs
Must use CLI or API to specify ACLs for IAM users
You need account number and owner canonical user ID
The command
aws s3api list-buckets
will give you the canonical user ID for your IAM
The account number can be taken from: click your user name -> My Security Credentials -> Account Identifiers
Amazon recommends using IAM and bucket policy instead of ACLs.
Bucket policies are limited to 20 kb in size so ACLs can become useful if your bucket policy grows too large
BUT they can be useful for setting up access control mechanism for individual object
If you give public access via ACL to an object (so it can be accessible anonymously), the Deny permission in Bucket Policy won't work (it's applicable only to authenticated users)
ACLs are as old as S3 and predate IAM.
Use
object/file level permissions
if bucket policy grows too large
S3 pre-signed URLs are typically generated via SDK. The URLs expire after a defined time
They can be generated also via CLI, e.g.
$ aws s3 presign s3://rzepsky/hello.txt --expires-in 300
Forcing S3 to use CloudFront
Have to specify the Origin Access Identity and enable an option Restrict Bucket Access
you can grant read permissions when configuring CloudFront distribution or you can do it by yourself (by updating the Origin Access Identity permissions)
It takes a long time to distribute this change (from several hours up to 24 hours)
condition
s3:LocationConstraint
Server Access Logging
requester, bucket name, request time, request action, response status, and an error code
detailed records for the requests that are made to a bucket
AWS Secrets Manager
automatically rotates credentials
once rotation is enabled, Secrets Manager immediately rotates the secret once to test the configuration
don't enable it if your apps use embedded credentials
uses encryption in-transit and at rest using KMS
similar to Parameter Store, but SM has built-in integration with RDS, Aurora, MySQL and PostgreSQL
Difference over Parameter Store
charge on use, encrypted only, auto-rotate, password generation, cross-account access
Secret sign-in failures
Multi-user rotation
Exponential backoff
Redshift
uses a 4-tier key-based architecture for encryption: the master key encrypts the cluster key > the cluster key encrypts the database key > the database key encrypts the data encryption keys.
Run in VPCs
DynamoDB
encryption at rest
Uses KMS CMK
encrypts customer data in table, primary keys, local and global secondary indexes
Shared Responsibility Model
Changes depending on service type
Amazon.com is completely separated from AWS network
https://d1.awsstatic.com/security-center/Shared_Responsibility_Model_V2.59d1eccec334b366627e9295b304202faf7b899b.jpg
Whitepapers:
https://aws.amazon.com/security/security-resources/
KMS Cryptographic Details Paper
DDoS Best Practices Paper
Logging Paper
KMS Best Practices Paper
Incident Response Paper
Security Automation
Terminate EC2
API call > Cloudwatch Event > Event Rule Targets Lambda > Lambda Executes action
Change Security Groups
Modify WAF rules
AWS Inspector remediation
KMS key rotation
Tag Resource (Creation or Later)
Account Credential Rotation
Triggers
Cloudwatch
Cloudtrail
AWS config
AWS Lambda Function Logging in Python
To output logs from your function code, you can use the print method, or any logging library that writes to stdout or stderr.
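A minimal handler illustrating the point (the handler name and return shape are illustrative):

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Anything written to stdout/stderr is captured by CloudWatch Logs,
    # provided the execution role allows logs:CreateLogGroup,
    # logs:CreateLogStream and logs:PutLogEvents.
    print("received event keys:", sorted(event))
    logger.info("processing event")
    return {"statusCode": 200}
```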
Created by Emilio Nazario
https://www.linkedin.com/in/emilionazario/
Logging & Monitoring
CloudWatch
CloudWatch
for metrics, alarms and notifications
specify metrics to search logs for values
CloudWatch Logs
pushed from your systems/apps as well as other AWS services
Logs require an agent installed and running on the EC2 instance, and the EC2 instance needs a role with permissions to send logs to CloudWatch
stored indefinitely (not in S3)
Basic Lambda permissions required to log to CloudWatch Logs include: CreateLogGroup,
CreateLogStream, and PutLogEvents.
Log Stream
sequence of log events that share the same source
each separate source of logs makes up a separate log stream
Log group
group of log streams that share the same retention, monitoring, and access control settings
CloudWatch Events
allows for configuring rules based on events
eg. monitor root user activity
Events
indicates a change in your AWS environment
Targets
processes events
Rules
matches incoming events and routes them to targets for processing
Encrypt Log Data in CloudWatch Logs Using AWS KMS
CloudTrail
Doesn't support ALL AWS services
Records AWS API calls for your account and delivers log files to S3
Logs are delivered roughly every 5 minutes (with up to 15 minutes delay)
Support SSE-S3 or SSE-KMS for encrypting logs
Default S3 server-side encryption (SSE-S3)
Log File integrity validation
Every hour log files are delivered with 'digest' file to validate the log's integrity
Uses SHA-256 hashing and SHA-256 with RSA for digital signing
must enable
Example : RDP/SSH sessions are NOT logged
Prevent logs from being deleted by configuring S3 MFA Delete
S3 and Lambda Data Events are NOT enabled by default in CloudTrail - you have to enable logging them explicitly and an additional charge applies ($0.10 per 100,000 events)
Can separately add Management Events and Data Events
best practice is to replicate CloudTrail logs to the S3 bucket owned by different account
CloudTrail trigger
create Cloudwatch events to trigger on event parameter
Lambda Functions , function takes action
AWS Config
to monitor access, AWS Config uses CloudTrail
key terms
Configuration Recorder
Detect changes and capture as configuration items
records configuration change
Stores everything in S3 Bucket
Configuration History
Collection of Config Items for a resource over time
Configuration snapshots
collection of Configuration Items
Configuration Stream
Once created Config Items are added to stream
Configuration Items
Point-in-time attributes of resources
Generated when the configuration of a resource changes
to enable it in all regions you have to do it manually, region by region
requires an IAM role (with read-only permissions to all resources, write access to the S3 logging bucket, and publish access to SNS)
Allows for
audit & compliance
Encryption in use
security analysis
changes over time
IAM permissions
EC2 security group rules
resource tracking
list all resources
Triggers
Periodic at set frequency 1 to 24 hours
Receive notification of change
Configuration changes (created, changed, deleted)
Config rules cannot be a direct target of a CloudWatch event
can trigger lambda functions directly - custom config rules
AWS Inspector
Requires a role permission "ec2:DescribeInstances"
uses tags to determine which instances should be scanned
require installed agent on EC2 instance
automated security assessment service
Assess EC2
VPC Flow Logs
Flow log is stored using CloudWatch Logs
in CloudWatch you have to specify a dedicated Log Group
requires a role to write logs to CloudWatch
you cannot change a configuration after creating a Flow log
not all traffic is recorded, e.g. DHCP, Windows license activation, DNS, metadata traffic
you cannot enable Flow Logs for VPCs that are peered outside your account
you cannot tag a Flow log
AWS doesn't provide a solution for Deep Packet Inspection. You can use 3rd party solutions like Alert Logic, Trend Micro, McAfee
contain traffic metadata only - no content
Athena
interactive service to query data in S3 using standard SQL (log querying)
pay per query ($5 per TB of data scanned)
you have to create a database and a table pointing at the data in S3 before you can run queries against it
Trusted Advisor
For more than basic checks you have to upgrade your support plan to Business or Enterprise
Advice on Cost Optimization, Performance, Security, Fault Tolerance
Checks security groups rules that allow unrestricted access to specific ports
GuardDuty
Built-in lists
Can upload custom lists
Ingests CloudTrail, VPC Flow logs, DNS Logs
charge based on amount of CloudTrail Events and volume of DNS and VPC Flow Logs
needs 7-14 days to set a baseline
Detect if any of your EC2 instances are exhibiting unusual behavior
Trigger Cloud Watch events
subscribe to SNS for notifications
Infrastructure Security
Perimeter
WAF
inspects the following:
headers
query string parameters
IP
length of request
strings that appear in requests
integrates with both ALB and CloudFront
ALB WAF is regional
only few regions allow for integrating WAF with ALB
you can configure the following:
counting requests that match your criteria
whitelisting
blacklisting
WAF Sandwich - EC2 instance running your WAF is included in Auto Scaling group and placed in between 2 ELBs
If you are subject to regulatory compliance like PCI or HIPAA you might be able to use AWS Marketplace rule groups to satisfy web application firewall requirements
Kinesis Firehouse for logging
S3 to store logs
AWS Shield
mitigation
minimize the attack surface (e.g. by using Bastion Host with whitelisted IPs, the attack surface is limited to exposed, few hardened entry points)
Enabled when you use ELB, CloudFront or Route53
Safeguard exposed resources (e.g. by using geo-restriction, CloudFront, Route53, WAFs)
in Route53 using alias Record Sets you can redirect traffic to CloudFront distribution, Private DNS
AWS DDoS protection whitepaper:
https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf
Shield Standard
Infrastructure layer 3 and 4 attacks
CloudFront and Route 53
Shield Advanced
Business/Enterprise support
AWS DRT mitigation support
Cost protection
Attack notification
Elastic IP, ELB, CloudFront, Global Accelerator, Route 53
Application layer 7 attacks
Elastic Load Balancer
Terminate TLS/SSL connection either on the LB or on your EC2 instances
Forward secrecy - compromise of long-term keys doesn't compromise past session keys
to have Perfect Forward Secrecy use ECDHE-... ciphers
It is recommended to use ELBSecurityPolicy-2016-08 security policy
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
At least 2 subnets have to be specified for ALB provisioning (1 subnet = 1 availability zone)
Application Load Balancers
Support multiple SSL certs
HTTP and HTTPS level
ALB supports ONLY TLS/SSL termination on the LB and only supports HTTP/S
Network Load Balancers
TCP and TLS level
For end-to-end encryption, you need to terminate SSL /TLS on the EC2 instance and this is only possible using the Network Load Balancer or Classic Load Balancer.
Need to use TCP
preserve source IP
used if you need ultra high performance
VPC
VPC Endpoints
2 Types
Gateway endpoint = similar to Network Gateway
DynamoDB
S3
Interface endpoint = Elastic Network Interface (ENI)
services powered by AWS PrivateLink
when VPC endpoint is used the source IP address uses private address, not public
VPC endpoint is internal gateway for accessing other AWS services from private subnets
specify vpc endpoint Condition
aws:sourceVpce
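For example, a bucket policy can deny all access except via a specific VPC endpoint (the bucket name and endpoint ID are placeholders):

```json
{
  "Sid": "AllowOnlyFromVpcEndpoint",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::example-bucket",
    "arn:aws:s3:::example-bucket/*"
  ],
  "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}}
}
```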
subnets
the first 4 and the last IP address of each subnet are reserved by Amazon for: the network address, VPC router, DNS server, future use, and the broadcast address
1 subnet = 1 availability zone
by default there's no auto-assign public IP
Custom DNS server - need to ensure that you create a new custom DHCP options set with the IP of the custom DNS server.
cannot modify the existing set so you have to create a new one.
a subnet can only have 1 NACL
NACLs
Default NACL allowing all inbound and outband traffic
Custom NACL denies all inbound and outbound traffic
NACLs are stateless
outbound rule by default is to DENY any traffic
ensure to ALLOW ephemeral ports (1024-65535) in outbound traffic
Used to blacklist IPs
Assessed before SG
NACL can have multiple subnets but a subnet can have just 1 NACL (old one is replaced by the newest one)
evaluate rules in order
Subnet level
allow & deny
General
you can have multiple VPCs in one region
1 Internet gateway per 1 VPC
No transitive peering
SG sits only in 1 VPC (you cannot assign SG between different VPCs)
creating a new VPC means creating new default route table, security group and NACL
NAT Instances vs Gateways
NAT Gateways
Recommended over NAT Instances
No need to patch
Not associated with SG, they have automatically assigned public IP
Bandwidth up to 10 Gbps
NAT Gateways should be enabled in every availability zone
Highly available
NAT Instances
Increase instance size in case of bottlenecking
Can be used as Bastion Host
Can be found in AWS Marketplace
Disable source/destination checks (enabled by default for all EC2 instances)
Security Groups
allow only
Instance level
5 SG per instance
evaluate all rules at once
a new SG denies all inbound traffic and allows all outbound
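The contrast with NACLs can be sketched the same way (again not an AWS API): security groups hold only ALLOW rules, so all rules are evaluated together and traffic passes if any rule matches; everything else is implicitly denied. The rule set is an invented example:

```python
# Illustrative SG evaluation: allow-only rules, no ordering, no DENY.
# Traffic is permitted if any rule's port range covers the port.
def sg_allows(rules, port):
    return any(low <= port <= high for (low, high) in rules)

inbound = [(22, 22), (443, 443)]  # e.g. SSH and HTTPS allowed in

print(sg_allows(inbound, 443))  # True
print(sg_allows(inbound, 80))   # False (implicit deny)
```

Because there are no DENY rules, rule numbers and ordering are unnecessary; that is the structural reason SGs cannot be used to blacklist IPs.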
DNS
create a set of DHCP options to override default
AWS Certificate Manager
you can use the cert in ALB or CloudFront
cannot export the certificate
supports automatic certificate renewal, unless the certificate was imported or is associated with a Route 53 private hosted zone
SSL
To use custom SSL certificate (for your custom domain name, e.g. example.com) you have to import it using ACM (AWS Certificate Manager)
Certificates for CloudFront have to be stored in US East (N. Virginia) region or in IAM (certs can be imported to IAM only via CLI)
You need a separate certificate for your ELB and separate certificate for CloudFront distribution
IAM Certificate manager
supports deploying server certificates in all Regions
IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage.
must obtain your certificate from an external provider for use with AWS
use only when you must support HTTPS connections in a Region that is not supported by ACM
regional service certificates must be imported in each region where they will be used
API gateway
Supports caching, TTL from 5 min up to 1 hour; must be enabled explicitly
Throttling - if there are too many requests (above the limit) the API Gateway replies with "429 Too Many Requests"
throttled by default
Burst limit is up to 5 000 requests across all APIs
by default the steady-state limit is 10 000 rps (requests per second)
API Gateway Lambda authorizer (formerly custom authorizer) is a Lambda function that you provide to control access to your API methods.
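The throttling limits above behave like a token bucket: tokens refill at the steady-state rate (10,000 rps by default) up to the burst size (5,000), and a request that finds the bucket empty gets a 429. A minimal sketch with deliberately tiny limits for demonstration:

```python
# Illustrative token-bucket model of API Gateway account-level throttling.
class TokenBucket:
    def __init__(self, rate=10_000, burst=5_000):
        self.rate, self.burst = rate, burst
        self.tokens = burst   # bucket starts full
        self.last = 0.0

    def request(self, now):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200
        return 429  # Too Many Requests

bucket = TokenBucket(rate=10, burst=2)  # tiny limits for the demo
print([bucket.request(0.0) for _ in range(3)])  # [200, 200, 429]
```

The third simultaneous request exhausts the burst and is rejected; waiting lets the steady-state rate refill tokens.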
CloudFront
Use signed cookies in the following cases:
you want to provide access to multiple restricted files
you don't want to change your current URLs.
Distributions are Global
Use pre-signed URLs
restrict access to individual files
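Both signed URLs and signed cookies are driven by the same policy document; a custom policy with a wildcard `Resource` is what lets signed cookies cover multiple restricted files. A sketch of that document (the distribution domain is a placeholder, and the RSA signing step is omitted):

```python
import json, time

# Hedged sketch of a CloudFront custom policy: a wildcard Resource grants
# access to many files, DateLessThan sets the expiry (Unix epoch seconds).
expires = int(time.time()) + 3600  # valid for one hour
policy = {
    "Statement": [{
        "Resource": "https://d111111abcdef8.cloudfront.net/downloads/*",
        "Condition": {"DateLessThan": {"AWS:EpochTime": expires}}
    }]
}

print(json.dumps(policy))
```

With signed cookies this policy is base64-encoded and signed into `CloudFront-Policy` / `CloudFront-Signature` cookies, so the existing URLs never change.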
Default domain name
Change Viewer Protocol Policy
HTTPS Only
Redirect HTTP to HTTPS
CloudFront default SSL/TLS certificate
enable distribution logging
specify S3 bucket
Custom domain name
Use SSL/TLS certificate provided by ACM, import from third-party or IAM certificate store
must request or import the certificate in the US East (N. Virginia) Region
HTTPS between viewers and CloudFront
certificate that was issued by a trusted certificate authority
to require HTTPS between viewers and CloudFront, you must switch the AWS Region to US East (N. Virginia) in the AWS Certificate Manager console before you request or import a certificate
or you can use a certificate provided by AWS Certificate Manager
HTTPS between CloudFront and a custom origin
If the origin is not an ELB load balancer, the certificate must be issued by a trusted CA
If your origin is an ELB load balancer, you can also use a certificate provided by ACM
If you want to require HTTPS between CloudFront and your origin, and you're using an ELB load balancer as your origin, you can request or import a certificate in any region.
VPN
AWS VPN CloudHub
multiple AWS Site-to-Site VPN connections
AWS Site-to-Site VPN
Virtual private gateway (2 endpoints) to customer gateway
IPsec
AWS Client VPN
OpenVPN-based VPN client (TLS VPN session)
Third party EC2 appliance in VPC
Compute
EC2
EC2 dedicated instances vs dedicated hosts
dedicated hosts = you have control over the physical server
some 3rd-party software licenses require you to run the software on a dedicated host
charged by host
dedicated instances = EC2 instances that are run on hardware that is dedicated to the single customer
charged by instance
Amazon may share this hardware with other instances from the same AWS account (if those instances are not dedicated instances)
Key Pair
once you remove the public key in the console, you can still log in to the instance with that key (the key remains accessible in instance metadata)
you can have multiple key pairs attached to an instance (e.g. a different key for each user)
Create Key Pair
EC2 console, the command line, or third-party tool and then import the public key to EC2
EC2 private key file permissions
protected from read and write operations by any other user: chmod 400 or 600
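The same permission lockdown can be done from Python; a small runnable sketch using a throwaway temp file in place of a real `.pem` key:

```python
import os, stat, tempfile

# Demonstrates locking down a private key file the way SSH expects:
# mode 0o400 leaves only the owner's read bit set (equivalent to chmod 400).
fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

os.chmod(key_path, 0o400)
mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # 0o400 on POSIX: readable by the owner only

os.chmod(key_path, 0o600)  # restore write bit for portable cleanup
os.remove(key_path)
```

SSH clients refuse keys whose permissions are broader than this, which is why `chmod 400 my-key.pem` is the first step after downloading a key pair.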
you cannot use KMS with SSH for EC2. But you can with CloudHSM
EC2 stores the public key only, and you store the private key
At boot time, the public key content is placed on the instance in an entry within ~/.ssh/authorized_keys
a root EBS volume cannot be encrypted in place if it started out unencrypted
to encrypt it: stop the instance, create a snapshot of the volume, copy the snapshot with encryption enabled, and create a new volume/AMI from the encrypted copy
Simple Email Service: by default EC2 throttles outbound traffic on port 25; use port 587 or 2587 instead
Regain access to EBS-backed instance
stop the instance, detach its root volume and attach it to another instance as a data volume, modify the authorized_keys file, move the volume back to the original instance, and restart the instance.
do not delete compromised EC2 roles
EBS KMS steps
In its GenerateDataKeyWithoutPlaintext and Decrypt requests to AWS KMS, Amazon EBS uses an encryption context with a name-value pair that identifies the volume or snapshot in the request.
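The encryption context idea can be sketched as plain data: KMS cryptographically binds these non-secret name-value pairs to the ciphertext, and the identical pairs must be supplied again on Decrypt. The `aws:ebs:id` key is the pair EBS is documented to use; the volume ID and CMK alias are placeholders:

```python
import json

# The name-value pair that identifies the volume in EBS's KMS requests.
encryption_context = {"aws:ebs:id": "vol-0123456789abcdef0"}

# The same context travels with both calls (shown here as plain request
# payloads, not live API calls):
generate_request = {
    "KeyId": "alias/my-ebs-cmk",              # hypothetical CMK alias
    "EncryptionContext": encryption_context,
}
decrypt_request = {
    "CiphertextBlob": "<encrypted data key>",
    "EncryptionContext": encryption_context,  # must match exactly
}

print(json.dumps(generate_request["EncryptionContext"]))
```

If the contexts do not match exactly, KMS rejects the Decrypt call, which prevents an encrypted data key from one volume being replayed against another.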
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD
AWS Systems Manager
SSM agent
the instance's security group must allow outbound HTTPS (443) to the Systems Manager endpoints
common use case: automating common admin tasks on thousands of instances (e.g. based on tags)
Run Command for OS level info
allows you to execute commands on EC2 instances without SSH
you can use this service with EC2, CloudFormation, Lambda etc.
create IAM Role for on-prem server communication
use Systems Manager Patch Manager to generate the compliance report and install missing patches automatically
Troubleshooting
Instance role to talk to SSM
SSM installed, latest version
Agent is running
EC2 Health API
Check amazon-ssm-agent.log
errors.log
to read a KMS-secured SecureString you need permission for the GetParameter API plus a KMS Decrypt call
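Those two permissions can be expressed as a minimal identity policy sketch; the parameter name, account ID and key ID are placeholders:

```python
import json

# Minimal identity-based policy for reading a SecureString parameter:
# ssm:GetParameter on the parameter itself, kms:Decrypt on the CMK
# that protects it. All ARNs are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": "ssm:GetParameter",
         "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/prod/db-password"},
        {"Effect": "Allow",
         "Action": "kms:Decrypt",
         "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
    ],
}

print(json.dumps(policy, indent=2))
```

Granting only one of the two statements is a classic exam trap: the GetParameter call succeeds at the SSM layer but the decryption of the SecureString value fails.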
Windows Server password recovery
AWSSupport-RunEC2RescueForWindowsTool command
AWS Lambda
Execution Role - defines which actions and resources the Lambda function has access to
Type of IAM Role
basic logging permissions are granted to Lambda by default (detailed logging with data events must be enabled explicitly)
Function Policy - defines which AWS resources are allowed to invoke your function
Environment variables are encrypted by KMS
“Enable encryption helpers” checkbox.
variables will also be individually encrypted using a CMK of your choice
Use case (triggered to)
create new CMKs and update S3 Buckets with new CMK
terminate non-compliant instances
If your function needs network access to a resource like a relational database that isn't accessible through AWS APIs or the internet, configure it to connect to your VPC
reports metrics and logs to CloudWatch
Custom authentication challenges can be implemented with Lambda triggers
AWS Hypervisor
the hypervisor automatically scrubs (sets to 0) unallocated memory and EBS storage, so there is no risk of accessing someone else's data
Amazon recommends using HVM over PV
EC2 instances are run on Xen Hypervisor
Windows EC2 instances can only be HVM (Hardware Virtual Machine) whereas Linux can be PV (paravirtualized) or HVM
in PV, the CPU exposes 4 privilege modes (rings): the host OS uses Ring 0 and the guest OS runs in Rings 1-3
AWS Health API
provides programmatic access to the AWS Health information
provides ongoing visibility into the state of your AWS resources, services, and accounts.