Arch DynamoDB P1
capacity unit
read capacity unit
If you read an item larger than 4 KB, DynamoDB needs additional read request units
ACID (Atomic, Consistent, Isolated, Durable)
durable across system failures, such as a reboot or restart
Adaptive capacity
One responsibility that DynamoDB has is to shard data across partitions. Once your tables reach a certain size, it is no longer efficient for all of the data to be placed in a single partition. Once this point is reached, DynamoDB creates a second partition, on a second server, and reshards data evenly across the two partitions. As the data volume continues to grow, DynamoDB creates new partitions and reshards the data into these partitions
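The resharding behavior described above can be sketched with a toy hash-partitioning model. This is a minimal sketch: the md5-and-modulo placement is an illustrative assumption, since DynamoDB's internal hash scheme is not public.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a partition key to a partition by hashing (illustrative only;
    DynamoDB's internal hash scheme is not public)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

def reshard(items: dict, num_partitions: int) -> dict:
    """Redistribute every item across a new number of partitions."""
    partitions = {p: {} for p in range(num_partitions)}
    for key, value in items.items():
        partitions[partition_for(key, num_partitions)][key] = value
    return partitions

items = {f"user#{i}": {"name": f"u{i}"} for i in range(100)}
one = reshard(items, 1)   # small table: everything in a single partition
two = reshard(items, 2)   # after growth: the same items spread over two partitions
assert len(one[0]) == 100
assert len(two[0]) + len(two[1]) == 100
```

Because placement is a pure function of the key, adding partitions forces the items to be redistributed, which is the resharding step the paragraph describes.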
encryption
DynamoDB encrypts all of your data transparently, and offers three modes for encryption:
– AWS owned CMK (customer master key) – DynamoDB owns the key (no additional charge). This is the default encryption type.
– AWS managed CMK – AWS KMS manages the key, which is stored in your account (KMS charges apply).
– Customer managed CMK – The key is stored in your account and is created, owned, and managed by you. You have full control over the CMK (AWS KMS charges apply).
In transit using SSL ???
RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned.
- After encryption at rest is enabled, it can't be disabled.
Capacity unit sizes
1) read capacity unit = one strongly consistent read PER SECOND, or 2 eventually consistent reads per second, for 1 item up to 4 KB in size.
DTDT: Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB. (same as E.C.)
Demo: Transactional write requests require two write capacity units to perform one write per second for items up to 1 KB.
2) Example of 1 RCU
- a strongly consistent read of one 4 KB item needs 1 RCU
- an eventually consistent read of one 4 KB item needs only 0.5 RCU (one RCU covers two eventually consistent reads per second)
- a transactional read of one 4 KB item needs 2 RCU
- items larger than 4 KB need more RCUs (size rounds up to the next 4 KB)
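The RCU arithmetic above can be captured in a small helper. This is a sketch of the documented rounding rules (round up to 4 KB blocks, then scale by consistency mode), not an official AWS formula:

```python
import math

def read_capacity_units(item_size_kb: float, consistency: str) -> float:
    """RCUs consumed per read: 1 RCU per 4 KB block for a strongly
    consistent read, half that for eventually consistent, double for
    transactional."""
    blocks = math.ceil(item_size_kb / 4)   # round up to the next 4 KB block
    factor = {"strong": 1.0, "eventual": 0.5, "transactional": 2.0}[consistency]
    return blocks * factor

assert read_capacity_units(4, "strong") == 1
assert read_capacity_units(4, "eventual") == 0.5
assert read_capacity_units(4, "transactional") == 2
assert read_capacity_units(10, "strong") == 3   # 10 KB rounds up to 3 x 4 KB
```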
Comparatively, DynamoDB is cheap for reads and expensive for writes
stream
purpose:
Captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours.
Apps can access this log and view the data items as they appeared before and after they were modified, in near-real time.
summary feature
Basic
Consists of tables, items, and attributes
A table does not need a fixed schema; there are no fixed columns and rows
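A quick illustration of the schemaless model: two items in the same table only need to share the primary key attribute, and every other attribute can differ per item. The attribute names here (`pk`, `name`, `title`, etc.) are made up for the example:

```python
# Two items in one table: only the primary key attribute ("pk") is common;
# the remaining attributes differ freely because there is no fixed schema.
item_a = {"pk": "user#1", "name": "Alice", "email": "alice@example.com"}
item_b = {"pk": "song#9", "title": "Blue", "duration_sec": 214, "tags": ["jazz"]}

table = {item["pk"]: item for item in (item_a, item_b)}
assert set(item_a) != set(item_b)                 # different attribute sets
assert all("pk" in item for item in table.values())
```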
primary key
2 types of primary key
- partition key only (simple primary key)
- partition key and sort key (composite primary key)
In a table that has only a partition key, no two items can have the same partition key value
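A minimal sketch of why partition key values are unique in a partition-key-only table: a put with an existing key replaces the old item, much like assigning into a dict keyed by the partition key. The `pk` attribute name is an assumption for this example:

```python
# In-memory stand-in for a partition-key-only table.
table = {}

def put_item(item: dict) -> None:
    table[item["pk"]] = item   # same "pk" -> overwrite, never a duplicate

put_item({"pk": "user#1", "name": "Alice"})
put_item({"pk": "user#1", "name": "Bob"})   # replaces Alice's item
assert len(table) == 1
assert table["user#1"]["name"] == "Bob"
```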
Index
2 types
- local secondary index: must be created when the table is created; same partition key as the base table, different sort key
- global secondary index: can be created at any time; can use a different partition key and sort key
Global Table
DynamoDB Global Tables is a new multi-master, cross-region replication capability of DynamoDB to support data access locality and regional fault tolerance for database workloads.
Applications can now perform reads and writes to DynamoDB in AWS regions around the world, with changes in any region propagated to every region where a table is replicated.
Global Tables help in building applications that take advantage of data locality to reduce overall latency.
Global Tables replicate data among regions within a single AWS account.
steps:
- create an empty table in SG and another in Hong Kong
- put them into the same replication group
- when you add/delete an item, the change is auto-replicated to the other tables
DynamoDB can replicate TTL delete to the replica table(s)
The replicated TTL delete to the replica table(s) consumes a replicated write capacity unit when using provisioned capacity, or replicated write when using on-demand capacity mode, in each of the replica regions and applicable charges will apply.
trigger tab (Demo)
DynamoDB triggers can be used in scenarios like sending notifications, updating an aggregate table, and connecting DynamoDB tables to other data sources
A trigger for a given table can be created by associating an AWS Lambda function to the stream (via DynamoDB Streams) on a table.
When the table is updated, the updates are published to DynamoDB Streams.
In turn, AWS Lambda reads the updates from the associated stream and executes the code in the function.
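The trigger flow above can be sketched as a hypothetical Lambda handler. The event shape (`Records`, `eventName`, `dynamodb.NewImage`) follows the documented stream record format; the notification logic and the `pk` attribute are placeholders for this example:

```python
# Hypothetical Lambda handler attached to a DynamoDB stream.
def handler(event, context=None):
    notifications = []
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            # Stream images use DynamoDB's typed JSON, e.g. {"S": "value"}.
            notifications.append(f"new item: {new_image['pk']['S']}")
    return notifications

sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"pk": {"S": "user#1"}}}},
        {"eventName": "MODIFY",
         "dynamodb": {"NewImage": {"pk": {"S": "user#2"}},
                      "OldImage": {"pk": {"S": "user#2"}}}},
    ]
}
assert handler(sample_event) == ["new item: user#1"]
```

The MODIFY record is skipped here, but its OldImage/NewImage pair is exactly the "before and after" view the stream section describes.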
on-demand
Use cases
Indexes created on a table using on-demand mode inherit the same scalability and billing model. You don’t need to specify throughput capacity settings for indexes, and you pay by their use. If you don’t have read/write traffic to a table using on-demand mode and its indexes, you only pay for the data storage.
DynamoDB on-demand is useful if your application traffic is difficult to predict and control, your workload has large spikes of short duration, or if your average table utilization is well below the peak. For example:
New applications, or applications whose database workload is complex to forecast
SaaS provider and independent software vendors (ISVs) who want the simplicity and resource isolation of deploying a table per subscriber
You can change a table from provisioned capacity to on-demand once per day. You can go from on-demand capacity to provisioned as often as you want.