DynamoDB - Best Practices

Certain practices optimize code, prevent errors, and minimize throughput costs when working with DynamoDB.

The following are some of the most important and commonly used best practices in DynamoDB.

Tables

Because DynamoDB distributes each table's data across partitions, the best approaches spread read/write activity evenly across all table items.

Aim for uniform data access across table items. Optimal throughput usage rests on primary key selection and item workload patterns. Spread the workload evenly across partition key values: avoid a small number of heavily used partition key values in favor of a large number of distinct partition key values accessed fairly uniformly.
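
For illustration, the following Python (boto3) sketch writes to a hypothetical Orders table whose partition key, CustomerID, has many distinct values; a low-cardinality key such as an order status would concentrate traffic on a few partition key values.

import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("Orders")  # hypothetical table name

# Good: a high-cardinality partition key (one value per customer)
# spreads reads and writes across many partitions.
orders.put_item(Item={
    "CustomerID": "cust-48213",  # partition key: many distinct values
    "OrderDate": "2023-05-01",   # sort key
    "Total": 129,
})

# Poor: keying on something like order status ("OPEN"/"CLOSED") would
# funnel nearly all traffic to a handful of partition key values.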

Gain an understanding of partition behavior, and estimate the number of partitions DynamoDB automatically allocates for a table.
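
A commonly cited heuristic from earlier AWS documentation estimates one partition per 10 GB of data, per 3,000 read capacity units, and per 1,000 write capacity units, taking whichever count is larger; treat the exact constants as assumptions, since DynamoDB manages partitions internally. A sketch in Python:

import math

GB_PER_PARTITION = 10      # heuristic constants from older AWS guidance;
RCU_PER_PARTITION = 3000   # treat them as assumptions, not a contract
WCU_PER_PARTITION = 1000

def estimate_partitions(table_size_gb, rcu, wcu):
    by_size = math.ceil(table_size_gb / GB_PER_PARTITION)
    by_throughput = math.ceil(rcu / RCU_PER_PARTITION + wcu / WCU_PER_PARTITION)
    return max(by_size, by_throughput)

# Example: a 25 GB table provisioned at 6,000 RCU and 1,000 WCU
# needs max(3, 3) = 3 partitions under this heuristic.
print(estimate_partitions(25, 6000, 1000))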

DynamoDB offers burst capacity, which retains a portion of unused provisioned throughput for short “bursts” of activity. Avoid designing around this feature: bursts consume the reserved throughput quickly, and it is not a guaranteed resource.

On bulk uploads, distribute data across partition key values to achieve better performance. Rather than loading all items for one key and then moving to the next, interleave keys so the write load reaches many partitions concurrently, as the sketch below shows.
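
A sketch of interleaved loading (Python/boto3, hypothetical table and attribute names):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table and keys

# Readings grouped by sensor; SensorID is the partition key.
readings_by_sensor = {
    "sensor-a": [{"Timestamp": i, "Value": i * 2} for i in range(100)],
    "sensor-b": [{"Timestamp": i, "Value": i * 3} for i in range(100)],
    "sensor-c": [{"Timestamp": i, "Value": i * 5} for i in range(100)],
}

# Interleave the keys: write one item per sensor per round, so the load
# touches many partition key values at once instead of saturating one
# partition before moving on to the next.
with table.batch_writer() as batch:
    for round_of_items in zip(*readings_by_sensor.values()):
        for sensor, item in zip(readings_by_sensor, round_of_items):
            batch.put_item(Item={"SensorID": sensor, **item})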

Cache frequently used items to offload read activity to the cache rather than the database.
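
Amazon's DAX service and external caches such as ElastiCache serve this purpose; the sketch below shows the idea with a minimal in-process TTL cache (table name, key name, and TTL are assumptions):

import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Products")  # hypothetical table

_cache = {}       # product_id -> (expiry_time, item)
TTL_SECONDS = 60  # assumption: how long a cached item stays fresh

def get_product(product_id):
    entry = _cache.get(product_id)
    if entry and entry[0] > time.time():
        return entry[1]  # cache hit: no read capacity consumed
    resp = table.get_item(Key={"ProductID": product_id})
    item = resp.get("Item")
    _cache[product_id] = (time.time() + TTL_SECONDS, item)
    return item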

Items

Throttling, performance, size, and access costs remain the biggest concerns with items. Opt for one-to-many table structures: remove rarely used attributes, and divide tables so each matches a distinct access pattern. This simple approach can improve efficiency dramatically.
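
As a sketch of the idea (hypothetical table and attribute names), suppose a Users item is read on every request but a large profile text is needed only occasionally; splitting it into a second table keeps the hot reads small and cheap:

import boto3

dynamodb = boto3.resource("dynamodb")

# Hot table: small items, read constantly.
dynamodb.Table("Users").put_item(Item={
    "UserID": "u-17", "Name": "Ana", "Plan": "pro",
})

# Cold table: the large, rarely read attribute lives here.
dynamodb.Table("UserProfiles").put_item(Item={
    "UserID": "u-17", "ProfileText": "long biography text ...",
})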

Compress large values prior to storing them, using standard compression tools such as GZIP. For large attribute values, use alternate storage such as Amazon S3: store the object in S3 and keep an identifier for it in the item.
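
The following Python sketch combines both techniques: it gzips the value, keeps it in the item when small, and otherwise offloads it to S3 and stores only the key (bucket, table, and threshold are assumptions):

import gzip
import json
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
table = dynamodb.Table("Documents")  # hypothetical names throughout
BUCKET = "my-large-attributes"

def store_document(doc_id, payload):
    body = gzip.compress(json.dumps(payload).encode("utf-8"))
    if len(body) < 100_000:
        # Small enough: keep the compressed bytes in the item itself.
        table.put_item(Item={"DocID": doc_id, "Body": body})
    else:
        # Too large: offload to S3 and store only a pointer in the item.
        key = f"documents/{doc_id}.json.gz"
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        table.put_item(Item={"DocID": doc_id, "S3Key": key})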

Distribute large attributes across several items by splitting them into virtual item pieces. This provides a workaround for the 400 KB item size limit.
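
A minimal sketch of the chunking approach, assuming a hypothetical Blobs table keyed on BlobID (partition key) and Part (numeric sort key):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Blobs")  # hypothetical table
CHUNK = 350_000  # bytes per piece, safely under the 400 KB item limit

def put_large_value(blob_id, data: bytes):
    pieces = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with table.batch_writer() as batch:
        for n, piece in enumerate(pieces):
            batch.put_item(Item={"BlobID": blob_id, "Part": n, "Data": piece})
        # A small header item records how many pieces to reassemble.
        batch.put_item(Item={"BlobID": blob_id, "Part": -1, "Count": len(pieces)})

Reading the value back is then a single Query on the partition key, reassembling the pieces in sort-key order.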

Queries and Scans

Queries and scans mainly suffer from throughput consumption. Avoid sudden bursts of read activity, which typically result from large scans or from switching to strongly consistent reads (which consume twice the read capacity of eventually consistent reads). Run parallel scans in a low-impact way, e.g., as a background job with rate limiting, and only employ them on large tables in situations where you do not already fully utilize throughput or where sequential scan operations perform poorly.
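
A sketch of a parallel scan (Python/boto3, hypothetical table name): each worker scans one segment with a small page size, which is also where a background job would insert rate limiting:

import threading
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Events")  # hypothetical large table
SEGMENTS = 4                      # one worker per segment

def scan_segment(segment, results):
    kwargs = {"Segment": segment, "TotalSegments": SEGMENTS, "Limit": 100}
    while True:
        page = table.scan(**kwargs)
        results.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
        # A real background job would sleep here to cap consumed throughput.

items = []
threads = [threading.Thread(target=scan_segment, args=(s, items))
           for s in range(SEGMENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()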

Local Secondary Indices

Indexes present issues in the areas of throughput cost, storage cost, and query efficiency. Avoid indexing attributes unless you query them often. Choose projections wisely, because they bloat indexes: project only the attributes you use heavily.
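
The sketch below creates a hypothetical GameScores table with a local secondary index that projects only one non-key attribute; all names are illustrative:

import boto3

client = boto3.client("dynamodb")

client.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "UserID", "AttributeType": "S"},
        {"AttributeName": "GameDate", "AttributeType": "S"},
        {"AttributeName": "Score", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "UserID", "KeyType": "HASH"},
        {"AttributeName": "GameDate", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ScoreIndex",
        "KeySchema": [
            {"AttributeName": "UserID", "KeyType": "HASH"},
            {"AttributeName": "Score", "KeyType": "RANGE"},
        ],
        # Project only what queries read; KEYS_ONLY or a short INCLUDE
        # list keeps the index small.
        "Projection": {"ProjectionType": "INCLUDE",
                       "NonKeyAttributes": ["GameTitle"]},
    }],
    BillingMode="PAY_PER_REQUEST",
)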

Utilize sparse indexes, meaning indexes whose keys do not appear in all table items. Because DynamoDB only indexes items that contain the index key attributes, sparse indexes benefit queries on attributes present in few table items.

Pay attention to the expansion of item collections (all table items sharing a partition key value, plus their local secondary index entries). Add/update operations cause both tables and indexes to grow, and 10 GB remains the size limit per item collection.

Global Secondary Indices

Global secondary indexes raise the same throughput, storage-cost, and query-efficiency concerns. Opt for key attributes that spread the workload: just as with table read/write spreading, index workload uniformity depends on choosing attributes whose values distribute data evenly. Also utilize sparse indexes, as the sketch below shows.
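
The sketch below illustrates both points with a hypothetical sparse global secondary index, OpenOrdersByCustomer, keyed on CustomerID (many distinct values) and an OpenOrderDate attribute that exists only on open orders:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table

# OpenOrderDate exists only while an order is open, so only open orders
# appear in the sparse index keyed on (CustomerID, OpenOrderDate).
table.put_item(Item={
    "OrderID": "o-991",
    "CustomerID": "cust-48213",
    "OpenOrderDate": "2023-05-01",  # removed when the order ships
})

# List one customer's open orders by querying the small, sparse index.
resp = table.query(
    IndexName="OpenOrdersByCustomer",  # hypothetical GSI
    KeyConditionExpression=Key("CustomerID").eq("cust-48213"),
)
print(resp["Items"])

Because only open orders carry OpenOrderDate, the index stays small, and querying it is far cheaper than filtering a full table scan.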

Exploit global secondary indices for fast searches in queries that request only a modest amount of data.
