An e-commerce company has an application that uses Amazon DynamoDB tables configured with provisioned capacity. Order data is stored in a table named Orders. The Orders table has a primary key of order-ID and a sort key of product-ID. The company configured an AWS Lambda function to receive DynamoDB streams from the Orders table and update a table named Inventory. The company has noticed that during peak sales periods, updates to the Inventory table take longer than the company can tolerate. Which solutions will resolve the slow table updates? (Select TWO.)
Answer : B, C
Key Problem:
Inventory table updates are delayed during peak sales.
The likely bottlenecks are the DynamoDB Streams-to-Lambda processing pipeline and the Inventory table's write capacity.
Analysis of Options:
Option A: Adding a GSI is unrelated to the issue. It does not address stream processing delays or capacity issues.
Option B: Increasing the batch size on the event source mapping lets each Lambda invocation process more stream records at once, reducing per-invocation overhead and improving throughput during peak load.
Option C: Increasing write capacity for the Inventory table ensures that it can handle the increased volume of updates during peak times.
Option D: Increasing read capacity for the Orders table does not directly resolve the issue since the problem is with updates to the Inventory table.
Option E: Increasing the Lambda timeout only lets individual invocations run longer; it does not increase processing throughput.
AWS Reference:
DynamoDB Streams Best Practices
Provisioned Throughput in DynamoDB
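The stream-processing tuning from Option B can be sketched as the parameters one would pass when updating the event source mapping. This is a minimal sketch: the mapping UUID and the specific numbers are illustrative assumptions, not values from the question.

```python
# Sketch: tuning the DynamoDB Streams -> Lambda event source mapping.
# The UUID and the numeric values are hypothetical examples.
tuning = {
    "UUID": "esm-uuid-placeholder",       # ID of the existing mapping
    "BatchSize": 100,                     # up to 100 stream records per invocation
    "MaximumBatchingWindowInSeconds": 5,  # flush sooner when traffic is light
    "ParallelizationFactor": 10,          # up to 10 concurrent batches per shard
}
# These parameters would be passed to
# boto3.client("lambda").update_event_source_mapping(**tuning).
```

A larger batch size amortizes invocation overhead, while a higher parallelization factor adds concurrency per shard without reordering records that share a partition key.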
A company is building a serverless application to process orders from an e-commerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.
Which solution will meet these requirements?
Answer : B
Key Requirements:
Serverless architecture.
Handle traffic bursts with high availability.
Process orders asynchronously in the order they are received.
Analysis of Options:
Option A: Amazon SNS standard topics deliver messages to subscribers but do not guarantee ordering, making them unsuitable for the FIFO (First In, First Out) requirement.
Option B: Amazon SQS FIFO queues support ordering and ensure messages are delivered exactly once. AWS Lambda functions can be triggered by SQS to process messages asynchronously and efficiently. This satisfies all requirements.
Option C: Amazon SQS standard queues do not guarantee message order and have 'at-least-once' delivery, making them unsuitable for the FIFO requirement.
Option D: Similar to Option A, SNS does not ensure message ordering, and using AWS Batch adds complexity without directly addressing the requirements.
AWS Reference:
AWS Lambda and SQS Integration
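The ordering and exactly-once properties of Option B come from two required FIFO parameters on each send. The sketch below builds those parameters; the queue URL and order payload are hypothetical.

```python
import hashlib
import json

# Sketch: enqueueing an order on an SQS FIFO queue.
# MessageGroupId preserves ordering within a group; MessageDeduplicationId
# gives exactly-once enqueue semantics within the deduplication window.
order = {"order_id": "o-123", "customer_id": "c-42", "total": 59.99}
body = json.dumps(order, sort_keys=True)

params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "MessageBody": body,
    "MessageGroupId": order["customer_id"],  # orders per customer stay in order
    "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
}
# These parameters would be passed to boto3.client("sqs").send_message(**params).
```

Grouping by customer ID (an assumption here) lets unrelated customers' orders be processed in parallel while each customer's orders remain strictly ordered.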
A media company has an ecommerce website to sell music. Each music file is stored as an MP3 file. Premium users of the website purchase music files and download the files. The company wants to store music files on AWS. The company wants to provide access only to the premium users. The company wants to use the same URL for all premium users.
Which solution will meet these requirements?
Answer : C
CloudFront Signed Cookies:
CloudFront signed cookies allow the company to provide access to premium users while maintaining a single, consistent URL.
This approach is simpler and more scalable than managing presigned URLs for each file.
Incorrect Options Analysis:
Option A: Using EC2 and EBS increases complexity and cost.
Option B: Managing presigned URLs for each file is not scalable.
Option D: CloudFront signed URLs must be generated per object (and typically per user), so they cannot provide the single shared URL the company wants.
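The single-URL property of signed cookies comes from the custom policy they carry, which can grant access to a whole URL pattern rather than one object. A minimal sketch of that policy, assuming a hypothetical domain (the actual cookies also require signing with the distribution's private key, omitted here):

```python
import json
import time

# Sketch: the custom policy embedded in CloudFront signed cookies.
expires = int(time.time()) + 3600  # grant access for one hour

policy = {
    "Statement": [
        {
            # One wildcard resource covers every track for every premium user:
            "Resource": "https://music.example.com/tracks/*",
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }
    ]
}
policy_json = json.dumps(policy, separators=(",", ":"))
# After a premium login, the application sets three cookies:
# CloudFront-Policy (base64 of policy_json), CloudFront-Signature,
# and CloudFront-Key-Pair-Id.
```

Because the grant lives in the cookies rather than the URL, every premium user downloads from the same URLs.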
A finance company has a web application that generates credit reports for customers. The company hosts the frontend of the web application on a fleet of Amazon EC2 instances that is associated with an Application Load Balancer (ALB). The application generates reports by running queries on an Amazon RDS for SQL Server database.
The company recently discovered that malicious traffic from around the world is abusing the application by submitting unnecessary requests. The malicious traffic is consuming significant compute resources. The company needs to address the malicious traffic.
Which solution will meet this requirement?
Answer : B
AWS WAF Bot Control:
AWS WAF Bot Control provides managed rules to detect and block malicious bots without requiring manual IP management. This is a more effective solution compared to blocking specific IP addresses.
Incorrect Options Analysis:
Options A and D: Blocking specific IP addresses is reactive and does not scale; globally distributed malicious traffic constantly shifts source IPs.
Option C: AWS Shield protects against network- and transport-layer DDoS attacks but does not filter application-layer request abuse.
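In practice, Bot Control is attached as a managed rule group inside a Web ACL associated with the ALB. A sketch of that rule entry, with illustrative names and priorities:

```python
# Sketch: the Bot Control managed rule group as one rule in a WAFv2 Web ACL.
# The rule name, priority, and metric name are hypothetical.
bot_control_rule = {
    "Name": "BotControl",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
        }
    },
    "OverrideAction": {"None": {}},  # keep the rule group's own actions
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BotControl",
    },
}
# This dict would go in the Rules list passed to
# boto3.client("wafv2").create_web_acl / update_web_acl.
```

The managed rule set classifies and blocks bot traffic automatically, so no per-IP rules need to be maintained.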
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
Answer : A
Amazon S3 with Amazon CloudFront:
Amazon S3 provides highly scalable and durable storage for petabytes of data.
Amazon CloudFront, as a content delivery network (CDN), caches frequently accessed data at edge locations to reduce latency. This combination is ideal for storing and accessing engineering drawings.
Incorrect Options Analysis:
Option B: Amazon S3 Glacier Deep Archive is for long-term archival storage, not frequent access.
Option C: Amazon EBS volumes attach to individual EC2 instances and top out at 64 TiB each, making them unsuitable for petabyte-scale, multi-user access; they also provide no built-in caching.
Option D: AWS Storage Gateway is for hybrid cloud storage, which is unnecessary for a fully cloud-based architecture.
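The S3-plus-CloudFront pairing reduces to a distribution whose origin is the S3 bucket and whose default behavior enables edge caching. A minimal sketch, with a hypothetical bucket name (the CachePolicyId shown is CloudFront's managed "CachingOptimized" policy):

```python
# Sketch: the core of a CloudFront distribution config fronting an S3
# bucket of engineering drawings. Names are illustrative.
distribution_config = {
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "drawings-s3-origin",
                "DomainName": "engineering-drawings.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "drawings-s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Edge caching is what cuts load time for repeatedly viewed drawings:
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
}
```

S3 absorbs the petabyte-scale storage requirement; CloudFront's edge caches absorb the repeat-read latency requirement.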
A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application's ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.
Which solution will meet these requirements?
Answer : A
Provisioned Concurrency:
AWS Lambda's provisioned concurrency ensures that a predefined number of execution environments are pre-warmed and ready to handle requests, reducing latency during traffic spikes.
This solution optimizes costs during low-traffic periods when combined with AWS Application Auto Scaling to dynamically adjust the provisioned concurrency based on demand.
Incorrect Options Analysis:
Option B: Switching to EC2 would increase complexity and cost for a serverless application.
Option C: A fixed concurrency level may result in over-provisioning during low-traffic periods, leading to higher costs.
Option D: Periodically warming functions does not effectively handle sudden spikes in traffic.
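The cost-optimization half of Option A comes from letting Application Auto Scaling adjust provisioned concurrency with demand. A sketch of the two configurations involved, assuming a hypothetical function name and alias:

```python
# Sketch: scaling Lambda provisioned concurrency with Application Auto
# Scaling. Function name, alias, and capacity bounds are hypothetical.
scalable_target = {
    "ServiceNamespace": "lambda",
    "ResourceId": "function:order-processor:live",  # function:NAME:ALIAS
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 1,    # keep cost low when traffic is quiet
    "MaxCapacity": 100,  # absorb traffic spikes
}
scaling_policy = {
    "PolicyName": "pc-utilization",
    "ServiceNamespace": "lambda",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 0.7,  # aim for ~70% of pre-warmed environments in use
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
}
# Passed to boto3.client("application-autoscaling")
# .register_scalable_target(...) and .put_scaling_policy(...).
```

Target tracking scales the pre-warmed environment count up before spikes saturate it and back down afterward, which is what keeps low-traffic cost in check.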
A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.
Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.
The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the applications use is below the provisioned capacity.
How should the developer resolve this issue?
Answer : C
Partition Key Issue:
Using 'service name' as the partition key yields only 10 distinct key values for 15 shards, so some shards receive no traffic while shards serving high-volume services become hot. Per-shard throughput limits then cause throttling even though aggregate usage is below the provisioned capacity.
Changing the partition key to 'creation timestamp' produces high-cardinality values that spread records evenly across all shards.
Incorrect Options Analysis:
Option A: On-demand capacity mode removes capacity planning but costs more and does not fix the root cause; per-shard limits still throttle hot shards.
Option B: Adding more shards does not solve the issue if the partition key still creates hot shards.
Option D: Using separate streams increases complexity and is unnecessary.
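The hot-shard effect can be seen with a minimal simulation of Kinesis-style routing, which maps the MD5 hash of the partition key onto the shard hash-key ranges. This is an approximation for illustration, not the service's exact algorithm:

```python
import hashlib

def shard_for(partition_key: str, shard_count: int) -> int:
    """Approximate Kinesis routing: MD5 of the key mapped onto equal-width
    shard hash-key ranges."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * shard_count // 2**128

# 10 service names can land on at most 10 of the 15 shards, and hash
# collisions usually leave even fewer in use:
services = [f"service-{i}" for i in range(10)]
used = {shard_for(s, 15) for s in services}
print(f"shards used by 10 service-name keys: {len(used)} of 15")

# High-cardinality keys (e.g. creation timestamps) spread across shards:
timestamps = [f"2024-06-01T12:00:00.{us:06d}" for us in range(1000)]
used_ts = {shard_for(t, 15) for t in timestamps}
print(f"shards used by 1000 timestamp keys:  {len(used_ts)} of 15")
```

Whatever capacity the idle shards hold is wasted, which is why the stream throttles while its aggregate metrics stay under the provisioned limit.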