A developer is receiving an intermittent ProvisionedThroughputExceededException error from an application that is based on Amazon DynamoDB. According to the Amazon CloudWatch metrics for the table, the application is not exceeding the provisioned throughput. What could be the cause of the issue?
Answer : B
DynamoDB distributes throughput across partitions based on the hash key (partition key). A hot partition, caused by heavy traffic to a single hash key, can trigger a ProvisionedThroughputExceededException even when the table's overall consumption stays below its provisioned capacity.
Why Option B is Correct:
Partition-Level Limits: Each partition has a limit of 3,000 read capacity units or 1,000 write capacity units per second.
Hot Partition: Excessive use of a single hash key can overwhelm its partition.
Why Not Other Options:
Option A: DynamoDB storage size does not affect throughput.
Option C: Provisioned scaling operations are unrelated to throughput errors.
Option D: Sort keys do not impact partition-level throughput.
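A common mitigation for a hot partition is write sharding: appending a suffix to the hot logical key so writes spread across several partitions. The sketch below is illustrative (the key names and shard count are assumptions, not part of the question); reads must fan out across every shard and merge results.

```python
import random

NUM_SHARDS = 10  # assumption: tune to the table's write volume


def write_key(base_key: str) -> str:
    """Append a random shard suffix so writes for one hot logical key
    land on different DynamoDB partitions."""
    return f"{base_key}#{random.randrange(NUM_SHARDS)}"


def read_keys(base_key: str) -> list[str]:
    """Enumerate every shard of a logical key; a reader queries all of
    them and merges the results."""
    return [f"{base_key}#{i}" for i in range(NUM_SHARDS)]
```

The trade-off is one extra fan-out on reads in exchange for write throughput that scales with the number of shards.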
A company has an application that is based on Amazon EC2. The company provides API access to the application through Amazon API Gateway and uses Amazon DynamoDB to store the application's data. A developer is investigating performance issues that are affecting the application. During peak usage, the application is overwhelmed by a large number of identical data read requests that come through the APIs. What is the MOST operationally efficient way for the developer to improve the application's performance?
Answer : A
DynamoDB Accelerator (DAX) provides a managed caching layer specifically optimized for DynamoDB, reducing latency for repeated read requests.
Why Option A is Correct:
Purpose-Built: DAX is designed for DynamoDB, enabling sub-millisecond response times for frequently accessed items.
Operational Efficiency: No need for additional application-level caching logic.
Why Not Other Options:
Option B: Auto Scaling increases capacity but does not address repetitive reads.
Option C: API Gateway caching helps reduce request processing time but does not optimize DynamoDB reads.
Option D: ElastiCache is a general-purpose cache, adding unnecessary complexity for DynamoDB use cases.
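Conceptually, DAX sits in front of the table as a read-through cache: a GetItem hit is served from memory, and only a miss reaches DynamoDB. The plain-Python sketch below illustrates that pattern only (the class and `backing_get` callback are illustrative names, not a real DAX API); with actual DAX you simply point an API-compatible client at the cluster endpoint instead of implementing any caching yourself.

```python
class ReadThroughCache:
    """Minimal sketch of the read-through pattern DAX applies to reads."""

    def __init__(self, backing_get):
        self._get = backing_get  # stands in for a DynamoDB GetItem call
        self._cache = {}

    def get(self, key):
        if key not in self._cache:            # miss: one backing read
            self._cache[key] = self._get(key)
        return self._cache[key]               # repeats served from memory
```

Because the caching is transparent, repeated identical reads during peak usage stop reaching the table at all.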
A developer is creating a microservices application that runs across multiple compute environments. The application must securely access secrets that are stored in AWS Secrets Manager with minimal network latency. The developer wants a solution that reduces the number of direct calls to Secrets Manager and simplifies secrets management across environments. Which solution will meet these requirements with the LEAST operational overhead?
Answer : B
The Secrets Manager Agent provides an out-of-the-box solution for securely caching secrets locally, reducing latency and operational overhead.
Why Option B is Correct:
Caching: The agent securely caches secrets locally, minimizing Secrets Manager API calls.
Security: Secrets remain secure during retrieval and storage.
Low Operational Overhead: Managed solution eliminates the need for custom logic.
Why Not Other Options:
Option A: Custom scripts introduce complexity and require ongoing maintenance.
Option C: Using Redis requires managing an additional service, increasing overhead.
Option D: Storing secrets in S3 lacks the fine-grained security controls of Secrets Manager.
A company uses more than 100 AWS Lambda functions to handle application services. One Lambda function is critical and must always run successfully. The company notices that occasionally, the critical Lambda function does not initiate. The company investigates the issue and discovers instances of the Lambda TooManyRequestsException: Rate Exceeded error in Amazon CloudWatch logs. Upon further review of the logs, the company notices that some of the non-critical functions run properly while the critical function fails. A developer must resolve the errors and ensure that the critical Lambda function runs successfully. Which solution will meet these requirements with the LEAST operational overhead?
Answer : A
Reserved concurrency guarantees a specific number of concurrent executions for a critical Lambda function. This ensures that the critical function always has sufficient resources to execute, even if other functions are consuming concurrency.
Why Option A:
Ensures Function Availability: Reserved concurrency isolates the critical Lambda function from other functions.
Low Overhead: Configuring reserved concurrency is straightforward and requires minimal setup.
Why Not Other Options:
Option B: Provisioned concurrency is ideal for reducing cold starts, not for managing execution limits.
Options C and D: Alarms and re-invocation mechanisms add complexity without resolving the root cause.
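Reserving concurrency is a single CLI call (or one console setting); the function name and value below are illustrative. The reserved amount is carved out of the account's concurrency pool and is always available to this function, while also capping it at that amount.

```shell
# Reserve 100 concurrent executions for the critical function
# (function name and value are illustrative)
aws lambda put-function-concurrency \
  --function-name critical-function \
  --reserved-concurrent-executions 100
```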
A developer created an AWS Lambda function to process data in an application. The function pulls large objects from an Amazon S3 bucket, processes the data, and loads the processed data into a second S3 bucket. Application users have reported slow response times. The developer checks the logs and finds that Lambda function invocations run much slower than expected. The function itself is simple and has a small deployment package. The function initializes quickly. The developer needs to improve the performance of the application. Which solution will meet this requirement with the LEAST operational overhead?
Answer : C
Configuring the Lambda function to use ephemeral storage and processing data in the /tmp directory improves performance by leveraging local storage during execution.
Why Option C is Correct:
Ephemeral Storage: Lambda provides temporary storage (up to 10 GB) in the /tmp directory for each invocation, which is faster than pulling data directly from S3 multiple times.
Performance Boost: Data can be downloaded to /tmp, processed locally, and uploaded to the destination S3 bucket, minimizing S3 network calls.
Low Overhead: This approach requires only minimal changes to the function's configuration.
Why Not Other Options:
Option A: Using Amazon EFS adds complexity and is unnecessary for this use case.
Option B: Scheduling the function does not address the root cause of slow performance.
Option D: Lambda layers improve deployment efficiency, not runtime performance for this scenario.
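The pattern can be sketched as: one download into /tmp, local processing, one upload. In the sketch below the S3 calls are left as comments (they need boto3, credentials, and real bucket names), and the uppercase transform is a stand-in for the real processing; the file handling in /tmp is what the option actually changes.

```python
import os
import tempfile


def handler(event, context=None):
    """Sketch of the /tmp pattern: one S3 read, local processing, one
    S3 write. Bucket names and the transform are placeholders."""
    src = os.path.join(tempfile.gettempdir(), "input.dat")   # /tmp on Lambda
    dst = os.path.join(tempfile.gettempdir(), "output.dat")
    # boto3.client("s3").download_file(in_bucket, key, src)  # one network read
    with open(src, "rb") as f_in, open(dst, "wb") as f_out:
        for chunk in iter(lambda: f_in.read(1 << 20), b""):  # 1 MiB chunks
            f_out.write(chunk.upper())                       # stand-in transform
    # boto3.client("s3").upload_file(dst, out_bucket, key)   # one network write
    return dst
```

Raising the function's ephemeral storage setting (up to 10 GB) is a configuration change only; no architecture changes are needed.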
A company has a serverless application that uses Amazon API Gateway and AWS Lambda functions to expose a RESTful API. The company uses a continuous integration and continuous delivery (CI/CD) workflow to deploy the application to multiple environments. The company wants to implement automated integration tests after deployment.
A developer needs to set up the necessary infrastructure and processes to automate the deployment and integration tests for the serverless application.
Answer : C
Why Option C is Correct:
AWS CodePipeline automates the entire CI/CD pipeline, including build, deploy, and test stages. This minimizes manual effort and integrates well with AWS services.
API Gateway Stages: Represent different environments, such as dev, test, and prod, allowing isolated deployment and testing.
AWS CloudFormation Templates: Ensure that the infrastructure for Lambda and API Gateway is consistent across environments.
AWS CodeBuild for Automated Tests: Validates the deployments in each stage, ensuring integration and functionality are tested post-deployment.
Why Not Other Options:
Options A and B: While AWS SAM or CloudFormation is valid for infrastructure management, these options lack the fully automated CI/CD pipeline that CodePipeline provides.
Option D: Manually invoking Lambda functions using the AWS CLI introduces operational overhead and lacks the automation provided by CodePipeline.
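The CodeBuild test stage is driven by a buildspec; a minimal sketch for the post-deployment integration tests might look like the fragment below (the test path and the `API_URL` variable, assumed to come from the CloudFormation deploy stage's outputs, are illustrative).

```yaml
# buildspec.yml for the integration-test stage (names are illustrative)
version: 0.2
phases:
  build:
    commands:
      # API_URL is passed in from the deploy stage's stack outputs
      - pytest tests/integration --api-url "$API_URL"
```

Running the same buildspec after each environment's deploy stage (dev, test, prod) keeps the tests consistent across API Gateway stages.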
A development team is creating a serverless application that uses AWS Lambda functions. The team wants to streamline a testing workflow by sharing test events across multiple developers within the same AWS account. The team wants to ensure all developers can use consistent test events without compromising security.
Answer : A
Why Option A is Correct:
Storing JSON test event files in an S3 bucket provides a centralized, cost-effective, and highly available solution.
Granular IAM policies can restrict access to specific developers or roles, ensuring security while maintaining consistency for shared test events.
This solution has minimal operational overhead and integrates easily with existing workflows.
Why Not Other Options:
Option B: Using DynamoDB and a Lambda function introduces unnecessary complexity for a relatively simple requirement. S3 provides a simpler and more cost-efficient solution.
Option C: AWS Lambda test events are not inherently shareable across developers, making this option invalid.
Option D: Using a Git repository adds operational overhead and requires developers to clone/update repositories for access, which is more cumbersome compared to S3.
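In practice the workflow is two CLI calls: pull the shared event from the bucket, then invoke the function with it. The bucket, key, and function names below are illustrative.

```shell
# Fetch the team's shared test event and invoke the function with it
# (bucket, key, and function name are illustrative)
aws s3 cp s3://team-test-events/checkout/create-order.json event.json
aws lambda invoke --function-name create-order \
  --payload file://event.json \
  --cli-binary-format raw-in-base64-out out.json
```

IAM policies on the bucket prefix control which developers can read or update each event file.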