A development team is creating a serverless application that uses AWS Lambda functions. The team wants to streamline a testing workflow by sharing test events across multiple developers within the same AWS account. The team wants to ensure all developers can use consistent test events without compromising security.
Answer : A
Comprehensive and Detailed Step-by-Step
Option A: Use Amazon S3 for Shared Test Events:
Storing JSON test event files in an S3 bucket provides a centralized, cost-effective, and highly available solution.
Granular IAM policies can restrict access to specific developers or roles, ensuring security while maintaining consistency for shared test events.
This solution has minimal operational overhead and integrates easily with existing workflows.
Why Other Options Are Incorrect:
Option B: Using DynamoDB and a Lambda function introduces unnecessary complexity for a relatively simple requirement. S3 provides a simpler and more cost-efficient solution.
Option C: Lambda console test events are private to each developer by default. Sharing them requires extra configuration through the EventBridge schema registry, so this option does not meet the requirement as described.
Option D: Using a Git repository adds operational overhead and requires developers to clone/update repositories for access, which is more cumbersome compared to S3.
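To make Option A concrete, the granular IAM access described above can be sketched as a policy that scopes developers to a single test-event prefix. This is a minimal sketch; the bucket name and key prefix are assumptions, not values from the question.

```python
import json

# Hypothetical names -- substitute your own bucket and prefix.
BUCKET = "team-lambda-test-events"
PREFIX = "shared-test-events/"

def build_developer_policy(bucket: str, prefix: str) -> dict:
    """Build an IAM policy that lets developers list, read, and write only
    the shared test-event JSON objects, and nothing else in the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListSharedTestEvents",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                # Restrict listing to the shared test-event prefix only.
                "Condition": {"StringLike": {"s3:prefix": f"{prefix}*"}},
            },
            {
                "Sid": "ReadWriteSharedTestEvents",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
            },
        ],
    }

policy = build_developer_policy(BUCKET, PREFIX)
print(json.dumps(policy, indent=2))
```

Attaching this policy to a shared developer role gives every team member the same view of the event files while keeping the rest of the bucket locked down.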
A developer used the AWS SDK to create an application that aggregates and produces log records for 10 services. The application delivers data to an Amazon Kinesis Data Streams stream.
Each record contains a log message with a service name, creation timestamp, and other log information. The stream has 15 shards in provisioned capacity mode. The stream uses service name as the partition key.
The developer notices that when all the services are producing logs, ProvisionedThroughputExceededException errors occur during PutRecord requests. The stream metrics show that the write capacity the application uses is below the provisioned capacity.
Answer : C
Comprehensive and Detailed Step-by-Step
Issue Analysis:
The stream uses service name as the partition key. This can cause 'hot partition' issues when a few service names generate significantly more logs compared to others, causing uneven distribution of data across shards.
Metrics show that the write capacity used is below provisioned capacity, which confirms that the throughput errors are due to shard-level limits and not overall capacity.
Option C: Change Partition Key to Creation Timestamp:
By changing the partition key to the creation timestamp (or a composite key that includes the timestamp), records are distributed far more evenly across shards instead of concentrating on a few hot ones.
This relieves the overloaded shards and mitigates the ProvisionedThroughputExceededException errors.
Why Other Options Are Incorrect:
Option A: Switching to on-demand capacity mode might temporarily alleviate the issue, but the root cause (hot partitioning) remains unresolved.
Option B: Adding shards increases capacity but does not fix the skewed data distribution caused by using the service name as the partition key.
Option D: Creating separate streams for each service adds unnecessary complexity and does not scale well as the number of services grows.
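The hot-partition effect above can be illustrated with a small simulation. Kinesis maps the MD5 hash of each partition key into the shards' hash-key ranges; the modulo mapping below is a simplification of that, and the service names and timestamps are made up for the demonstration.

```python
import hashlib

NUM_SHARDS = 15
SERVICES = [f"service-{i}" for i in range(10)]  # the 10 log-producing services

def shard_for(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    # Approximation of Kinesis routing: MD5 the key, map into a shard.
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % num_shards

# Service name alone: at most 10 distinct keys, so at most 10 of 15 shards
# ever receive traffic -- the other shards' capacity is wasted.
service_only = {shard_for(svc) for svc in SERVICES}

# Composite key (service name + timestamp): many distinct keys, so the same
# traffic spreads across far more shards.
composite = {shard_for(f"{svc}#{ts}") for svc in SERVICES for ts in range(100)}

print(f"shards used, service-name key: {len(service_only)}")
print(f"shards used, composite key:    {len(composite)}")
```

With only 10 distinct partition keys, at least 5 of the 15 shards can never receive writes, which is exactly why per-shard throttling appears while aggregate usage stays below the provisioned total.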
Reference: Best Practices for Kinesis Data Streams Partition Key Design
A developer needs to set up an API to provide access to an application and its resources. The developer has a TLS certificate. The developer must have the ability to change the default base URL of the API to a custom domain name. The API users are distributed globally. The solution must minimize API latency.
Answer : C
Comprehensive and Detailed Step-by-Step
Option C: Edge-Optimized API Gateway with Custom Domain Name:
Edge-Optimized API Gateway: This endpoint type automatically leverages the Amazon CloudFront global distribution network, minimizing latency for API users distributed globally.
Custom Domain Name: API Gateway supports custom domain names for APIs. Importing the TLS certificate into AWS Certificate Manager (ACM) and associating it with the custom domain name ensures secure connections.
Disabling the Default Endpoint: Prevents direct access via the default API Gateway URL, enforcing the use of the custom domain name.
Why Other Options Are Incorrect:
Option A: While CloudFront can distribute API requests globally, API Gateway with edge-optimized endpoints already provides this functionality natively without requiring Lambda@Edge.
Option B: Private endpoint types are used for internal access via VPC, which does not meet the global distribution and low-latency requirement.
Option D: CloudFront Functions are not needed because API Gateway's edge-optimized endpoints handle global distribution efficiently.
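The setup in Option C can be sketched as the parameters for the API Gateway custom-domain call. This is a sketch only: the domain name, account ID, and certificate ARN are placeholders, and for an edge-optimized domain the ACM certificate must reside in us-east-1.

```python
import json

# Parameters for apigateway create_domain_name (placeholders throughout).
create_domain_params = {
    "domainName": "api.example.com",  # hypothetical custom domain
    # Edge-optimized domains require the ACM certificate in us-east-1.
    "certificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abc",
    "endpointConfiguration": {"types": ["EDGE"]},  # CloudFront-backed endpoint
}

# With boto3 this would be passed to:
#   boto3.client("apigateway").create_domain_name(**create_domain_params)
# followed by create_base_path_mapping to attach the API stage, and an
# update_rest_api patch operation setting /disableExecuteApiEndpoint to
# "true" to turn off the default execute-api URL.
print(json.dumps(create_domain_params, indent=2))
```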
A company runs an AWS CodeBuild project on medium-sized Amazon EC2 instances. The company wants to cost optimize the project and reduce the provisioning time.
Answer : D
Comprehensive and Detailed Step-by-Step
Option D: Set up Amazon S3 Caching for CodeBuild:
CodeBuild supports S3 caching to store intermediate build artifacts and dependencies. This reduces the time required to download dependencies during subsequent builds, effectively lowering costs and improving build performance.
By using S3 caching, developers can optimize costs without changing the compute type or adding complexity.
Why Other Options Are Incorrect:
Option A: Reserved capacity fleets keep build hosts pre-provisioned, which reduces provisioning time but incurs charges even while no builds are running, working against the cost-optimization goal.
Option B: AWS Lambda compute for CodeBuild starts quickly, but it supports only a limited set of runtimes and build features (for example, no Docker builds and no caching), so it is not a general fit for an existing EC2-based project.
Option C: CodeBuild already operates on an on-demand basis, so this does not address the need for optimization or reduced provisioning time.
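Enabling the S3 cache from Option D amounts to a small change on the existing project. The project name and cache bucket below are placeholders, not values from the question.

```python
# Parameters for codebuild update_project (placeholder names).
update_params = {
    "name": "my-build-project",  # hypothetical CodeBuild project name
    "cache": {
        "type": "S3",
        # Bucket (and optional prefix) where cached artifacts are stored.
        "location": "my-cache-bucket/codebuild-cache",
    },
}
# boto3.client("codebuild").update_project(**update_params)

# The buildspec then declares which paths to cache, e.g.:
#   cache:
#     paths:
#       - '/root/.m2/**/*'
print(update_params["cache"])
```

On subsequent builds, CodeBuild restores the cached paths from S3 instead of re-downloading dependencies, which is where the time and cost savings come from.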
A company has many microservices that are comprised of AWS Lambda functions. Multiple teams within the company split ownership of the microservices.
An application reads configuration values from environment variables that are contained in the Lambda functions. During a security audit, the company discovers that some of the environment variables contain sensitive information.
The company's security policy requires each team to have full control over the rotation of AWS KMS keys that the team uses for its respective microservices.
Answer : B
Comprehensive and Detailed Step-by-Step
Customer Managed Keys (CMK) for Granular Control (Option B):
Customer-managed KMS keys are required to meet the security policy requirement of team-specific control over KMS key rotation. Each team can manage the lifecycle of its own key.
The kms:Decrypt permission allows the Lambda function execution roles to decrypt the environment variables during runtime.
This solution adheres to the principle of least privilege and satisfies the need for team-specific key control.
Why Other Options Are Incorrect:
Option A: AWS-managed keys cannot provide team-specific control or support the custom rotation policy required by the teams.
Option C: Adding kms:CreateGrant and kms:Encrypt permissions to Lambda roles is unnecessary for this scenario. The key usage is limited to decryption at runtime.
Option D: AWS-managed keys still lack team-specific control, and adding kms:CreateGrant and kms:Encrypt is redundant.
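The least-privilege grant described in Option B reduces to a single statement on each team's Lambda execution role. The key ARN below is a placeholder for the team's customer managed key.

```python
# Hypothetical ARN of the team's customer managed KMS key.
team_key_arn = (
    "arn:aws:kms:us-east-1:123456789012:key/"
    "11111111-2222-3333-4444-555555555555"
)

# Statement attached to the Lambda execution role: the function may only
# decrypt, and only with this team's key -- no encrypt or grant permissions.
decrypt_statement = {
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": team_key_arn,
}
print(decrypt_statement)
```

Because each team owns its key, each team also controls that key's rotation schedule independently, satisfying the security policy.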
A bookstore has an ecommerce website that stores order information in an Amazon DynamoDB table named BookOrders. The DynamoDB table contains approximately one million records.
The table uses OrderID as a partition key. There are no other indexes.
A developer wants to build a new reporting feature to retrieve all records from the table for a specified customer, based on a CustomerID property.
Answer : A
Comprehensive and Detailed Step-by-Step
The requirement is to query records by CustomerID, which is not the current partition key (OrderID). To achieve this efficiently:
Option A: Create a GSI with CustomerID as the Partition Key:
A Global Secondary Index (GSI) allows developers to create a different partition key and optional sort key for querying the data.
By creating a GSI with CustomerID as the partition key, the developer can query the table efficiently using CustomerID as the primary lookup key.
This avoids scanning the entire table and matches the requirement.
Why Other Options Are Incorrect:
Option B: Using CustomerID as a sort key for the GSI and performing a scan operation is inefficient. A Scan reads every item in the index, while a Query can target a single partition key value directly.
Options C and D: Local Secondary Indexes (LSI) must share the base table's partition key and can only be created at table creation time. Since OrderID is the base table's partition key, using CustomerID as the partition key or sort key in an LSI is not valid.
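The GSI from Option A can be sketched as the two API calls involved: one to add the index, one to query it. The index name, attribute type, and sample CustomerID are assumptions; for a provisioned-mode table, the Create block would also need a ProvisionedThroughput element.

```python
# Parameters for dynamodb update_table: add a GSI keyed on CustomerID.
create_gsi_params = {
    "TableName": "BookOrders",
    "AttributeDefinitions": [
        {"AttributeName": "CustomerID", "AttributeType": "S"}
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "CustomerID-index",  # hypothetical index name
                "KeySchema": [
                    {"AttributeName": "CustomerID", "KeyType": "HASH"}
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
}
# boto3.client("dynamodb").update_table(**create_gsi_params)

# Once the index is active, the reporting feature queries it directly,
# touching only that customer's items instead of scanning a million records.
query_params = {
    "TableName": "BookOrders",
    "IndexName": "CustomerID-index",
    "KeyConditionExpression": "CustomerID = :cid",
    "ExpressionAttributeValues": {":cid": {"S": "C-1001"}},  # hypothetical ID
}
# boto3.client("dynamodb").query(**query_params)
```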
A developer is creating an ecommerce workflow in an AWS Step Functions state machine that includes an HTTP Task state. The task passes shipping information and order details to an endpoint.
The developer needs to test the workflow to confirm that the HTTP headers and body are correct and that the responses meet expectations.
Answer : A
Comprehensive and Detailed Step-by-Step
To confirm that the HTTP headers, body, and responses meet expectations, you need to test the specific HTTP Task state in isolation and inspect the details.
Option A: TestState API with TRACE:
The TestState API allows developers to test individual states in a state machine without executing the entire workflow.
Setting the inspection level to TRACE provides detailed information about the HTTP request and response, including headers, body, and status codes.
This option provides the precise and granular testing required to verify the HTTP Task functionality.
Why Other Options Are Incorrect:
Option B: The DEBUG inspection level provides less detailed information than TRACE and focuses on general debugging, not a detailed view of HTTP interactions.
Option C: Step Functions does not have a 'data flow simulator' to test individual tasks; this option is not valid.
Option D: Changing the state machine's log level to ALL increases logging granularity for the entire state machine but does not allow isolated testing of a specific HTTP Task.
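The TestState call from Option A can be sketched as follows. The endpoint URL, EventBridge connection ARN, role ARN, and input payload are placeholders; only the `arn:aws:states:::http:invoke` resource and the TRACE inspection level come from the scenario.

```python
import json

# A single HTTP Task state definition to test in isolation.
http_task_definition = {
    "Type": "Task",
    "Resource": "arn:aws:states:::http:invoke",
    "Parameters": {
        "ApiEndpoint": "https://example.com/orders",  # hypothetical endpoint
        "Method": "POST",
        "Authentication": {
            # Hypothetical EventBridge connection holding the credentials.
            "ConnectionArn": "arn:aws:events:us-east-1:123456789012:connection/example/abc"
        },
    },
    "End": True,
}

test_state_params = {
    "definition": json.dumps(http_task_definition),
    "roleArn": "arn:aws:iam::123456789012:role/StepFunctionsHttpRole",  # hypothetical
    "input": json.dumps({"orderId": "1234", "shipping": {"city": "Seattle"}}),
    # TRACE returns the raw HTTP request and response, including headers
    # and body, which is what the developer needs to verify.
    "inspectionLevel": "TRACE",
}
# boto3.client("stepfunctions").test_state(**test_state_params)
```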