Amazon DOP-C02 AWS Certified DevOps Engineer - Professional Exam Practice Test

Question 1

A DevOps engineer is setting up an Amazon Elastic Container Service (Amazon ECS) blue/green deployment for an application by using AWS CodeDeploy and AWS CloudFormation. During the deployment window, the application must be highly available and CodeDeploy must shift 10% of traffic to a new version of the application every minute until all traffic is shifted.

Which configuration should the DevOps engineer add in the CloudFormation template to meet these requirements?



Answer : B


This corresponds to Option B: Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
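One way to verify what this predefined configuration does is to read it back with the SDK; the traffic routing section shows a time-based linear shift of 10 percent per minute. A minimal boto3 sketch (for illustration only; the printed fields reflect the structure that the GetDeploymentConfig API returns):

import boto3

codedeploy = boto3.client('codedeploy')

# Predefined ECS deployment configuration referenced in Option B
config = codedeploy.get_deployment_config(
    deploymentConfigName='CodeDeployDefault.ECSLinear10PercentEvery1Minutes'
)['deploymentConfigInfo']

# TimeBasedLinear routing: shift 10 percent of traffic every 1 minute
print(config['computePlatform'])
print(config['trafficRoutingConfig'])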

Question 2

A company is using AWS CodeDeploy to automate software deployment. The deployment must meet these requirements:

* A number of instances must be available to serve traffic during the deployment. Traffic must be balanced across those instances, and the instances must automatically heal in the event of failure.

* A new fleet of instances must be launched for deploying a new revision automatically, with no manual provisioning.

* Traffic must be rerouted to the new environment, half of the new instances at a time. The deployment should succeed if traffic is rerouted to at least half of the instances; otherwise, it should fail.

* Before routing traffic to the new fleet of instances, the temporary files generated during the deployment process must be deleted.

* At the end of a successful deployment, the original instances in the deployment group must be deleted immediately to reduce costs.

How can a DevOps engineer meet these requirements?



Answer : C


Step 2: Use an Application Load Balancer and Auto Scaling Group

The Application Load Balancer (ALB) is essential to balance traffic across multiple instances, and Auto Scaling ensures the deployment scales automatically to meet demand.

Action: Associate the Auto Scaling group and Application Load Balancer target group with the deployment group.

Why: This configuration ensures that traffic is evenly distributed and that instances automatically scale based on traffic load.
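Expressed with the SDK, this association looks roughly like the following. A minimal boto3 sketch, assuming the CodeDeploy application, service role, Auto Scaling group, and target group already exist (all names and the account ID are placeholders):

import boto3

codedeploy = boto3.client('codedeploy')

codedeploy.create_deployment_group(
    applicationName='my-app',                    # placeholder
    deploymentGroupName='my-app-blue-green',     # placeholder
    serviceRoleArn='arn:aws:iam::111122223333:role/CodeDeployServiceRole',
    autoScalingGroups=['my-app-asg'],            # blue fleet
    loadBalancerInfo={
        'targetGroupInfoList': [{'name': 'my-app-target-group'}]
    },
    deploymentStyle={
        'deploymentType': 'BLUE_GREEN',
        'deploymentOption': 'WITH_TRAFFIC_CONTROL'
    },
    blueGreenDeploymentConfiguration={
        # the console's "Automatically copy Auto Scaling group" option
        'greenFleetProvisioningOption': {'action': 'COPY_AUTO_SCALING_GROUP'}
    }
)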

Step 3: Use a Custom Deployment Configuration

The company requires that traffic be rerouted to at least half of the instances for the deployment to succeed. AWS CodeDeploy allows you to configure custom deployment settings with specific thresholds for healthy hosts.

Action: Create a custom deployment configuration where 50% of the instances must be healthy.

Why: This ensures that the deployment continues only if at least 50% of the new instances are healthy.
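A custom deployment configuration with that threshold can be created as shown below; note that the predefined CodeDeployDefault.HalfAtATime configuration referenced in Option C applies the same 50 percent minimum. A minimal boto3 sketch (the configuration name is a placeholder):

import boto3

codedeploy = boto3.client('codedeploy')

codedeploy.create_deployment_config(
    deploymentConfigName='CustomHalfAtATime',   # placeholder name
    minimumHealthyHosts={
        'type': 'FLEET_PERCENT',                # at least 50 percent of instances must remain healthy
        'value': 50
    }
)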

Step 4: Clean Temporary Files Using Hooks

Before routing traffic to the new environment, the temporary files generated during the deployment must be deleted. This can be achieved using the BeforeAllowTraffic hook in the appspec.yml file.

Action: Use the BeforeAllowTraffic lifecycle event hook to clean up temporary files before routing traffic to the new environment.

Why: This ensures that the environment is clean before the new instances start serving traffic.
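The hook is declared under the BeforeAllowTraffic lifecycle event in appspec.yml and points to a script bundled with the revision. A hypothetical cleanup script that such a hook might run (the temp directory path is an assumption for illustration):

#!/usr/bin/env python3
# Hypothetical cleanup script invoked by the BeforeAllowTraffic hook in appspec.yml
import shutil
from pathlib import Path

TEMP_DIR = Path('/opt/my-app/tmp')   # assumed location of deployment temp files

if TEMP_DIR.exists():
    shutil.rmtree(TEMP_DIR)          # delete temporary files generated during the deployment
    TEMP_DIR.mkdir()                 # recreate the empty directory for the next run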

Step 5: Terminate Original Instances After Deployment

After a successful deployment, AWS CodeDeploy can automatically terminate the original instances (the blue environment) to save costs.

Action: Instruct AWS CodeDeploy to terminate the original instances after the new instances are healthy.

Why: This helps in cost reduction by removing unused instances after the deployment.
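In API terms, this behavior corresponds to the blue/green termination settings on the deployment group. A minimal boto3 sketch with placeholder names:

import boto3

codedeploy = boto3.client('codedeploy')

codedeploy.update_deployment_group(
    applicationName='my-app',                         # placeholder
    currentDeploymentGroupName='my-app-blue-green',   # placeholder
    blueGreenDeploymentConfiguration={
        'terminateBlueInstancesOnDeploymentSuccess': {
            'action': 'TERMINATE',
            'terminationWaitTimeInMinutes': 0         # terminate the original instances immediately
        }
    }
)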

This corresponds to Option C: Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.

Question 3

A company uses an organization in AWS Organizations to manage several AWS accounts that the company's developers use. The company requires all data to be encrypted in transit.

Multiple Amazon S3 buckets that were created in developer accounts allow unencrypted connections. A DevOps engineer must enforce encryption of data in transit for all existing S3 buckets that are created in accounts in the organization.

Which solution will meet these requirements?



Answer : C


Step 2: Deploying a Conformance Pack with Managed Rules

After AWS Config is enabled, you need to deploy a conformance pack that contains the s3-bucket-ssl-requests-only managed rule. This rule enforces that all S3 buckets allow only requests that use Secure Sockets Layer (SSL) connections (HTTPS).

Action: Deploy a conformance pack that uses the s3-bucket-ssl-requests-only rule. This rule ensures that only SSL connections (for encrypted data in transit) are allowed when accessing S3.

Why: This rule guarantees that data is encrypted in transit by enforcing SSL connections to the S3 buckets.
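Deploying the conformance pack across the organization can also be scripted. A minimal boto3 sketch, assuming AWS Config is already enabled and the call is made from the management account or a delegated administrator account (the pack name is a placeholder; in practice the remediation runbook from Step 3 is added to the same template as a remediation configuration):

import boto3

config = boto3.client('config')

# Conformance pack template containing the managed rule (inline YAML for brevity)
template_body = """
Resources:
  S3BucketSslRequestsOnly:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-ssl-requests-only
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY
"""

config.put_organization_conformance_pack(
    OrganizationConformancePackName='s3-ssl-requests-only-pack',   # placeholder
    TemplateBody=template_body
)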

Step 3: Using an AWS Systems Manager Automation Runbook

To automatically remediate the compliance issues, such as S3 buckets allowing non-SSL requests, a Systems Manager Automation runbook is deployed. The runbook will automatically add a bucket policy that denies access to any requests that do not use SSL.

Action: Use a Systems Manager Automation runbook that adds a bucket policy statement to deny access when the aws:SecureTransport condition key is false.

Why: This ensures that all S3 buckets across the organization comply with the policy of enforcing encrypted data in transit.
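The policy statement that the runbook adds typically looks like the following. A minimal boto3 sketch with a placeholder bucket name; a production runbook would merge this statement into any existing bucket policy rather than overwrite it:

import json
import boto3

s3 = boto3.client('s3')
bucket_name = 'example-bucket'   # placeholder

deny_insecure_transport = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket_name}",
            f"arn:aws:s3:::{bucket_name}/*"
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }]
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(deny_insecure_transport))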

This corresponds to Option C: Turn on AWS Config for the organization. Deploy a conformance pack that uses the s3-bucket-ssl-requests-only managed rule and an AWS Systems Manager Automation runbook. Use a runbook that adds a bucket policy statement to deny access to an S3 bucket when the value of the aws:SecureTransport condition key is false.

Question 4

A company is refactoring applications to use AWS. The company identifies an internal web application that needs to make Amazon S3 API calls in a specific AWS account.

The company wants to use its existing identity provider (IdP) auth.company.com for authentication. The IdP supports only OpenID Connect (OIDC). A DevOps engineer needs to secure the web application's access to the AWS account.

Which combination of steps will meet these requirements? (Select THREE.)



Answer : B, D, E


This corresponds to Option B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
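For reference, this registration is a single IAM call. A minimal boto3 sketch, assuming auth.company.com is the issuer URL and appid_from_idp is the audience configured in the IdP (the thumbprint is a placeholder):

import boto3

iam = boto3.client('iam')

iam.create_open_id_connect_provider(
    Url='https://auth.company.com',        # provider URL (issuer) of the existing IdP
    ClientIDList=['appid_from_idp'],       # audience for which the IdP issues tokens
    ThumbprintList=['<40-character SHA-1 thumbprint of the IdP TLS certificate>']
)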

Step 2: Creating an IAM Role with Specific Permissions

Next, you need to create an IAM role with a trust policy that allows the external IdP to assume it when certain conditions are met. Specifically, the trust policy needs to allow the role to be assumed based on the context key auth.company.com:aud (the audience claim in the token).

Action: Create an IAM role that has the necessary permissions (e.g., Amazon S3 access). The role's trust policy should specify the OIDC IdP as the trusted entity and validate the audience claim (auth.company.com:aud), which comes from the token provided by the IdP.

Why: This step ensures that only the specified web application authenticated via OIDC can assume the IAM role to make API calls.

This corresponds to Option D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
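The trust policy from Option D can be expressed as follows. A minimal boto3 sketch in which the account ID, role name, and attached S3 policy are placeholders:

import json
import boto3

iam = boto3.client('iam')

# ARN of the OIDC provider created in the previous step (account ID is a placeholder)
oidc_provider_arn = 'arn:aws:iam::111122223333:oidc-provider/auth.company.com'

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": oidc_provider_arn},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {"auth.company.com:aud": "appid_from_idp"}
        }
    }]
}

iam.create_role(
    RoleName='WebAppS3Role',   # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

# Grant the S3 permissions the web application needs (managed policy shown for illustration)
iam.attach_role_policy(
    RoleName='WebAppS3Role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)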

Step 3: Using Temporary Credentials via the AssumeRoleWithWebIdentity API

To securely make Amazon S3 API calls, the web application needs temporary credentials. The web application can use the AssumeRoleWithWebIdentity API call to assume the IAM role configured in the previous step and obtain temporary AWS credentials. These credentials can then be used to interact with Amazon S3.

Action: The web application must be configured to call the AssumeRoleWithWebIdentity API operation, passing the OIDC token from the IdP to obtain temporary credentials.

Why: This allows the web application to authenticate via the external IdP and then authorize access to AWS resources securely using short-lived credentials.

This corresponds to Option E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
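The exchange of the OIDC token for temporary credentials looks like this. A minimal boto3 sketch (the role ARN and token value are placeholders):

import boto3

sts = boto3.client('sts')

oidc_token = '<ID token issued by auth.company.com>'   # obtained by the web application at sign-in

response = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::111122223333:role/WebAppS3Role',   # role from the previous step
    RoleSessionName='web-app-session',
    WebIdentityToken=oidc_token
)

creds = response['Credentials']
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
print(s3.list_buckets()['Buckets'])   # S3 calls now run with the short-lived credentials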

Summary of Selected Answers:

B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.

D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.

E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.

This setup enables the web application to use OpenID Connect (OIDC) for authentication and securely interact with Amazon S3 in a specific AWS account using short-lived credentials obtained through AWS Security Token Service (STS).

Question 5

A company uses Amazon EC2 as its primary compute platform. A DevOps team wants to audit the company's EC2 instances to check whether any prohibited applications have been installed on the EC2 instances.

Which solution will meet these requirements with the MOST operational efficiency?



Answer : A

* Configure AWS Systems Manager on Each Instance:

AWS Systems Manager provides a unified interface for managing AWS resources. Install the Systems Manager agent on each EC2 instance to enable inventory management and other features.

* Use AWS Systems Manager Inventory:

Systems Manager Inventory collects metadata about your instances and the software installed on them. This data includes information about applications, network configurations, and more.

Enable Systems Manager Inventory on all EC2 instances to gather detailed information about installed applications.

* Use Systems Manager Resource Data Sync to Synchronize and Store Findings in an Amazon S3 Bucket:

Resource Data Sync aggregates inventory data from multiple accounts and regions into a single S3 bucket, making it easier to query and analyze the data.

Configure Resource Data Sync to automatically transfer inventory data to an S3 bucket for centralized storage.

* Create an AWS Lambda Function that Runs When New Objects are Added to the S3 Bucket:

Use an S3 event to trigger a Lambda function whenever new inventory data is added to the S3 bucket.

The Lambda function can parse the inventory data and check for the presence of prohibited applications.

* Configure the Lambda Function to Identify Prohibited Applications:

The Lambda function should be programmed to scan the inventory data for any known prohibited applications and generate alerts or take appropriate actions if such applications are found.

Example Lambda function in Python (simplified; the inventory JSON structure shown is illustrative):

import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')

    # S3 event notification: read the inventory object that was just written
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    response = s3.get_object(Bucket=bucket, Key=key)
    inventory_data = json.loads(response['Body'].read().decode('utf-8'))

    prohibited_apps = ['app1', 'app2']
    for instance in inventory_data['Instances']:
        for app in instance['Applications']:
            if app['Name'] in prohibited_apps:
                # Send notification or take action
                print(f"Prohibited application found: {app['Name']} on instance {instance['InstanceId']}")

    return {'statusCode': 200, 'body': json.dumps('Check completed')}

By leveraging AWS Systems Manager Inventory, Resource Data Sync, and Lambda, this solution provides an efficient and automated way to audit EC2 instances for prohibited applications.
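The Systems Manager pieces of this solution, the inventory association and the resource data sync, can be set up with two SDK calls. A minimal boto3 sketch, assuming the SSM Agent is already running on the instances and the destination bucket allows Systems Manager to write to it (the bucket name and Region are placeholders):

import boto3

ssm = boto3.client('ssm')

# 1. Collect software inventory from all managed instances once a day
ssm.create_association(
    Name='AWS-GatherSoftwareInventory',
    Targets=[{'Key': 'InstanceIds', 'Values': ['*']}],
    ScheduleExpression='rate(1 day)'
)

# 2. Aggregate the inventory data into an S3 bucket for analysis
ssm.create_resource_data_sync(
    SyncName='inventory-to-s3',
    S3Destination={
        'BucketName': 'inventory-bucket',   # placeholder
        'SyncFormat': 'JsonSerDe',
        'Region': 'us-east-1'               # placeholder
    }
)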


AWS Systems Manager Inventory

AWS Systems Manager Resource Data Sync

S3 Event Notifications

AWS Lambda

Question 6

A company has an organization in AWS Organizations. A DevOps engineer needs to maintain multiple AWS accounts that belong to different OUs in the organization. All resources, including IAM policies and Amazon S3 policies within an account, are deployed through AWS CloudFormation. All templates and code are maintained in an AWS CodeCommit repository. Recently, some developers have not been able to access an S3 bucket from some accounts in the organization.

The following policy is attached to the S3 bucket.

What should the DevOps engineer do to resolve this access issue?



Answer : D

Verify No SCP Blocking Access:

Ensure that no Service Control Policy (SCP) is blocking access for developers to the S3 bucket. SCPs are applied at the organization or organizational unit (OU) level in AWS Organizations and can restrict what actions users and roles in the affected accounts can perform.

Verify No IAM Policy Permissions Boundaries Blocking Access:

IAM permissions boundaries can limit the maximum permissions that a user or role can have. Verify that these boundaries are not restricting access to the S3 bucket.

Make Necessary Changes to SCP and IAM Policy Permissions Boundaries:

Adjust the SCPs and IAM permissions boundaries if they are found to be the cause of the access issue. Make sure these changes are reflected in the code maintained in the AWS CodeCommit repository.

Invoke Deployment Through CloudFormation:

Commit the updated policies to the CodeCommit repository.

Use AWS CloudFormation to deploy the changes across the relevant accounts and resources to ensure that the updated permissions are applied consistently.

By ensuring no SCPs or IAM policy permissions boundaries are blocking access and making necessary changes if they are, the DevOps engineer can resolve the access issue for developers trying to access the S3 bucket.
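A quick, scriptable way to perform these checks is to list the SCPs that apply to an affected account and inspect the permissions boundary, if any, on the developers' role. A minimal boto3 sketch with placeholder identifiers:

import boto3

org = boto3.client('organizations')   # run from the management account
iam = boto3.client('iam')             # run in (or with a role assumed into) the affected account

# SCPs attached directly to the affected account (also check its parent OUs and the root)
scps = org.list_policies_for_target(
    TargetId='111122223333',           # placeholder account ID
    Filter='SERVICE_CONTROL_POLICY'
)
for policy in scps['Policies']:
    print(policy['Name'], policy['Arn'])

# Permissions boundary (if any) on the developers' role
role = iam.get_role(RoleName='DeveloperRole')['Role']   # placeholder role name
print(role.get('PermissionsBoundary', 'No permissions boundary attached'))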


AWS SCPs

IAM Permissions Boundaries

Deploying CloudFormation Templates

Question 7

A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.

A limitation of the current system is that if any step fails, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application must reprocess only the failed steps.

What is the MOST operationally efficient solution that meets these requirements?



Answer : D

* Use AWS Step Functions to Orchestrate Processing:

AWS Step Functions allow you to build distributed applications by combining AWS Lambda functions or other AWS services into workflows.

Decoupling the processing into Step Functions tasks enables you to retry individual steps without reprocessing the entire record.

* Architectural Steps:

Create a web application to pass records to AWS Step Functions:

The web application can be a simple frontend that receives input and triggers the Step Functions workflow.

Define a Step Functions state machine:

Each step in the state machine represents a processing stage. If a step fails, Step Functions can retry the step based on defined conditions.

Use AWS Lambda functions:

Lambda functions can be used to handle each processing step. These functions can be stateless and handle specific tasks, reducing the complexity of error handling and reprocessing logic.

* Operational Efficiency:

Using Step Functions and Lambda improves operational efficiency by providing built-in error handling, retries, and state management.

This architecture scales automatically and isolates failures to individual steps, ensuring only failed steps are retried.
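A minimal sketch of such a state machine, assuming two Lambda functions already exist for two of the processing steps (all names and ARNs are placeholders); the Retry block on each Task state is what limits reprocessing to the failed step only:

import json
import boto3

sfn = boto3.client('stepfunctions')

retry_policy = [{
    'ErrorEquals': ['States.ALL'],
    'IntervalSeconds': 5,
    'MaxAttempts': 3,
    'BackoffRate': 2.0
}]

definition = {
    'Comment': 'Sequential record processing with per-step retries',
    'StartAt': 'StepOne',
    'States': {
        'StepOne': {
            'Type': 'Task',
            'Resource': 'arn:aws:lambda:us-east-1:111122223333:function:process-step-one',
            'Retry': retry_policy,    # only this step is retried if it fails
            'Next': 'StepTwo'
        },
        'StepTwo': {
            'Type': 'Task',
            'Resource': 'arn:aws:lambda:us-east-1:111122223333:function:process-step-two',
            'Retry': retry_policy,
            'End': True
        }
    }
}

sfn.create_state_machine(
    name='record-processing',
    definition=json.dumps(definition),
    roleArn='arn:aws:iam::111122223333:role/StepFunctionsExecutionRole'   # placeholder
)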


AWS Step Functions

Building Workflows with Step Functions
