A company is using an Amazon EC2 Auto Scaling group to support a workload. The Auto Scaling group is configured with two similar scaling policies. One scaling policy adds 5 instances when CPU utilization reaches 80%. The other scaling policy also adds 5 instances when CPU utilization reaches 80%.
What will happen when CPU utilization reaches the 80% threshold?
Answer : B
Scaling Policies in Auto Scaling:
When multiple scaling policies trigger at the same time, each policy is executed independently.
If both policies are set to add 5 instances when CPU utilization reaches 80%, they will both be executed when the threshold is met.
Therefore, the total number of instances added will be the sum of the instances specified in both policies.
In this case, 5 instances from one policy and 5 instances from the other policy will result in a total of 10 instances being added.
Steps to Configure and Verify Scaling Policies:
Go to the AWS Management Console.
Navigate to EC2 and select 'Auto Scaling Groups.'
Select your Auto Scaling group and review the scaling policies.
Ensure that both scaling policies are configured to trigger at 80% CPU utilization.
Monitor the Auto Scaling group's activity to verify the addition of instances when the CPU utilization threshold is reached.
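For reference, the two policies described above could also be created programmatically. The following is a minimal boto3 sketch; the Auto Scaling group name and policy names are placeholders, and the CloudWatch alarms that trigger the policies at 80% CPU utilization would be attached to the returned policy ARNs separately.
# Minimal sketch: create two simple scaling policies that each add 5 instances.
# "my-asg" and the policy names are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

for policy_name in ["cpu-high-add-5-a", "cpu-high-add-5-b"]:
    response = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName=policy_name,
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",  # add a fixed number of instances
        ScalingAdjustment=5,                # +5 instances per policy
        Cooldown=300,
    )
    # A CloudWatch alarm on CPUUtilization >= 80% would reference this ARN.
    print(policy_name, response["PolicyARN"])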
Topic 2, Simulation
A SysOps administrator must ensure that all of a company's current and future Amazon S3 buckets have logging enabled. If an S3 bucket does not have logging enabled, an automated process must enable logging for the S3 bucket.
Which solution will meet these requirements?
Answer : C, D
AWS Config Managed Rule for S3 Logging:
The s3-bucket-logging-enabled AWS Config rule checks whether S3 buckets have logging enabled.
Steps:
Go to the AWS Management Console.
Navigate to AWS Config.
Create a rule using s3-bucket-logging-enabled.
Add a remediation action using an AWS Lambda function or Systems Manager Automation runbook.
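For reference, the managed rule can also be created with the AWS Config API. A minimal boto3 sketch (the rule name is arbitrary; the remediation action is attached separately, as shown later):
# Minimal sketch: create the s3-bucket-logging-enabled managed rule.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-logging-enabled",
        "Description": "Checks whether server access logging is enabled for S3 buckets.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_LOGGING_ENABLED",  # AWS managed rule identifier
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)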
Using AWS Lambda for Remediation:
Create a Lambda function that enables logging on S3 buckets.
Steps:
Write a Lambda function in Python or Node.js to enable logging.
Configure the function to trigger on non-compliant buckets.
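A minimal sketch of such a function is shown below. The destination logging bucket and the event field that carries the bucket name are assumptions; the exact event shape depends on how the function is invoked.
# Illustrative Lambda handler: enable server access logging on a bucket.
# LOG_BUCKET is a hypothetical central logging bucket that must already grant
# the S3 log delivery service permission to write logs.
import boto3

s3 = boto3.client("s3")

LOG_BUCKET = "central-access-logs-bucket"  # placeholder

def handler(event, context):
    bucket_name = event["bucketName"]  # assumed field in the invocation event
    s3.put_bucket_logging(
        Bucket=bucket_name,
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": LOG_BUCKET,
                "TargetPrefix": f"{bucket_name}/",
            }
        },
    )
    return {"bucket": bucket_name, "loggingEnabled": True}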
Using AWS Systems Manager Automation:
The AWS-ConfigureS3BucketLogging runbook automates enabling logging.
Steps:
Go to the AWS Management Console.
Navigate to Systems Manager.
Create an Automation document or use the existing AWS-ConfigureS3BucketLogging runbook.
Configure the remediation action to use this runbook.
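A minimal boto3 sketch that attaches the runbook as the rule's remediation action follows. The role ARN and target bucket are placeholders, and the parameter names should be verified against the current version of the AWS-ConfigureS3BucketLogging runbook.
# Minimal sketch: set AWS-ConfigureS3BucketLogging as the automatic remediation
# for the s3-bucket-logging-enabled rule. ARNs and bucket names are placeholders.
import boto3

config = boto3.client("config")

config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "s3-bucket-logging-enabled",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-ConfigureS3BucketLogging",
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {"Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}
                },
                "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "TargetBucket": {"StaticValue": {"Values": ["central-access-logs-bucket"]}},
            },
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
        }
    ]
)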
A SysOps administrator needs to create a report that shows how many bytes are sent to and received from each target group member for an Application Load Balancer (ALB).
Which combination of steps should the SysOps administrator take to meet these requirements? (Select TWO.)
Answer : A, C
Enable Access Logging for the ALB:
Access logging provides detailed information about requests sent to your load balancer.
Steps:
Go to the AWS Management Console.
Navigate to EC2 and select 'Load Balancers.'
Select your Application Load Balancer.
Under the 'Attributes' tab, enable 'Access logs.'
Specify an S3 bucket where the logs will be saved.
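The same attributes can be set with the API. A minimal boto3 sketch, using a placeholder load balancer ARN and bucket name (the bucket must have a policy that allows the load balancer to write logs):
# Minimal sketch: enable ALB access logging to an existing S3 bucket.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/1234567890abcdef",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},  # placeholder bucket
        {"Key": "access_logs.s3.prefix", "Value": "alb"},
    ],
)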
Use Amazon Athena to Query the ALB Logs:
Athena allows you to run SQL queries on data stored in S3.
Steps:
Go to the AWS Management Console.
Navigate to Athena.
Create a table for the ALB logs using the appropriate schema.
Run queries to calculate the total bytes sent and received, grouped by the target IP address and port.
Example query:
SELECT target_ip, target_port, SUM(received_bytes) AS total_received, SUM(sent_bytes) AS total_sent
FROM alb_logs
GROUP BY target_ip, target_port
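Optionally, the query can be started programmatically. A minimal boto3 sketch, with a hypothetical Athena database name and query results location:
# Minimal sketch: run the bytes-per-target query with the Athena API.
import boto3

athena = boto3.client("athena")

query = """
SELECT target_ip, target_port,
       SUM(received_bytes) AS total_received,
       SUM(sent_bytes) AS total_sent
FROM alb_logs
GROUP BY target_ip, target_port
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "alb_log_db"},                    # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)
print(response["QueryExecutionId"])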
A company hosts an application on Amazon EC2 instances. The instances are in an Amazon EC2 Auto Scaling group that uses a launch template. The amount of application traffic changes throughout the day, and scaling events happen frequently.
A SysOps administrator needs to help developers troubleshoot the application. When a scaling event removes an instance, EC2 Auto Scaling terminates the instance before the developers can log in to the instance to diagnose issues.
Which solution will prevent termination of the instance so that the developers can log in to the instance?
Answer : B
Enabling Instance Scale-In Protection:
Instance scale-in protection prevents Auto Scaling from terminating specific instances.
Steps:
Go to the AWS Management Console.
Navigate to EC2 and select 'Auto Scaling Groups.'
Select your Auto Scaling group.
Go to the 'Instance management' tab.
Select the instances you want to protect and click 'Actions.'
Choose 'Enable scale-in protection.'
This ensures that instances are not terminated during troubleshooting.
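Scale-in protection can also be applied with the API. A minimal boto3 sketch, with placeholder group and instance identifiers:
# Minimal sketch: protect one instance from scale-in while developers investigate.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.set_instance_protection(
    AutoScalingGroupName="my-asg",              # placeholder group name
    InstanceIds=["i-0123456789abcdef0"],        # placeholder instance ID
    ProtectedFromScaleIn=True,
)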
A company needs to monitor the disk utilization of Amazon Elastic Block Store (Amazon EBS) volumes. The EBS volumes are attached to Amazon EC2 Linux instances. A SysOps administrator must set up an Amazon CloudWatch alarm that provides an alert when disk utilization increases to more than 80%.
Which combination of steps must the SysOps administrator take to meet these requirements? (Select THREE.)
Answer : A, C, E
Create an IAM role with the CloudWatchAgentServerPolicy:
This policy grants the necessary permissions for the CloudWatch agent to collect and send metrics.
Steps:
Go to the AWS Management Console.
Navigate to IAM and create a new role.
Choose 'EC2' as the trusted entity.
Attach the 'CloudWatchAgentServerPolicy' managed policy to the role.
Attach this IAM role to your EC2 instances.
Install and start the CloudWatch agent:
The CloudWatch agent must be installed and configured to collect disk utilization metrics.
Steps:
Use AWS Systems Manager or SSH to connect to your instances.
Install the CloudWatch agent using the following commands:
sudo yum install amazon-cloudwatch-agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/path/to/your-config-file.json -s
Start the agent:
sudo systemctl start amazon-cloudwatch-agent
Configure a CloudWatch alarm:
Create an alarm based on the disk_used_percent metric.
Steps:
Go to the AWS Management Console.
Navigate to CloudWatch and select 'Alarms' from the left-hand menu.
Click on 'Create alarm.'
Select the disk_used_percent metric.
Set the threshold to 80% and configure the alarm actions (e.g., sending a notification).
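The alarm can also be created with the API. A minimal boto3 sketch, assuming the agent publishes disk_used_percent to the CWAgent namespace; the instance ID, dimensions, and SNS topic are placeholders and must match what the agent actually emits.
# Minimal sketch: alarm when root-volume disk utilization exceeds 80%.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="root-volume-disk-used-over-80",
    Namespace="CWAgent",
    MetricName="disk_used_percent",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder
        {"Name": "path", "Value": "/"},
        {"Name": "device", "Value": "xvda1"},                    # placeholder device
        {"Name": "fstype", "Value": "xfs"},                      # placeholder filesystem
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:disk-alerts"],  # placeholder topic
)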
A company runs a single-page web application on AWS. The application uses Amazon CloudFront to deliver static content from an Amazon S3 bucket origin. The application also uses an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to serve API calls.
Users sometimes report that the website is not operational, even when monitoring shows that the index page is reachable and that the EKS cluster is healthy. A SysOps administrator must implement additional monitoring that can detect when the website is not operational before users report the problem.
Which solution will meet these requirements?
Answer : A
Amazon CloudWatch Synthetics:
CloudWatch Synthetics allows you to create canaries to monitor your endpoints and API calls, simulating user behavior to detect issues before users do.
Steps:
Go to the AWS Management Console.
Navigate to CloudWatch and select 'Synthetics.'
Click on 'Create canary.'
Choose 'Heartbeat monitoring' as the blueprint.
Configure the canary to point to the FQDN of the website.
Set the frequency and retention settings as per your requirement.
Create the canary.
This setup continuously checks the operational status of your website, alerting you if it becomes unreachable or has issues.
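For reference, a canary can also be created with the API. The following is a minimal boto3 sketch, assuming a packaged heartbeat script has already been uploaded to S3; the bucket, key, handler, role ARN, and runtime version are placeholders and must match a real script package and a currently supported Synthetics runtime.
# Minimal sketch: create a heartbeat canary that runs every 5 minutes.
import boto3

synthetics = boto3.client("synthetics")

synthetics.create_canary(
    Name="website-heartbeat",
    Code={
        "S3Bucket": "my-canary-scripts",      # placeholder: bucket holding the script package
        "S3Key": "heartbeat-canary.zip",      # placeholder: zipped canary script
        "Handler": "heartbeat.handler",       # placeholder: handler exported by the script
    },
    ArtifactS3Location="s3://my-canary-artifacts/",  # placeholder artifact location
    ExecutionRoleArn="arn:aws:iam::111122223333:role/CanaryExecutionRole",  # placeholder role
    RuntimeVersion="syn-nodejs-puppeteer-6.2",       # must be a currently supported runtime
    Schedule={"Expression": "rate(5 minutes)"},
)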
An application uses an Amazon Aurora MySQL DB cluster that includes one Aurora Replica. The application's read performance degrades when there are more than 200 user connections. The number of user connections is approximately 180 on a consistent basis. Occasionally, the number of user connections increases rapidly to more than 200.
A SysOps administrator must implement a solution that will scale the application automatically as user demand increases or decreases.
Which solution will meet these requirements?
Answer : D
Aurora Auto Scaling:
Aurora Auto Scaling adjusts the number of Aurora Replicas in response to changes in connectivity or workload.
Steps:
Go to the AWS Management Console.
Navigate to RDS and select the Aurora cluster.
Under 'Actions,' choose 'Add Aurora Replica' to initially add replicas if needed.
Go to the 'Auto Scaling' section and create an auto scaling policy.
Set the target value for the DatabaseConnections metric to 195.
Define the minimum and maximum number of replicas.
Save the configuration.
This ensures that the Aurora cluster scales automatically when the number of connections approaches the threshold, improving read performance.
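Aurora Replica auto scaling is configured through Application Auto Scaling. A minimal boto3 sketch, with a hypothetical DB cluster identifier and replica limits:
# Minimal sketch: target tracking on reader connections for an Aurora cluster.
import boto3

appscaling = boto3.client("application-autoscaling")

resource_id = "cluster:my-aurora-cluster"  # placeholder DB cluster identifier

appscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

appscaling.put_scaling_policy(
    PolicyName="aurora-reader-connections-target-tracking",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 195.0,  # matches the 195-connection target described above
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageDatabaseConnections"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)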