A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that make database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
Answer : A
From the AWS Well-Architected Framework (SEC 9): How do you protect your data in transit?
Best Practices:
Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).
Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.
Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.
Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.
https://wa.aws.amazon.com/wat.question.SEC_9.en.html
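In practice, enforcing encryption in transit to the web tier usually means terminating TLS on the NLB with a certificate from ACM. A minimal boto3 sketch, assuming a hypothetical domain name and placeholder load balancer and target group ARNs (the certificate must be validated before the listener can use it):

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a public certificate for the web tier's domain (hypothetical domain).
cert = acm.request_certificate(
    DomainName="sensors.example.com",
    ValidationMethod="DNS",
)

# Add a TLS listener to the existing Network Load Balancer so traffic
# reaching the web tier is encrypted in transit (ARNs are placeholders).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/sensor-nlb/abc123",
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tier/def456",
        }
    ],
)
```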
A company is planning to migrate an on-premises online transaction processing (OLTP) database that uses MySQL to an AWS managed database management system. Several reporting and analytics applications use the on-premises database heavily on weekends and at the end of each month. The cloud-based solution must be able to handle read-heavy surges during weekends and at the end of each month.
Which solution will meet these requirements?
A. Migrate the database to an Amazon Aurora MySQL cluster. Configure Aurora Auto Scaling to use replicas to handle surges.
B. Migrate the database to an Amazon EC2 instance that runs MySQL. Use an EC2 instance type that has ephemeral storage. Attach Amazon EBS Provisioned IOPS SSD (io2) volumes to the instance.
C. Migrate the database to an Amazon RDS for MySQL database. Configure the RDS for MySQL database for a Multi-AZ deployment, and set up auto scaling.
D. Migrate the database to Amazon Redshift. Use Amazon Redshift as the database for both OLTP and analytics applications.
Answer : A
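Aurora Auto Scaling for read replicas is configured through Application Auto Scaling. A hedged sketch, assuming a hypothetical cluster identifier, that adds replicas when reader CPU climbs during the weekend and month-end reporting surges:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target
# (cluster name is a placeholder).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Target-tracking policy: add Aurora Replicas when average reader CPU
# exceeds 60%, and scale back in when the surge ends.
autoscaling.put_scaling_policy(
    PolicyName="reporting-read-surge",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```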
A solutions architect needs to implement a solution that can handle up to 5,000 messages per second. The solution must publish messages as events to multiple consumers. The messages are up to 500 KB in size. The message consumers need to have the ability to use multiple programming languages to consume the messages with minimal latency. The solution must retain published messages for more than 3 months. The solution must enforce strict ordering of the messages.
Which solution will meet these requirements?
A. Publish messages to an Amazon Kinesis Data Streams data stream. Enable enhanced fan-out. Ensure that consumers ingest the data stream by using dedicated throughput.
B. Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to subscribe to the topic.
C. Publish messages to Amazon EventBridge. Allow each consumer to create rules to deliver messages to the consumer's own target.
D. Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use Amazon Data Firehose to subscribe to the topic.
Answer : A
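A sketch of the two Kinesis Data Streams settings that make option A work: extending retention past 3 months (up to 365 days) and registering an enhanced fan-out consumer so each consumer gets its own dedicated 2 MB/s-per-shard throughput. The stream and consumer names are placeholders:

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM_NAME = "sensor-events"  # placeholder stream name

# Keep published records for roughly one year (8760 hours), which
# covers the 3-month retention requirement.
kinesis.increase_stream_retention_period(
    StreamName=STREAM_NAME,
    RetentionPeriodHours=8760,
)

# Register an enhanced fan-out consumer; each registered consumer
# reads over its own dedicated pipe instead of sharing shard throughput.
stream_arn = kinesis.describe_stream_summary(StreamName=STREAM_NAME)[
    "StreamDescriptionSummary"
]["StreamARN"]

consumer = kinesis.register_stream_consumer(
    StreamARN=stream_arn,
    ConsumerName="analytics-service",
)
print(consumer["Consumer"]["ConsumerARN"])
```

Ordering is preserved per partition key within a shard, and SDKs in multiple languages can read the stream, which covers the remaining requirements.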
A company runs a Microsoft Windows SMB file share on-premises to support an application. The company wants to migrate the application to AWS. The company wants to share storage across multiple Amazon EC2 instances.
Which solutions will meet these requirements with the LEAST operational overhead? (Select TWO.)
A. Create an Amazon Elastic File System (Amazon EFS) file system with elastic throughput.
B. Create an Amazon FSx for NetApp ONTAP file system.
C. Use Amazon Elastic Block Store (Amazon EBS) to create a self-managed Windows file share on the instances.
D. Create an Amazon FSx for Windows File Server file system.
E. Create an Amazon FSx for OpenZFS file system.
Answer : A, D
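Of the selected options, FSx for Windows File Server is the one that maps directly onto the existing SMB share: it is a fully managed Windows file system that multiple EC2 instances can mount. A minimal boto3 sketch, with placeholder subnet, security group, and Active Directory IDs:

```python
import boto3

fsx = boto3.client("fsx")

# Create a managed SMB file share for the migrated application's
# EC2 instances to mount (all IDs below are placeholders).
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",  # AWS Managed Microsoft AD
        "ThroughputCapacity": 32,             # MB/s
        "DeploymentType": "SINGLE_AZ_2",
    },
)
print(response["FileSystem"]["FileSystemId"])
```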
How can DynamoDB data be made available for long-term analytics with minimal operational overhead?
Answer : A
Option A is the most automated and cost-efficient solution for exporting data to S3 for analytics.
Option B involves manual setup of Streams to S3.
Options C and D introduce complexity with EMR.
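The option text is not reproduced above, but the explanation points at DynamoDB's native export to Amazon S3, which runs as a fully managed job (no EMR cluster or custom pipeline) and only requires point-in-time recovery on the table. A hedged sketch with placeholder table and bucket names:

```python
import boto3

dynamodb = boto3.client("dynamodb")

TABLE_NAME = "AppData"                                                 # placeholder
TABLE_ARN = "arn:aws:dynamodb:us-east-1:111122223333:table/AppData"    # placeholder
BUCKET = "analytics-data-lake"                                         # placeholder

# Export to S3 requires point-in-time recovery on the source table.
dynamodb.update_continuous_backups(
    TableName=TABLE_NAME,
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Kick off a fully managed export for long-term analytics in S3.
export = dynamodb.export_table_to_point_in_time(
    TableArn=TABLE_ARN,
    S3Bucket=BUCKET,
    S3Prefix="dynamodb-exports/",
    ExportFormat="DYNAMODB_JSON",
)
print(export["ExportDescription"]["ExportStatus"])
```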
How can trade data from DynamoDB be ingested into an S3 data lake for near real-time analysis?
Answer : A
Option A is the simplest solution, using DynamoDB Streams and Lambda for real-time ingestion into S3.
Options B, C, and D add unnecessary complexity with Data Firehose or Kinesis.
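A minimal sketch of the Lambda handler behind this pattern: the function is subscribed to the table's DynamoDB stream and lands each batch of change records in the S3 data lake for near real-time analysis. The bucket name and object key layout are assumptions:

```python
import json
import os
import time
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("DATA_LAKE_BUCKET", "trade-data-lake")  # placeholder


def handler(event, context):
    """Triggered by DynamoDB Streams; writes each batch of trade
    change records to S3 as a single JSON object."""
    records = [
        {
            "eventName": r["eventName"],
            "keys": r["dynamodb"].get("Keys", {}),
            "newImage": r["dynamodb"].get("NewImage", {}),
            "approximateCreationDateTime": str(
                r["dynamodb"].get("ApproximateCreationDateTime", "")
            ),
        }
        for r in event.get("Records", [])
    ]
    if not records:
        return {"written": 0}

    key = f"trades/{time.strftime('%Y/%m/%d')}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return {"written": len(records), "key": key}
```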
A company runs HPC workloads requiring high IOPS.
Which combination of steps will meet these requirements? (Select TWO)
Answer : B, E
Option B: FSx for Lustre is designed for HPC workloads with high IOPS.
Option E: A cluster placement group ensures low-latency networking for HPC analytics workloads.
Option A: Amazon EFS is not optimized for HPC.
Option D: Mountpoint for S3 does not meet high IOPS needs.
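A hedged sketch of the two pieces: a cluster placement group for the HPC compute nodes and a persistent FSx for Lustre file system for high-IOPS shared storage. The group name, subnet ID, and sizing are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
fsx = boto3.client("fsx")

# Cluster placement group packs the HPC instances close together
# for low-latency, high-throughput networking.
ec2.create_placement_group(
    GroupName="hpc-analytics-cluster",
    Strategy="cluster",
)

# Persistent FSx for Lustre file system for high-IOPS shared data
# (subnet ID and capacity are placeholders).
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB, in Lustre-supported increments
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,  # MB/s per TiB
    },
)
```

The HPC instances would then be launched with Placement={"GroupName": "hpc-analytics-cluster"} and mount the Lustre file system as shared scratch storage.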