Amazon DBS-C01 AWS Certified Database - Specialty Exam Practice Test

Page: 1 / 14
Total 322 questions
Question 1

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?



Answer : B

According to the Amazon Redshift documentation, you can enable database encryption for your clusters to help protect data at rest. Amazon Redshift uses a hierarchy of encryption keys, and you can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level key in that hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage the keys.

To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.


Question 2

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?



Answer : B

Correct Answer: B

Explanation from Amazon documents:

Amazon Redshift supports encryption at rest using AWS Key Management Service (AWS KMS) customer master keys (CMKs). To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.

Option A is incorrect because you cannot copy a CMK from one Region to another; you can only import key material from an external source into a CMK in a specific Region. Option C is incorrect because it adds unnecessary steps of copying snapshots to Amazon S3 buckets and using S3 Cross-Region Replication. Option D is incorrect because you cannot create a CMK that shares key material with a CMK in a different Region; at most, you can import the same customer-supplied key material into a separate CMK that you create in the other Region.
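As a rough illustration of this process, the following sketch uses the AWS SDK for Python (Boto3) with hypothetical cluster, grant, key, and Region names. It creates the snapshot copy grant against the destination Region and then enables cross-Region snapshot copy on the source cluster.

import boto3

# Hypothetical Regions and names; not taken from the exam question.
dest = boto3.client("redshift", region_name="us-west-2")   # destination Region
src = boto3.client("redshift", region_name="us-east-1")    # source Region

# The grant lets Amazon Redshift encrypt copied snapshots with a customer
# managed KMS key that exists in the destination Region.
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID",
)

# Turn on automatic cross-Region copying of snapshots for the cluster.
src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    SnapshotCopyGrantName="dr-copy-grant",
    RetentionPeriod=7,                    # days to keep copied automated snapshots
    ManualSnapshotRetentionPeriod=30,     # days to keep copied manual snapshots
)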


Question 3

A database administrator needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum automated snapshot retention period allows.

Which solution will meet these requirements in the MOST operationally efficient way?



Answer : A

Correct Answer: A

Explanation from Amazon documents:

Amazon RDS for Microsoft SQL Server supports two types of database snapshots: automated and manual. Automated snapshots are taken daily and are retained for a period of time that you specify, from 1 to 35 days. Manual snapshots are taken by you and are retained until you delete them.

To save a particular automated database snapshot for longer than the maximum number of days, the database administrator can create a manual copy of the snapshot. This can be done with the AWS Management Console, the AWS CLI, or the RDS API. The manual copy is retained until it is deleted, regardless of the retention period of the automated snapshot. This solution is the most operationally efficient way to meet the requirement because it involves only a single copy operation and no additional infrastructure or data movement outside of Amazon RDS.

Therefore, option A is the correct solution to meet the requirements. Option B is not operationally efficient because it requires exporting the contents of the snapshot to an Amazon S3 bucket, which can be time-consuming and costly. Option C is not possible because the maximum retention period for automated snapshots is 35 days, not 45 days. Option D is not operationally efficient because it requires creating a native SQL Server backup and saving it to an Amazon S3 bucket, which can also be time-consuming and costly.
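A minimal Boto3 sketch of this copy operation follows; the snapshot identifiers are hypothetical (automated snapshot names are prefixed with rds:).

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Copy the automated snapshot (prefixed with "rds:") to a manual snapshot,
# which RDS keeps until it is explicitly deleted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:sqlserver-prod-2024-05-01-06-10",
    TargetDBSnapshotIdentifier="sqlserver-prod-keep-2024-05-01",
)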


Question 4

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?



Answer : D

Correct Answer: D

Explanation from Amazon documents:

A CloudFormation stack policy is a JSON document that defines the update actions that can be performed on designated resources in a CloudFormation stack. A stack policy can be used to prevent accidental updates or deletions of stack resources, such as a production database.

The Update:Replace action is an update action that replaces an existing resource with a new one during a stack update. This action can cause data loss or downtime for the resource. To prevent this action from affecting the production database resources, the database engineer should use a Deny statement for the Update:Replace action on the production database resources in the stack policy. This statement will override any Allow statements for the same action and resource, and protect the production database resources from being replaced during a stack update.

Therefore, option D is the correct stack policy action to meet the requirements. The update actions that a stack policy can allow or deny are Update:Modify, Update:Replace, Update:Delete, and Update:* (all update actions). Options A, B, and C do not deny the Update:Replace action on the production database resources, so they would not prevent those resources from being replaced during a stack update.
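As an illustration of this approach, here is a minimal Boto3 sketch; the stack name, logical resource ID pattern, and Region are hypothetical. The policy allows all update actions except Update:Replace on the production database resources.

import json
import boto3

# Hypothetical stack and logical resource names.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "Update:Replace",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionAuroraCluster*",
        },
    ]
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.set_stack_policy(
    StackName="aurora-database-stack",
    StackPolicyBody=json.dumps(stack_policy),
)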


Question 5

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?



Answer : A

Correct Answer: A

Explanation from Amazon documents:

Amazon Aurora PostgreSQL is a fully managed relational database service that is compatible with PostgreSQL. Aurora PostgreSQL offers up to three times the throughput of standard PostgreSQL, along with high availability, scalability, security, and durability. Aurora PostgreSQL also supports Babelfish, a capability that lets Aurora understand T-SQL and the SQL Server wire protocol (TDS) used by applications written for Microsoft SQL Server. Babelfish allows you to migrate SQL Server databases to Aurora PostgreSQL with minimal or no code changes and to keep running complex T-SQL queries and stored procedures on Aurora PostgreSQL.

Migrating the database to Amazon Aurora PostgreSQL and turning on Babelfish will meet the requirements of minimizing database server maintenance and operating costs, and minimizing the need to rewrite code as part of the migration effort. This solution will allow the company to benefit from the performance, reliability, and cost-efficiency of Aurora PostgreSQL, while preserving the compatibility and functionality of SQL Server. The company will also avoid the hassle and expense of managing and licensing SQL Server on premises or on AWS.

Therefore, option A is the correct solution to meet the requirements. Option B is not suitable because Amazon S3 is an object storage service that is not designed for OLTP workloads, and Amazon Redshift Spectrum, which lets Amazon Redshift query data in S3, is not compatible with SQL Server or T-SQL. Option C is not optimal because Amazon RDS for SQL Server, while a managed service that supports SQL Server, still carries SQL Server licensing costs and does not offer the same performance, scalability, or cost savings as Aurora PostgreSQL; Kerberos authentication is a security feature that does not affect the migration effort or the operating costs. Option D is not suitable because Amazon EMR is a big data processing service that runs Apache Hadoop and Spark clusters, not relational databases; EMR does not support SQL Server or T-SQL and is not optimized for OLTP workloads.
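As a hedged sketch of what turning on Babelfish involves (the cluster names, engine version, and credentials below are hypothetical), Babelfish is enabled by setting the rds.babelfish_status parameter to on in a custom Aurora PostgreSQL cluster parameter group that is attached when the cluster is created; the cluster then also accepts SQL Server clients over the TDS port.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Custom cluster parameter group with Babelfish turned on.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    DBParameterGroupFamily="aurora-postgresql15",
    Description="Aurora PostgreSQL parameters with Babelfish enabled",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)

# Create the Aurora PostgreSQL cluster with that parameter group.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    EngineVersion="15.4",
    MasterUsername="postgres",
    MasterUserPassword="change-me-example",
    DBClusterParameterGroupName="babelfish-enabled",
)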


Question 6

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates.

Which stack policy will meet this requirement?

Options A through D are stack policy documents that are not reproduced in this text.



Answer : A
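Because the answer choices are not reproduced above, the following is only an illustrative sketch of the general pattern such a policy follows, with hypothetical values: allow update actions on every resource in the stack, and deny all update actions on the single resource whose logical ID is ProductionDatabase. It could be applied with set_stack_policy as in the earlier sketch.

# Illustrative stack policy pattern only; not the literal text of option A.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
    ]
}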


Question 7

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?



Answer : D

Correct Answer: D

Explanation from Amazon documents:

Amazon DynamoDB offers two table classes: DynamoDB Standard and DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). The Standard-IA table class is designed for tables that store data that is accessed infrequently, such as application logs, old social media posts, order history, and archived metadata. It lowers the per-GB storage cost by up to 60 percent compared with the Standard table class while providing the same single-digit millisecond read and write latency, availability, and durability as any other DynamoDB table.

The news portal's 120 GB of post and comment metadata is rarely read or updated, so storage rather than throughput dominates the cost. Creating the table with the DynamoDB Standard-IA table class minimizes the storage cost while still serving the occasional lookups with single-digit millisecond latency, which satisfies both requirements.

Therefore, option D is the most cost-effective solution for the news portal's use case. Option A is not cost-effective because DynamoDB reserved capacity discounts provisioned throughput for heavily used tables with predictable traffic; it does not reduce the per-GB storage cost of a rarely accessed table. Option B is not suitable because Amazon ElastiCache for Redis is an in-memory data store; keeping 120 GB of rarely accessed data in memory would cost far more than storing it in DynamoDB, and ElastiCache is intended for caching frequently accessed data rather than durable long-term storage. Option C does not meet the latency requirement: Amazon S3 Standard-IA is inexpensive object storage and Amazon Athena can query it with SQL, but S3 GET requests take tens of milliseconds and Athena queries typically take seconds, so occasional lookups could not be served with single-digit millisecond latency on average.
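A minimal Boto3 sketch of this approach follows; the table name and key schema are hypothetical. The TableClass parameter selects the Standard-IA table class, and lookups remain ordinary low-latency GetItem calls.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Metadata table using the Standard-IA table class and on-demand capacity.
dynamodb.create_table(
    TableName="post-metadata",
    AttributeDefinitions=[{"AttributeName": "post_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "post_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    TableClass="STANDARD_INFREQUENT_ACCESS",
)

# An occasional lookup is still a single-digit millisecond key-value read.
item = dynamodb.get_item(
    TableName="post-metadata",
    Key={"post_id": {"S": "article-12345"}},
)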

