Google Professional Cloud DevOps Engineer Exam Practice Test

Page: 1 / 14
Total 166 questions
Question 1

You deployed an application into a large Standard Google Kubernetes Engine (GKE) cluster. The application is stateless, and multiple pods run at the same time. Your application receives inconsistent traffic. You need to ensure that the user experience remains consistent regardless of changes in traffic, and that the resource usage of the cluster is optimized.

What should you do?



Answer : B


Question 2

Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology.

Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?



Answer : A

The best option to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices, is to install and configure Config Connector in Google Kubernetes Engine (GKE).

Config Connector is a Kubernetes add-on that allows you to manage Google Cloud resources through Kubernetes. You can use Config Connector to create, update, and delete Google Cloud resources using Kubernetes manifests. Config Connector also reconciles the state of the Google Cloud resources with the desired state defined in the manifests, ensuring that there is no configuration drift [1].
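For example, a developer could declare a Cloud Storage bucket with a manifest like the following sketch (the bucket name and namespace are placeholders, and the bucket name must be globally unique):

apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-team-example-bucket
  namespace: my-project
spec:
  location: US

Applying this manifest with kubectl creates the bucket, and Config Connector continuously reconciles the live resource against it.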

Config Connector follows the GitOps methodology, as it allows you to store your infrastructure configuration in a Git repository, and use tools such as Anthos Config Management or Cloud Source Repositories to sync the configuration to your GKE cluster. This way, you can use Git as the source of truth for your infrastructure, and enable reviewable and version-controlled workflows [2].

Config Connector can be installed and configured in GKE using either the Google Cloud Console or the gcloud command-line tool. You need to enable the Config Connector add-on for your GKE cluster, and create a Google Cloud service account with the necessary permissions to manage the Google Cloud resources. You also need to create a Kubernetes namespace for each Google Cloud project that you want to manage with Config Connector [3].
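As a sketch, assuming an existing Standard cluster with Workload Identity enabled, the add-on can be turned on with a single command (the cluster name and zone are placeholders):

gcloud container clusters update my-cluster --zone us-central1-a --update-addons ConfigConnector=ENABLED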

By using Config Connector in GKE, you can give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You can also benefit from the features and advantages of Kubernetes, such as declarative configuration, observability, and portability [4].


1: Overview | Config Connector Documentation | Google Cloud

2: Deploy Anthos on GKE with Terraform part 1: GitOps with Config Sync | Google Cloud Blog

3: Installing Config Connector | Config Connector Documentation | Google Cloud

4: Why use Config Connector? | Config Connector Documentation | Google Cloud

Question 3

You are deploying an application to Cloud Run. The application requires a password to start. Your organization requires that all passwords are rotated every 24 hours, and your application must have the latest password. You need to deploy the application with no downtime. What should you do?



Answer : B

The correct answer is B, Store the password in Secret Manager and mount the secret as a volume within the application.

Secret Manager is a service that allows you to securely store and manage sensitive data such as passwords, API keys, certificates, and tokens. You can use Secret Manager to rotate your secrets automatically or manually, and access them from your Cloud Run applications [1].
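As an illustration (the secret name is a placeholder), creating a secret and adding a new version when the password rotates looks like this:

gcloud secrets create app-password --replication-policy=automatic
echo -n 'new-password-value' | gcloud secrets versions add app-password --data-file=-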

There are two ways to use secrets from Secret Manager in Cloud Run:

As environment variables: You can set environment variables that point to secrets in Secret Manager. Cloud Run will resolve the secrets at runtime and inject them into the environment of your application. However, this method has some limitations, such as:

The environment variables are resolved only when a new instance starts, so running instances do not pick up a new secret version until they are replaced.

The environment variables are visible in plain text in the Cloud Console and the Cloud SDK, which may expose sensitive information.

The environment variables are limited to 4 KB of data, which may not be enough for some secrets [2].

As file system volumes: You can mount secrets from Secret Manager as files in a volume within your application. Cloud Run will create a tmpfs volume and write the secrets as files in it. This method has some advantages, such as:

When the volume references the latest version, Cloud Run refreshes the mounted files periodically, so the application can read new secret versions without being redeployed.

The files are not visible in the Cloud Console or the Cloud SDK, which provides better security.

The files can store up to 64 KB of data, which allows for larger secrets [3].

Therefore, for your use case, it is better to use the second method and mount the secret as a file system volume within your application. This way, you can ensure that your application has the latest password, and you can deploy it with no downtime.

To mount a secret as a file system volume in Cloud Run, you can use the following command:

gcloud run deploy SERVICE --image IMAGE_URL --update-secrets=/path/to/file=secretName:version

where:

SERVICE is the name of your Cloud Run service.

IMAGE_URL is the URL of your container image.

/path/to/file is the path where you want to mount the secret file in your application.

secretName is the name of your secret in Secret Manager.

version is the version of the secret to mount (for example, latest).

You can also use the Cloud Console to mount secrets as file system volumes. For more details, see Mounting secrets from Secret Manager.
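For example, with hypothetical service, image, and secret names, the deployment could look like this:

gcloud run deploy my-app --image gcr.io/my-project/my-app --update-secrets=/etc/secrets/password=app-password:latest

Pointing the version at latest is what allows the application to read the rotated password from the mounted file without a redeploy.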


1: Overview | Secret Manager Documentation | Google Cloud

2: Using secrets as environment variables | Cloud Run Documentation | Google Cloud

3: Mounting secrets from Secret Manager | Cloud Run Documentation | Google Cloud

Question 4

You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do?



Answer : D

Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Google Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket; execute a build to your specifications; and produce artifacts such as Docker containers or Java archives [1]. Cloud Build can also run integration tests as part of your build steps [2].

You can use Cloud Build to run tests in a specific folder by specifying the path to the folder in the dir field of your build step [3]. For example, if you have a folder named tests that contains your integration tests, you can use the following build step to run them:

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['test', '-v']
  dir: 'tests'

You can use Cloud Build to trigger builds for every GitHub pull request by using the Cloud Build GitHub app. The app allows you to automatically build on Git pushes and pull requests and view your build results on GitHub and in the Google Cloud console [4]. You can configure the app to run builds on specific branches, tags, or paths [5]. For example, if you want to run builds on pull requests that target the master branch, you can use the following trigger configuration:

includedFiles:
- '**'
name: 'pull-request-trigger'
github:
  name: 'my-repo'
  owner: 'my-org'
  pullRequest:
    branch: '^master$'
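The same trigger can also be created from the command line; a sketch with placeholder repository details:

gcloud builds triggers create github --name=pull-request-trigger --repo-name=my-repo --repo-owner=my-org --pull-request-pattern='^master$' --build-config=cloudbuild.yaml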

Using Cloud Build to run tests in a specific folder and trigger builds for every GitHub pull request is a good way to continuously test your feature branch and ensure that all code is tested before changes are accepted. This way, you can catch any errors or bugs early and prevent them from affecting the main branch.

Using a Jenkins server for CI/CD pipelines is not a bad option, but it would require more setup and maintenance than using Cloud Build, which is fully managed by Google Cloud. Periodically running all tests in the feature branch is not as efficient as running tests for every pull request, as it may delay the feedback loop and increase the risk of conflicts or failures.

Using Cloud Build to run the tests after a pull request is merged is not a good practice, as it may introduce errors or bugs into the main branch that could have been prevented by testing before merging.

Asking the pull request reviewers to run the integration tests before approving the code is not a reliable way of ensuring code quality, as it depends on human intervention and may be prone to errors or oversights.


1: Overview | Cloud Build Documentation | Google Cloud

2: Running integration tests | Cloud Build Documentation | Google Cloud

3: Build configuration overview | Cloud Build Documentation | Google Cloud

4: Building repositories from GitHub | Cloud Build Documentation | Google Cloud

5: Creating GitHub app triggers | Cloud Build Documentation | Google Cloud

Question 5

Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?



Answer : C


Question 6

You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do?

Choose 2 answers



Answer : A, E

The best options for reducing toil in the pipeline and minimizing the amount of time it takes to complete an end-to-end deployment are to create a trigger that notifies the required team to complete the next step when manual intervention is required, and to automate promotion approvals from the development environment to the test environment.

A trigger is a resource that initiates an action when an event occurs, such as a code change, a schedule, or a manual request. You can create a trigger that notifies the required team when manual intervention is required by using Cloud Build or Cloud Functions. This reduces waiting time and human error in the pipeline.

A promotion approval is a process that allows you to approve or reject a deployment from one environment to another, such as from development to test. You can automate promotion approvals from the development environment to the test environment by using Google Cloud Deploy or Cloud Build. This speeds up the deployment process and avoids manual steps.
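As a sketch of what such automation can run (the release, pipeline, target, and region names are placeholders), Google Cloud Deploy exposes promotion as a single command that a script or trigger can invoke once the development rollout succeeds:

gcloud deploy releases promote --release=my-release --delivery-pipeline=my-pipeline --region=us-central1 --to-target=test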


Question 7

Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?



Answer : C

The best option for ensuring that third parties can upload their data securely, while minimizing costs and processing the data as quickly as possible, is to provide a Cloud Storage bucket with appropriate Identity and Access Management (IAM) access so that third parties can upload batches of data; create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger; write code so that the function can scale up a Compute Engine autoscaling managed instance group; and use an image pre-loaded with the data processing software that terminates the instances when processing completes.

A Cloud Storage bucket is a resource that allows you to store and access data in Google Cloud, so third parties can upload batches of data securely and conveniently. Appropriate IAM access to the bucket, through roles and policies, controls who can read or write data.

A Cloud Function is a serverless function that executes code in response to an event, such as a change in a Cloud Storage bucket. A google.storage.object.finalize trigger fires when a new object is created or an existing object is overwritten in a bucket, so the function runs whenever a new batch of data is uploaded.

The function can scale up a Compute Engine autoscaling managed instance group, which is a group of VM instances that automatically adjusts its size based on load or custom metrics. Because the image is pre-loaded with the data processing software and terminates the instances when processing completes, instances only run while there is data to process. This minimizes costs while ensuring that the data is processed as quickly as possible.
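As an illustration using first-generation Cloud Functions (the function, bucket, entry point, and region names are placeholders), the finalize trigger can be wired up when deploying the function:

gcloud functions deploy process-batch --runtime=python311 --region=us-central1 --entry-point=on_upload --trigger-bucket=third-party-uploads

The --trigger-bucket flag registers the function for google.storage.object.finalize events on that bucket.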

