Google Professional Cloud Security Engineer Exam Practice Test

Question 1

Your organization is building a real-time recommendation engine using ML models that process live user activity data stored in BigQuery and Cloud Storage. Each new model developed is saved to Artifact Registry. This new system deploys models to Google Kubernetes Engine and uses Pub/Sub for message queues. Recent industry news has been reporting attacks exploiting ML model supply chains. You need to enhance the security in this serverless architecture, specifically against risks to the development and deployment pipeline. What should you do?



Answer : B

To enhance the security of your machine learning (ML) model supply chain within a serverless architecture, it's crucial to implement measures that protect both the development and deployment pipelines.

Option A: While limiting external dependencies and rotating encryption keys are good security practices, they do not directly address the risks associated with the ML model supply chain.

Option B: Implementing container image vulnerability scanning during development and pre-deployment helps identify and mitigate known vulnerabilities in your container images. Enforcing Binary Authorization ensures that only trusted and verified images are deployed in your environment. This combination directly strengthens the security of the ML model supply chain by validating the integrity of container images before deployment.

Option C: Sanitizing training data and applying role-based access controls are important security practices but do not specifically safeguard the deployment pipeline against compromised container images.

Option D: While strict firewall rules and intrusion detection systems enhance network security, they do not specifically address vulnerabilities within the container images or the deployment process.

Therefore, Option B is the most effective approach, as it directly addresses the security of the development and deployment pipeline by ensuring that only vetted and secure container images are used in your environment.
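As an illustration only (the cluster name and region are placeholders, and the Binary Authorization flag assumes a current gcloud release), both controls can be enabled from the command line:

# Enable automatic vulnerability scanning of images pushed to Artifact Registry
gcloud services enable containerscanning.googleapis.com

# Enforce the project's Binary Authorization policy on an existing GKE cluster
gcloud container clusters update CLUSTER_NAME --region=REGION --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE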


Container Scanning Overview

Binary Authorization Overview

Question 2

Your organization wants to publish yearly reports of your website usage analytics. You must ensure that no data with personally identifiable information (PII) is published by using the Cloud Data Loss Prevention (Cloud DLP) API. Data integrity must be preserved. What should you do?



Answer : B

To ensure that no personally identifiable information (PII) is published in your yearly website usage analytics reports while preserving data integrity, the Cloud Data Loss Prevention (Cloud DLP) API can be utilized to identify and transform PII within your datasets.

Option A: Encrypting PII does not remove it from the reports; it merely obscures it, which may not be sufficient for compliance or privacy requirements.

Option B: Discovering and transforming PII ensures that sensitive information is either masked, tokenized, or otherwise obfuscated, effectively removing PII from the reports while maintaining the overall structure and utility of the data.

Option C: Detecting and deleting PII could lead to loss of valuable data and may disrupt the integrity of the reports.

Option D: Quarantining PII data implies isolating it, which doesn't address the need to publish reports without PII.

Therefore, Option B is the most appropriate approach, as it leverages the Cloud DLP API to identify and transform PII, ensuring that the published reports are free from sensitive information while preserving data integrity.
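As a minimal sketch (the project ID and sample record are placeholders), a content:deidentify request to the Cloud DLP API replaces detected PII with its infoType name while leaving the rest of the record intact:

curl -s -X POST "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "item": {"value": "Contact jane.doe@example.com for details"},
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'

The response contains the same record with the email address replaced by [EMAIL_ADDRESS], which removes the PII while preserving the structure of the data.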


Cloud DLP Overview

De-identifying Sensitive Data

Question 3

You have just created a new log bucket to replace the _Default log bucket. You want to route all log entries that are currently routed to the _Default log bucket to this new log bucket in the most efficient manner. What should you do?



Answer : D

In Google Cloud's Logging service, log entries are automatically routed to the _Default log bucket unless configured otherwise. When you create a new log bucket and intend to redirect all log entries from the _Default bucket to this new bucket, the most efficient approach is to modify the existing _Default sink to point to the new log bucket.

Option A: Creating a new user-defined sink with filters replicated from the _Default sink is redundant and may lead to configuration complexities.

Option B: Implementing exclusion filters on the _Default sink and then creating a new sink introduces unnecessary steps and potential for misconfiguration.

Option C: Disabling the _Default sink would stop all log routing to it, but creating a new sink to replicate its functionality is inefficient.

Option D: Editing the _Default sink to change its destination to the new log bucket ensures a seamless transition of log routing without additional configurations.

Therefore, Option D is the most efficient and straightforward method to achieve the desired log routing.
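As a sketch (the project ID, location, and bucket ID are placeholders), changing the destination is a single command; the sink's filter is left untouched, so all entries previously routed to the _Default bucket now flow to the new bucket:

gcloud logging sinks update _Default logging.googleapis.com/projects/PROJECT_ID/locations/LOCATION/buckets/BUCKET_ID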


Routing and Storage Overview

Configure Default Log Router Settings

Question 4

A batch job running on Compute Engine needs temporary write access to a Cloud Storage bucket. You want the batch job to use the minimum permissions necessary to complete the task. What should you do?



Answer : B

To provide temporary write access to a Cloud Storage bucket with the minimum permissions necessary, you should:

Identify the Compute Engine instance's default service account: Each Compute Engine instance has a default service account that is used to interact with other Google Cloud services.

Assign the storage.objectCreator role: This predefined IAM role grants permissions to create objects in a Cloud Storage bucket, which is sufficient for temporary write access. It does not grant permissions to read or delete objects, thus adhering to the principle of least privilege.

Avoid using full permissions or long-lived keys: Options A and C suggest using broader permissions than necessary or embedding long-lived keys, which could pose a security risk if compromised.

Service account impersonation (Option D) is not necessary for this task and would be more appropriate for scenarios where you need to assume a different identity with different permissions.
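As a minimal sketch (the bucket name and service account email are placeholders), the binding can be added just before the batch job runs and revoked once it completes, keeping the access temporary:

# Grant object-creation rights only, for the duration of the job
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" --role="roles/storage.objectCreator"

# Revoke the binding after the job finishes
gcloud storage buckets remove-iam-policy-binding gs://BUCKET_NAME --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" --role="roles/storage.objectCreator"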


Google Cloud documentation on IAM roles for Cloud Storage, which lists the storage.objectCreator role as providing permissions to create objects without granting full administrative access to the bucket.

Best practices for access control in Cloud Storage recommend using the least privilege necessary and avoiding the use of long-lived service account keys.

Question 5

Your organization wants to be compliant with the General Data Protection Regulation (GDPR) on Google Cloud. You must implement data residency and operational sovereignty in the EU.

What should you do?

Choose 2 answers



Answer : A, C

https://cloud.google.com/architecture/framework/security/data-residency-sovereignty#manage_your_operational_sovereignty

To ensure compliance with GDPR and implement data residency and operational sovereignty in the EU, the following steps can be taken:

Limit Physical Location of Resources: Use the Organization Policy Service to enforce the resource locations constraint. This ensures that all new resources are created within the specified regions (EU in this case).

Configure Organization Policy: Set up an organization policy that restricts the locations where new resources can be created. This is done through the Google Cloud Console or via the gcloud command-line tool.

Example:

gcloud resource-manager org-policies allow gcp.resourceLocations europe-west1 europe-west2 --organization=YOUR_ORG_ID

Key Access Justifications (KAJ): Use Key Access Justifications to limit Google personnel's access to encryption keys based on attributes like their geographic location or citizenship.

Set Up KAJ: Implement KAJ policies to ensure that only authorized personnel within the EU can access encryption keys.


Organization Policy Service

Key Access Justifications

Question 6

You are developing a new application that uses exclusively Compute Engine VMs. Once a day, this application will execute five different batch jobs. Each of the batch jobs requires a dedicated set of permissions on Google Cloud resources outside of your application. You need to design a secure access concept for the batch jobs that adheres to the least-privilege principle.

What should you do?



Answer : B


Question 7

You are setting up a new Cloud Storage bucket in your environment that is encrypted with a customer-managed encryption key (CMEK). The CMEK is stored in Cloud Key Management Service (KMS) in project "prj-a", and the Cloud Storage bucket will use project "prj-b". The key is backed by a Cloud Hardware Security Module (HSM) and resides in the region europe-west3. Your storage bucket will be located in the region europe-west1. When you create the bucket, you cannot access the key, and you need to troubleshoot why.

What has caused the access issue?



Answer : D

When you use a customer-managed encryption key (CMEK) to secure a Cloud Storage bucket, the key and the bucket must be in the same location. In this case, the key resides in europe-west3 while the bucket is in europe-west1, which is why you cannot access the key. Note that the key living in a different project ("prj-a") is not the problem: cross-project key use is supported as long as the Cloud Storage service agent holds the roles/cloudkms.cryptoKeyEncrypterDecrypter role on the key.
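As a quick check (the key ring and key names are placeholders), you can confirm where the key actually resides before choosing the bucket's region:

gcloud kms keys list --keyring=KEYRING_NAME --location=europe-west3 --project=prj-a

If the key exists only in europe-west3, the bucket must be created in europe-west3, or a new key must be created in europe-west1, for the CMEK configuration to succeed.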

