Your organization recently re-architected your cloud environment to use Network Connectivity Center. However, an error occurred when you tried to add a new VPC named vpc-dev as a spoke. The error indicated that there was an issue with an existing spoke and the IP space of a VPC named vpc-pre-prod. You must complete the migration quickly and efficiently. What should you do?
Answer : A
The most efficient way to resolve the conflict is to temporarily remove the conflicting vpc-pre-prod spoke, add the vpc-dev spoke, and then re-add vpc-pre-prod. This ensures that the migration happens quickly without the need to change IP ranges or delete resources.
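A rough sketch of that sequence with the gcloud CLI (the hub, spoke, and project names below are hypothetical placeholders):

```
# Temporarily remove the conflicting spoke.
gcloud network-connectivity spokes delete vpc-pre-prod-spoke --global

# Add the new VPC spoke for vpc-dev.
gcloud network-connectivity spokes linked-vpc-network create vpc-dev-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/vpc-dev \
    --global

# Re-add the vpc-pre-prod spoke once vpc-dev is attached.
gcloud network-connectivity spokes linked-vpc-network create vpc-pre-prod-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/vpc-pre-prod \
    --global
```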
You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT.
What is the most likely cause of this problem?
Answer : B
Your organization is migrating workloads from AWS to Google Cloud. Because a particularly critical workload will take longer to migrate, you need to set up Google Cloud CDN and point it to the existing application at AWS. What should you do?
Answer : B
To configure Cloud CDN for an application hosted outside of Google Cloud (e.g., in AWS), you need to use an internet network endpoint group (NEG). An internet NEG allows you to point to external endpoints using their FQDN or IP address. Cloud CDN works with external HTTP(S) Load Balancers, and you enable CDN on the backend service associated with the load balancer. A Network Load Balancer (passthrough) does not support Cloud CDN.
Exact Extract:
'To enable Cloud CDN for content hosted outside of Google Cloud, you must use an external HTTP(S) Load Balancer with an internet network endpoint group (NEG).'
'An internet NEG specifies one or more external endpoints that can be reached by an external HTTP(S) Load Balancer. You can specify endpoints using an IP address and port, or a fully qualified domain name (FQDN) and port.'
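A minimal sketch of this setup using the gcloud CLI, assuming the AWS-hosted application is reachable at a hypothetical FQDN app.example.com on port 443 (all resource names below are placeholders):

```
# Global internet NEG that points at the external origin by FQDN.
gcloud compute network-endpoint-groups create aws-origin-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port

gcloud compute network-endpoint-groups update aws-origin-neg \
    --global \
    --add-endpoint="fqdn=app.example.com,port=443"

# Backend service for the external Application Load Balancer with Cloud CDN enabled.
gcloud compute backend-services create aws-origin-backend \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTPS \
    --enable-cdn

gcloud compute backend-services add-backend aws-origin-backend \
    --global \
    --network-endpoint-group=aws-origin-neg \
    --global-network-endpoint-group
```

The load balancer's URL map, target proxy, and forwarding rule would still need to be created and pointed at this backend service.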
Your company recently migrated to Google Cloud in a single region. You configured separate Virtual Private Cloud (VPC) networks for two departments: Department A and Department B. Department A has requested access to resources that are part of Department B's VPC. You need to configure the traffic from private IP addresses to flow between the VPCs using multi-NIC virtual machines (VMs) to meet security requirements. Your configuration also must:
* Support both TCP and UDP protocols
* Provide fully automated failover
* Include health-checks
* Require minimal manual intervention on the client VMs
Which approach should you take?
Answer : D
The correct answer is D. Create an instance template and a managed instance group. Configure two separate internal TCP/UDP load balancers, one for each protocol (TCP and UDP), and configure the client VMs to use the internal load balancers' virtual IP addresses.
This answer meets all of the stated requirements: the managed instance group with health checks provides fully automated failover of the NVA VMs, the two internal passthrough load balancers cover both TCP and UDP, and the client VMs only need to point at the load balancers' virtual IP addresses, which keeps manual intervention on the clients to a minimum.
The other options are not correct because:
Option A is not suitable. Creating the VMs in the same zone does not provide high availability or failover. Using static routes with IP addresses as next hops requires manual intervention when NVAs are added or removed.
Option B is not optimal. Creating the VMs in different zones improves availability, but static routes with instance names as next hops do not provide automated failover and require manual intervention when NVAs are added or removed.
Option C is not optimal. Creating an instance template and a managed instance group provides high availability and reliability, but a single internal load balancer forwarding rule is limited to one protocol (TCP or UDP), so it cannot carry both protocols on its own.
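A rough gcloud sketch of the two-load-balancer layout described in option D; the managed instance group (nva-mig), region, VPC, and subnet names are placeholders, and the client VMs would be configured to send traffic to the two forwarding rules' internal IP addresses:

```
# Regional TCP health check shared by both backend services.
gcloud compute health-checks create tcp nva-hc --region=us-central1 --port=80

# One internal passthrough load balancer per protocol, both backed by the same MIG.
for PROTO in TCP UDP; do
  NAME="nva-$(echo "$PROTO" | tr '[:upper:]' '[:lower:]')"

  gcloud compute backend-services create "${NAME}-bs" \
      --load-balancing-scheme=INTERNAL \
      --protocol="$PROTO" \
      --region=us-central1 \
      --health-checks=nva-hc \
      --health-checks-region=us-central1

  gcloud compute backend-services add-backend "${NAME}-bs" \
      --region=us-central1 \
      --instance-group=nva-mig \
      --instance-group-region=us-central1

  gcloud compute forwarding-rules create "${NAME}-ilb" \
      --load-balancing-scheme=INTERNAL \
      --ip-protocol="$PROTO" \
      --ports=ALL \
      --region=us-central1 \
      --backend-service="${NAME}-bs" \
      --network=dept-b-vpc \
      --subnet=dept-b-subnet
done
```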
You are increasing your usage of Cloud VPN between on-premises and GCP, and you want to support more traffic than a single tunnel can handle. You want to increase the available bandwidth using Cloud VPN.
What should you do?
You are responsible for connectivity between AWS, Google Cloud, and an on-premises data center. Soon, the application team will deploy a data replication service that will move approximately 900 TB of data between Google Cloud and AWS daily. This data is sensitive and must be encrypted in transit. Your data center already has connections to both AWS and Google Cloud through 10 Gbps circuits. You need to configure additional connectivity between these environments and ensure the highest performance and lowest latency to meet business requirements. You also need to keep the existing connectivity topology to the on-premises data center the same. What should you do?
Answer : A
The core requirement is to move a massive amount of sensitive data (900 TB daily) directly between Google Cloud and AWS with highest performance, lowest latency, and in-transit encryption, while maintaining existing on-premises connectivity.
Option A directly addresses this by recommending Cross-Cloud Interconnect with 100 Gbps circuits between AWS and Google Cloud. Cross-Cloud Interconnect is designed for high-throughput, low-latency connectivity between different cloud providers. The crucial part for sensitive data and encryption is 'configuring IPsec encryption on both sides of the connection,' as Cross-Cloud Interconnect itself provides a private path but not inherent encryption. Cloud Router and BGP are essential for dynamic route exchange. This option focuses on the direct cloud-to-cloud path for the high volume data transfer.
Options B and C involve upgrading the existing connections to the on-premises data center and routing all traffic through it. While this could work, it adds an unnecessary hop and likely higher latency for direct cloud-to-cloud traffic, making it less optimal for 'highest performance and lowest latency' between clouds. Additionally, removing the existing 10 Gbps circuits is not necessary and might impact the existing topology if not done carefully.
Option D suggests MACsec, which provides Layer 2 encryption. While good for physical security, for data replication services with sensitive data, IPsec (Layer 3 encryption) is more commonly used and flexible for end-to-end encryption across a routed network, and is typically preferred for data integrity and confidentiality over an IP network. Also, MACsec requires specific hardware support and is typically implemented at the interconnect termination points, not necessarily end-to-end for an application. Given the sensitive nature of the data and the large volume, IPsec provides the necessary transport-level encryption.
Exact Extract:
'Cross-Cloud Interconnect enables direct connectivity between your Google Cloud VPC networks and other cloud provider networks. It provides high-bandwidth, low-latency connections, ideal for large-scale data transfers between clouds.'
'For sensitive data, you can implement IPsec VPN tunnels over Cross-Cloud Interconnect connections to provide encryption in transit. This ensures data confidentiality and integrity over the dedicated interconnect.'
'Cloud Router dynamically exchanges routes between your Google Cloud VPC network and your other cloud network over the Cross-Cloud Interconnect connection using BGP.'
Reference: Google Cloud Cross-Cloud Interconnect Documentation - Overview, Encryption options for hybrid connectivity
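A hedged gcloud sketch of the Google Cloud side of this design; the interconnect, router, network, and region names are placeholders, and the Cross-Cloud Interconnect connections to AWS are assumed to already be provisioned:

```
# Cloud Router that exchanges routes over the interconnect attachments using BGP.
gcloud compute routers create cci-router \
    --network=prod-vpc \
    --region=us-east4 \
    --asn=65010

# VLAN attachment on the Cross-Cloud Interconnect; --encryption=IPSEC allows
# HA VPN (IPsec) tunnels to run over the attachment for in-transit encryption.
gcloud compute interconnects attachments dedicated create aws-attachment-1 \
    --interconnect=cross-cloud-ic-aws-1 \
    --router=cci-router \
    --region=us-east4 \
    --encryption=IPSEC
```

The HA VPN gateways, tunnels, and the matching IPsec configuration on the AWS side would then be layered on top of the attachments.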
You manage two VPCs: VPC1 and VPC2, each with resources spread across two regions. You connected the VPCs with HA VPN in both regions to ensure redundancy. You've observed that when one VPN gateway fails, workloads that are located within the same region but different VPCs lose communication with each other. After further debugging, you notice that VMs in VPC2 receive traffic but their replies never get to the VMs in VPC1. You need to quickly fix the issue. What should you do?
Answer : C
The problem description indicates that VMs in VPC2 receive traffic but their replies don't reach VPC1, especially when a VPN gateway fails. This strongly suggests an asymmetric routing issue, where VPC2's routing table might not be aware of all necessary routes to send return traffic to VPC1, particularly in a multi-region setup with failover. By default, VPC networks are in regional dynamic routing mode, meaning they only learn routes from Cloud Routers in the same region. To ensure that routes learned from one region (where the active VPN tunnel might be) are available globally across the VPC, you need to enable global dynamic routing mode in the VPC that is experiencing the return traffic issue (VPC2 in this case). This allows VPC2 to learn and apply routes from Cloud Routers in all regions, ensuring that even if a VPN tunnel fails in one region, the routes learned from the active tunnel in another region are still available for return traffic.
Exact Extract:
'A VPC network's dynamic routing mode controls whether routes learned by Cloud Routers in one region are available to VMs in other regions. By default, VPC networks are in regional dynamic routing mode, which means Cloud Routers in a region only advertise routes to and learn routes from other Cloud Routers in the same region. This can lead to asymmetric routing issues in multi-region deployments.'
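For reference, the dynamic routing mode of a VPC can be switched with a single gcloud command (vpc2 is the network name from the scenario):

```
# Change VPC2 from regional to global dynamic routing so routes learned by a
# Cloud Router in one region are usable by resources in every region.
gcloud compute networks update vpc2 --bgp-routing-mode=global
```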