IBM C1000-130 IBM Cloud Pak for Integration V2021.2 Administration Exam Practice Test

Page: 1 / 14
Total 113 questions
Question 1

The following deployment topology has been created for an API Connect deployment by a client.

Which two statements are true about the topology?



Answer : A, E

IBM API Connect, as part of IBM Cloud Pak for Integration (CP4I), supports various deployment topologies, including Active/Active and Active/Passive configurations across multiple data centers. Let's analyze the provided topology carefully:

Backup Strategy (Option A - Correct)

The API Manager and Developer Portal components are stateful and require regular backups.

Since the topology spans two sites, these backups should be replicated to the second site to ensure disaster recovery (DR) and high availability (HA).

This aligns with IBM's best practices for multi-data center deployment of API Connect.
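For illustration only, the following is a minimal sketch of how scheduled Management-subsystem backups might be declared in the API Connect management custom resource, assuming API Connect v10-style databaseBackup settings; the endpoint, path, Secret name, and schedule are placeholders, and the exact field names should be verified against the product documentation:

spec:
  databaseBackup:
    protocol: objstore                 # back up to S3-compatible object storage
    host: s3.example.com/region        # placeholder object-storage endpoint
    path: apic-backups/site1           # placeholder bucket/path; contents replicated to site 2
    credentials: mgmt-backup-secret    # placeholder Secret holding the access keys
    schedule: "0 1 * * *"              # assumed daily backup at 01:00

A similar backup definition, and replication of the resulting backup data to the second site, is needed for the Developer Portal subsystem.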

Deployment Mode for API Manager & Portal (Option B - Incorrect)

The question suggests that API Manager and Portal are deployed across two sites.

If it were an Active/Passive deployment, only one site would actively handle requests while the second remained idle.

However, in IBM's recommended architectures, API Manager and Portal are usually deployed in an Active/Active setup with proper failover mechanisms.

Cluster Type (Option C - Incorrect)

A distributed Kubernetes cluster across multiple sites would require an underlying multi-cluster federation or synchronization.

IBM API Connect is usually deployed on separate Kubernetes clusters per data center, rather than a single distributed cluster.

Therefore, this topology does not represent a distributed Kubernetes cluster across sites.

Failover Behavior (Option D - Incorrect)

Kubernetes cannot automatically detect failures in Data Center 1 and migrate services to Data Center 2 unless specifically configured with multi-cluster HA policies and disaster recovery.

Instead, IBM API Connect HA and DR mechanisms handle failover through manual or automated orchestration rather than through native Kubernetes services.

Gateway and Analytics Deployment (Option E - Correct)

API Gateway and Analytics services are typically deployed in Active/Active mode for high availability and load balancing.

This means that traffic is dynamically routed to the available instance in both sites, ensuring uninterrupted API traffic even if one data center goes down.

Final Answer:

A. Regular backups of the API Manager and Portal have to be taken, and these backups should be replicated to the second site.

E. This represents an Active/Active deployment for Gateway and Analytics services.


IBM API Connect Deployment Topologies

IBM Documentation -- API Connect Deployment Models

High Availability and Disaster Recovery in IBM API Connect

IBM API Connect HA & DR Guide

IBM Cloud Pak for Integration Architecture Guide

IBM Cloud Pak for Integration Docs

Question 2

OpenShift Pipelines can be used to automate the build of custom images in a CI/CD pipeline and they are based on Tekton.

What type of component is used to create a Pipeline?



Answer : B

OpenShift Pipelines, which are based on Tekton, use various components to define and execute CI/CD workflows. The fundamental building block for creating a Pipeline in OpenShift Pipelines is a Task.

Key Tekton Components:

Task (Correct Answer)

A Task is the basic unit of work in Tekton.

Each Task defines a set of steps (commands) that are executed in containers.

Multiple Tasks are combined into a Pipeline to form a CI/CD workflow.

Pipeline (uses multiple Tasks)

A Pipeline is a collection of Tasks that define the entire CI/CD workflow.

Each Task in the Pipeline runs in sequence or in parallel as specified.
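As a brief illustration, the following shows a minimal Task and a Pipeline that references it; the names, parameter, and image are placeholders, not part of the question:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image                    # placeholder Task name
spec:
  params:
    - name: image-name
      type: string
  steps:
    - name: build                      # each step runs in its own container
      image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
      script: |
        echo "Building $(params.image-name)"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-publish              # placeholder Pipeline name
spec:
  params:
    - name: image-name
      type: string
  tasks:
    - name: build                      # the Pipeline is composed of Tasks
      taskRef:
        name: build-image
      params:
        - name: image-name
          value: $(params.image-name)

A PipelineRun then executes this Pipeline, creating one TaskRun per Task.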

Why the Other Options Are Incorrect:

A. TaskRun (Incorrect)

A TaskRun is an execution instance of a Task, but it does not define the Pipeline itself.

C. TPipe (Incorrect)

No Tekton component called TPipe exists.

D. Pipe (Incorrect)

The correct term is Pipeline, not 'Pipe'; OpenShift Pipelines does not use this term.

Final Answer:

B. Task

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

OpenShift Pipelines (Tekton) Documentation

Tekton Documentation -- Understanding Tasks

IBM Cloud Pak for Integration -- CI/CD with OpenShift Pipelines


Question 3

Which statement is true regarding tracing in Cloud Pak for Integration?



Answer : D

In IBM Cloud Pak for Integration (CP4I), distributed tracing allows administrators to monitor the flow of requests across multiple services. This feature helps in diagnosing performance issues and debugging integration flows.

Tracing must be enabled during the initial deployment of an integration capability instance.

Once deployed, tracing settings cannot be changed dynamically without redeploying the instance.

This ensures that tracing configurations are properly set up and integrated with observability tools like OpenTelemetry, Jaeger, or Zipkin.

Analysis of the Options:

A. If tracing has not been enabled, the administrator can turn it on without the need to redeploy the integration capability. (Incorrect)

Tracing cannot be enabled after deployment. It must be configured during the initial deployment process.

B. Distributed tracing data is enabled by default when a new capability is instantiated through the Platform Navigator. (Incorrect)

Tracing is not enabled by default. The administrator must manually enable it during deployment.

C. The administrator can schedule tracing to run intermittently for each specified integration capability. (Incorrect)

There is no scheduling option for tracing in CP4I. Once enabled, tracing runs continuously based on the chosen settings.

D. Tracing for an integration capability instance can be enabled only when deploying the instance. (Correct)

This is the correct answer. Tracing settings are defined at deployment and cannot be modified afterward without redeploying the instance.
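For illustration, this setting appears in the capability's custom resource at creation time. For example, an MQ QueueManager CR carries a tracing section that is set when the instance is deployed; the field names and values below are assumptions based on the MQ operator's Operations Dashboard integration and should be verified against the operator documentation:

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: quickstart-qm                  # placeholder instance name
spec:
  license:
    accept: true
    license: <license-id>              # license identifier from the IBM documentation
    use: NonProduction
  version: 9.2.3.0-r1                  # example version for CP4I 2021.2
  tracing:                             # assumed field names for Operations Dashboard tracing
    enabled: true                      # set at deployment time
    namespace: cp4i-tracing            # assumed namespace of the tracing instance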

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

IBM Cloud Pak for Integration - Tracing and Monitoring

Enabling Distributed Tracing in IBM CP4I

IBM OpenTelemetry and Jaeger Tracing Integration


Question 4

An administrator is checking that all components and software in their estate are licensed. They have only purchased Cloud Pak for Integration (CP4I) licenses.

How are the OpenShift master nodes licensed?



Answer : B

In IBM Cloud Pak for Integration (CP4I) v2021.2, licensing is based on Virtual Processor Cores (VPCs), and it includes entitlement for OpenShift usage. However, OpenShift master nodes (control plane nodes) do not consume license entitlement, because:

OpenShift licensing only applies to worker nodes.

The master nodes (control plane nodes) manage cluster operations and scheduling, but they do not run user workloads.

IBM's Cloud Pak licensing model considers only the worker nodes for licensing purposes.

Master nodes are essential infrastructure and are excluded from entitlement calculations.

IBM and Red Hat do not charge for OpenShift master nodes in Cloud Pak deployments.

Explanation of Incorrect Answers:

A. CP4I licenses include entitlement for the entire OpenShift cluster that they run on, and the administrator can count against the master nodes. (Incorrect)

CP4I licenses do cover OpenShift, but only for worker nodes where workloads are deployed.

Master nodes are excluded from licensing calculations.

C. The administrator will need to purchase additional OpenShift licenses to cover the master nodes. (Incorrect)

No additional OpenShift licenses are required for master nodes.

OpenShift licensing only applies to worker nodes that run applications.

D. CP4I licenses include entitlement for 3 cores of OpenShift per core of CP4I. (Incorrect)

The standard IBM Cloud Pak licensing model provides 1 VPC of OpenShift for 1 VPC of CP4I, not a 3:1 ratio.

Additionally, this applies only to worker nodes, not master nodes.

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

IBM Cloud Pak Licensing Guide

IBM Cloud Pak for Integration Licensing Details

Red Hat OpenShift Licensing Guide


Question 5

The OpenShift Logging Operator monitors a particular Custom Resource (CR). What is the name of the Custom Resource used by the OpenShift Logging Operator?



Answer : A

In IBM Cloud Pak for Integration (CP4I) v2021.2, which runs on Red Hat OpenShift, logging is managed through the OpenShift Logging Operator. This operator is responsible for collecting, storing, and forwarding logs within the cluster.

The OpenShift Logging Operator monitors a specific Custom Resource (CR) named ClusterLogging, which defines the logging stack configuration.

How the ClusterLogging Custom Resource Works:

The ClusterLogging CR is used to configure and manage the cluster-wide logging stack, including components like:

Fluentd (Log collection and forwarding)

Elasticsearch (Log storage and indexing)

Kibana (Log visualization)

Administrators define log collection, storage, and forwarding settings using this CR.

Example of a ClusterLogging CR Definition:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 7d
  collection:
    type: fluentd

This configuration sets up an Elasticsearch-based log store with Fluentd as the log collector.
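Once this definition is applied (for example with oc apply -f), the operator reconciles it; the resulting configuration and logging pods can be checked with standard commands:

oc get clusterlogging instance -n openshift-logging -o yaml
oc get pods -n openshift-logging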

Why Answer A (ClusterLogging) Is Correct:

The OpenShift Logging Operator monitors the ClusterLogging CR to manage logging settings.

It defines how logs are collected, stored, and forwarded across the cluster.

IBM Cloud Pak for Integration uses this CR when integrating OpenShift's logging system.

Explanation of Incorrect Answers:

B. DefaultLogging (Incorrect)

There is no such resource named DefaultLogging in OpenShift.

The correct resource is ClusterLogging.

C. ElasticsearchLog (Incorrect)

Elasticsearch is the default log store, but it is managed within ClusterLogging, not as a separate CR.

D. LoggingResource (Incorrect)

This is not an actual OpenShift CR related to logging.

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

OpenShift Logging Overview

Configuring OpenShift Cluster Logging

IBM Cloud Pak for Integration - Logging and Monitoring


Question 6

Which component requires ReadWriteMany (RWX) storage in a Cloud Pak for Integration deployment?



Answer : B

In an IBM Cloud Pak for Integration (CP4I) v2021.2 deployment, certain components require ReadWriteMany (RWX) storage to allow multiple pods to read and write data concurrently.

Why Option B (CouchDB for Asset Repository) is Correct:

CouchDB is used as the Asset Repository in CP4I to store configuration and metadata for IBM Automation Assets.

It requires persistent storage that can be accessed by multiple instances simultaneously.

RWX storage is necessary because multiple pods may need concurrent access to the same database storage in a distributed deployment.

Common RWX storage options in OpenShift include NFS, Portworx, or CephFS.
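As a simple illustration, RWX storage is requested through a PersistentVolumeClaim whose accessModes list contains ReadWriteMany; the claim name, namespace, size, and storage class below are placeholders that depend on the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: assets-couchdb-data            # placeholder claim name
  namespace: cp4i                      # placeholder namespace
spec:
  accessModes:
    - ReadWriteMany                    # RWX: several pods may mount the volume read/write
  resources:
    requests:
      storage: 10Gi                    # placeholder size
  storageClassName: ocs-storagecluster-cephfs   # placeholder RWX-capable storage class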

Explanation of Incorrect Answers:

A. MQ multi-instance (Incorrect)

IBM MQ multi-instance queue managers require ReadWriteOnce (RWO) storage because only one active instance at a time can write to the storage.

MQ HA deployments typically use Replicated Data Queue Manager (RDQM) or Persistent Volumes with RWO access mode.

C. API Connect (Incorrect)

API Connect stores most of its configuration in internal databases such as PostgreSQL, but it does not specifically require RWX storage for its primary operation.

It uses RWO or ReadOnlyMany (ROX) storage for its internal components.

D. Event Streams (Incorrect)

Event Streams (based on Apache Kafka) uses RWO storage for high-performance message persistence.

Each Kafka broker typically writes to its own dedicated storage, meaning RWX is not required.

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

IBM Cloud Pak for Integration Storage Requirements

CouchDB Asset Repository in CP4I

IBM MQ Multi-Instance Setup

OpenShift RWX Storage Options


Question 7

Which two authentication types are supported for single sign-on in Foundational Services?



Answer : B, D

In IBM Cloud Pak for Integration (CP4I) v2021.2, Foundational Services provide authentication and access control mechanisms, including Single Sign-On (SSO) integration. The two supported authentication types for SSO are:

OpenShift Authentication

IBM Cloud Pak for Integration leverages OpenShift authentication to integrate with existing identity providers.

OpenShift authentication supports OAuth-based authentication, allowing users to sign in through an OpenShift identity provider such as LDAP, OpenID Connect (OIDC), or htpasswd.

This method enables seamless user access without requiring additional login credentials.
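As a sketch of the OpenShift side, identity providers are configured on the cluster-wide OAuth resource; the example below adds an LDAP provider, with the host, bind DN, and Secret name as placeholders:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: ldapidp                    # placeholder provider name
      mappingMethod: claim
      type: LDAP
      ldap:
        attributes:
          id: ["dn"]
          email: ["mail"]
          name: ["cn"]
          preferredUsername: ["uid"]
        bindDN: "cn=admin,dc=example,dc=com"     # placeholder bind DN
        bindPassword:
          name: ldap-bind-secret                 # Secret holding the bind password
        insecure: false
        url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"   # placeholder LDAP URL

Cloud Pak foundational services can then rely on this OpenShift authentication for single sign-on.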

Enterprise SAML (Security Assertion Markup Language)

SAML authentication allows integration with enterprise identity providers (IdPs) such as IBM Security Verify, Okta, Microsoft Active Directory Federation Services (ADFS), and other SAML 2.0-compatible IdPs.

It provides federated identity management for SSO across enterprise applications, ensuring secure access to Cloud Pak services.

Why the other options are incorrect:

A. Basic Authentication -- Incorrect

Basic authentication (username and password) is not used for Single Sign-On (SSO). SSO mechanisms require identity federation through OpenID Connect (OIDC) or SAML.

C. PublicKey -- Incorrect

PublicKey authentication (such as SSH key-based authentication) is used for system-level access, not for SSO in Foundational Services.

E. Local User Registry -- Incorrect

While local user registries can store credentials, they do not provide SSO capabilities. SSO requires federated identity providers like OpenShift authentication or SAML-based IdPs.

IBM Cloud Pak for Integration (CP4I) v2021.2 Administration Reference:

IBM Cloud Pak Foundational Services Authentication Guide

OpenShift Authentication and Identity Providers

IBM Cloud Pak for Integration SSO Configuration

