iSQI CT-AI Certified Tester AI Testing Exam Practice Test

Page: 1 / 14
Total 80 questions
Question 1

You have access to the training data that was used to train an AI-based system. You can review this information and use it as a guideline when creating your tests. What type of characteristic is this?
A. Autonomy

B. Explorability

C. Transparency

D. Accessibility



Answer : C

AI-based systems can sometimes behave like black boxes, where the internal decision-making process is unclear. Transparency refers to the ability to inspect and understand the training data, algorithms, and decision-making process of the AI system.

Why Option C is Correct?

Transparency ensures that testers and stakeholders can review how an AI system was trained.

Access to training data is a key factor in transparency because it allows testers to analyze biases, completeness, and representativeness of the dataset.

Transparency is an essential characteristic of explainable AI (XAI).

Having access to training data means that testers can investigate how data influences AI behavior.

Regulatory and ethical AI guidelines emphasize transparency.

Many AI ethics frameworks, such as GDPR and Trustworthy AI guidelines, recommend transparency to ensure fair and explainable AI decision-making.
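As an illustration, having access to the training data lets a tester run simple transparency checks, such as inspecting the class distribution for representativeness. The sketch below uses a hypothetical label list and an illustrative `class_distribution` helper; neither is part of any syllabus tooling.

```python
from collections import Counter

# Hypothetical label list standing in for accessible training data.
# Transparency means testers can run checks like this at all.
training_labels = ["approve"] * 900 + ["reject"] * 100

def class_distribution(labels):
    """Return the share of each class in the training data."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

dist = class_distribution(training_labels)
print(dist)  # {'approve': 0.9, 'reject': 0.1} -> possible sample bias
```

A 90/10 split like this would prompt the tester to investigate whether the minority class is adequately represented.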

Why Other Options are Incorrect?

(A) Autonomy

Autonomy refers to an AI system's ability to make decisions independently without human intervention. However, having access to training data does not relate to autonomy, which is more about self-learning and decision-making without human control.

(B) Explorability

Explorability refers to the ability to test AI systems interactively to understand their behavior, but it does not directly relate to accessing training data.

(D) Accessibility

Accessibility refers to the ease with which people can use the system, not the ability to inspect the training data.

Reference from ISTQB Certified Tester AI Testing Study Guide

Transparency is the ease with which the training data and algorithm used to generate a model can be understood. 'Transparency: This is considered to be the ease with which the algorithm and training data used to generate the model can be determined.'

Thus, option C is the correct answer, as transparency involves access to training data, allowing testers to understand AI decision-making processes.


Question 2

A transportation company operates three types of delivery vehicles in its fleet. The vehicles operate at different speeds (slow, medium, and fast). The transportation company is attempting to optimize scheduling and has created an AI-based program to plan routes for its vehicles using records from the medium-speed vehicle traveling to selected destinations. The test team uses this data in metamorphic testing to test the accuracy of the estimated travel times created by the AI route planner against the actual routes and times.

Which of the following describes the next phase of metamorphic testing?
A. Extrapolate the expected travel times for the fast and slow vehicles from the medium-speed vehicle data

B. Decompose each route into traffic density and vehicle power

C. Select dissimilar routes and transform them into a fast or slow route

D. Run fast vehicles on long routes and slow vehicles on short routes



Answer : A

Metamorphic Testing (MT) is a testing technique that verifies AI-based systems by generating follow-up test cases based on existing test cases. These follow-up test cases adhere to a Metamorphic Relation (MR), ensuring that if the system is functioning correctly, changes in input should result in predictable changes in output.

Why Option A is Correct?

Metamorphic testing works by transforming source test cases into follow-up test cases

Here, the source test case involves testing the medium-speed vehicle's travel time.

The follow-up test cases are derived by extrapolating travel times for fast and slow vehicles using predictable relationships based on speed differences.

MR states that modifying input should result in a predictable change in output

Since the speed of the vehicle is a known factor, it is possible to predict the new arrival times and verify whether they follow expected trends.

This is a direct application of metamorphic testing principles

In route optimization systems, metamorphic testing often applies transformations to speed, distance, or conditions to verify expected outcomes.
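The metamorphic relation described above can be sketched in a few lines. The `predicted_time` function below is a hypothetical stand-in for the AI route planner's estimate; the check asserts that scaling the vehicle speed scales the predicted travel time by the inverse factor, which is the MR linking the source test case (medium speed) to the follow-up cases (fast and slow).

```python
def predicted_time(distance_km, speed_kmh):
    # Hypothetical stand-in for the AI route planner's estimate.
    return distance_km / speed_kmh

def metamorphic_follow_up(distance_km, source_speed, follow_up_speed, tol=1e-6):
    """MR: expected follow-up time = source_time * (source_speed / follow_up_speed)."""
    source_time = predicted_time(distance_km, source_speed)
    follow_up_time = predicted_time(distance_km, follow_up_speed)
    expected = source_time * (source_speed / follow_up_speed)
    return abs(follow_up_time - expected) < tol

# Source test case: the medium-speed vehicle; follow-ups: fast and slow.
assert metamorphic_follow_up(120, source_speed=60, follow_up_speed=90)  # fast
assert metamorphic_follow_up(120, source_speed=60, follow_up_speed=30)  # slow
```

No oracle for the "true" travel time is needed; the follow-up outputs only have to stay consistent with the relation.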

Why Other Options are Incorrect?

(B) Decomposing each route into traffic density and vehicle power

While useful for statistical analysis, this approach does not generate follow-up test cases based on a defined metamorphic relation (MR).

(C) Selecting dissimilar routes and transforming them into a fast or slow route

This does not follow metamorphic testing principles, which require predictable transformations.

(D) Running fast vehicles on long routes and slow vehicles on short routes

This method does not maintain a controlled MR and introduces too many uncontrolled variables.

Reference from ISTQB Certified Tester AI Testing Study Guide

Metamorphic testing generates follow-up test cases based on a source test case. 'MT is a technique aimed at generating test cases which are based on a source test case that has passed. One or more follow-up test cases are generated by changing (metamorphizing) the source test case based on a metamorphic relation (MR).'

MT has been used for testing route optimization AI systems. 'In the area of AI, MT has been used for testing image recognition, search engines, route optimization and voice recognition, among others.'

Thus, option A is the correct answer, as it aligns with the principles of metamorphic testing by modifying input speeds and verifying expected results.


Question 3

A mobile app start-up company is implementing an AI-based chat assistant for e-commerce customers. In the process of planning the testing, the team realizes that the specifications are insufficient.

Which testing approach should be used to test this system?
A. Exploratory testing

B. Static analysis

C. Equivalence partitioning

D. State transition testing



Answer : A

When testing an AI-based chat assistant for e-commerce customers, the lack of sufficient specifications makes it difficult to use structured test techniques. The ISTQB CT-AI Syllabus recommends exploratory testing in such cases:

Why Exploratory Testing?

Exploratory testing is useful when specifications are incomplete or unclear.

AI-based systems, particularly those using natural language processing (NLP), may not behave deterministically, making scripted test cases ineffective.

The tester interacts dynamically with the system, identifying unexpected behaviors not documented in the specification.

Analysis of Answer Choices:

A (Exploratory testing): Correct, as it is the best approach when specifications are incomplete.

B (Static analysis): Incorrect, as static analysis checks code without execution, which is not helpful for AI chatbots.

C (Equivalence partitioning): Incorrect, as this technique requires well-defined inputs and outputs, which are missing due to insufficient specifications.

D (State transition testing): Incorrect, as state-based testing requires knowledge of valid and invalid transitions, which is difficult with a chatbot lacking a clear specification.

Thus, Option A is the correct answer, as exploratory testing is the best approach when dealing with insufficient specifications in AI-based systems.

Certified Tester AI Testing Study Guide Reference:

ISTQB CT-AI Syllabus v1.0, Section 7.7 (Selecting a Test Approach for an ML System)

ISTQB CT-AI Syllabus v1.0, Section 9.6 (Experience-Based Testing of AI-Based Systems).


Question 4

Which of the following is correct regarding the layers of a deep neural network?
A. The network consists of only input and output layers

B. The network contains an input layer, an output layer, and at least one internal hidden layer

C. The network requires a minimum of five layers

D. The output layer is not connected to the other layers



Answer : B

A deep neural network (DNN) is a type of artificial neural network that consists of multiple layers between the input and output layers. The ISTQB Certified Tester AI Testing (CT-AI) Syllabus outlines the following characteristics of a DNN:

Structure of a Deep Neural Network:

A DNN comprises at least three types of layers:

Input layer: Receives the input data.

Hidden layers: Perform complex feature extraction and transformations.

Output layer: Produces the final prediction or classification.
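A minimal sketch of this layer structure, assuming NumPy and arbitrary layer sizes (4 inputs, one hidden layer of 8 units, 3 outputs); the single hidden layer is exactly what distinguishes this network from a plain input-to-output mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 4 input features -> one hidden layer of 8 units -> 3 outputs.
# At least one hidden layer is what makes the network "deep".
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden -> output

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)          # ReLU activation
    return hidden @ W2 + b2                      # output layer

x = rng.normal(size=4)
print(forward(x).shape)  # (3,)
```

Removing the hidden layer would leave only a linear input-to-output mapping, which no longer qualifies as a deep network.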

Analysis of Answer Choices:

A (Only input and output layers) Incorrect, as a DNN must have at least one hidden layer.

B (At least one internal hidden layer) Correct, as a neural network must have hidden layers to be considered deep.

C (Minimum of five layers required) Incorrect, as there is no strict definition that requires at least five layers.

D (Output layer is not connected to other layers) Incorrect, as the output layer must be connected to the hidden layers.

Thus, Option B is the correct answer, as a deep neural network must have at least one hidden layer.

Certified Tester AI Testing Study Guide Reference:

ISTQB CT-AI Syllabus v1.0, Section 6.1 (Neural Networks and Deep Neural Networks)

ISTQB CT-AI Syllabus v1.0, Section 6.2 (Structure of Deep Neural Networks).


Question 5

When verifying that an autonomous AI-based system is acting appropriately, which of the following are MOST important to include?
A. Test cases to verify that the system automatically confirms the correct classification of training data

B. Test cases to detect the system appropriately automating its data input

C. Test cases to verify that the system does not unnecessarily request human intervention

D. Test cases to verify that the system automatically suppresses invalid output data



Answer : C

When verifying autonomous AI-based systems, a critical aspect is ensuring that they maintain an appropriate level of autonomy while only requesting human intervention when necessary. If an AI system unnecessarily asks for human input, it defeats the purpose of autonomy and can:

Slow down operations.

Reduce trust in the system.

Indicate improper confidence thresholds in decision-making.

This is particularly crucial in autonomous vehicles, AI-driven financial trading, and robotic process automation, where excessive human intervention would hinder performance.

Why are the other options incorrect?

A. Test cases to verify that the system automatically confirms the correct classification of training data: This is relevant for verifying training consistency but not for autonomy validation.

B. Test cases to detect the system appropriately automating its data input: While relevant, data automation does not directly address the verification of autonomy.

D. Test cases to verify that the system automatically suppresses invalid output data: This focuses on output filtering rather than decision-making autonomy.

Thus, the most critical test case for verifying autonomous AI-based systems is ensuring that it does not unnecessarily request human intervention.

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 8.2 - Testing Autonomous AI-Based Systems states that it is crucial to test whether the system requests human intervention only when necessary and does not disrupt autonomy.


Question 6

Consider an AI system in which the complex internal structure has been generated by another software system. Why would the tester choose to do black-box testing on this particular system?
A. It allows the tests to be automated

B. It helps the tester understand the logic of the software

C. It allows the tester to check the transparency of the algorithm

D. It removes the need for the tester to analyze the internal structure of the system



Answer : D

In AI-based systems, particularly those where the internal structure has been generated by another software system, the complexity often makes it difficult for human testers to analyze the inner workings. As per the ISTQB Certified Tester AI Testing (CT-AI) Syllabus:

Black-box testing is particularly useful when dealing with AI systems that have been generated by another system because:

It allows testing without requiring knowledge of the internal logic.

The AI model may be too complex for human testers to comprehend, making white-box testing ineffective.

Black-box testing evaluates the inputs and outputs, ensuring functional correctness without needing insight into how the system reaches a decision.
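A black-box test exercises only input/output pairs, as in this minimal sketch; `classify` is a hypothetical stand-in for the system whose internals were generated by another program, and the tester never inspects them:

```python
# Black-box test sketch: only inputs and expected outputs are exercised;
# the model's internal structure is treated as opaque.
def classify(text):
    # Hypothetical generated AI component under test.
    return "spam" if "winner" in text.lower() else "ham"

test_cases = [
    ("Congratulations, WINNER! Claim now", "spam"),
    ("Meeting moved to 3pm", "ham"),
]

for given_input, expected in test_cases:
    actual = classify(given_input)
    assert actual == expected, f"{given_input!r}: {actual} != {expected}"
print("all black-box checks passed")
```

The same test cases remain valid even if the generated internals are completely rebuilt, which is the point of abstracting away the system's complexity.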

Why other options are incorrect?

A (Test automation and black-box testing): While automation is possible, black-box testing is not primarily about automation but about abstracting the internal complexity.

B (Understanding the logic of the software): This contradicts the premise of black-box testing, which is designed to test functionality without needing to understand the inner workings.

C (Checking transparency of the algorithm): Black-box testing does not check algorithm transparency; that would require white-box testing or explainability techniques.

Thus, the best choice is Option D, as black-box testing removes the need to analyze the internal structure of AI systems, making it the most appropriate testing method in this case.

Certified Tester AI Testing Study Guide Reference:

ISTQB CT-AI Syllabus v1.0, Section 8.5 (Challenges Testing Complex AI-Based Systems)

ISTQB CT-AI Syllabus v1.0, Section 8.6 (Testing the Transparency, Interpretability, and Explainability of AI-Based Systems)


Question 7

An engine manufacturing facility wants to apply machine learning to detect faulty bolts. Which of the following would result in bias in the model?
A. Selecting training data by purposely excluding specific faulty conditions

B. Selecting training data by purposely including all known faulty conditions

C. Selecting testing data from a different dataset than the training dataset

D. Selecting testing data from a boat manufacturer's bolt longevity data



Answer : A

Bias in AI models often originates from incomplete or non-representative training data. In this case, if the training dataset purposely excludes specific faulty conditions, the machine learning model will fail to learn and detect these conditions in real-world scenarios.

This results in:

Sample bias, where the training data is not fully representative of all possible faulty conditions.

Algorithmic bias, where the model prioritizes certain defect types while ignoring others.
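Sample bias of this kind is easy to surface once the training sample can be compared with the full range of field conditions. The inspection records below are hypothetical, as is the `coverage_gap` helper:

```python
# Hypothetical inspection records: each bolt is labelled with its condition.
all_conditions = (["ok"] * 800 + ["cracked"] * 100
                  + ["stripped_thread"] * 100)

# Biased sampling: "stripped_thread" faults are purposely excluded.
biased_training = [c for c in all_conditions if c != "stripped_thread"]

def coverage_gap(field_data, training_sample):
    """Return fault conditions seen in the field but absent from training."""
    return set(field_data) - set(training_sample)

print(coverage_gap(all_conditions, biased_training))  # {'stripped_thread'}
```

A model trained on `biased_training` has never seen a stripped-thread bolt, so it cannot learn to flag one in production.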

Why are the other options incorrect?

B. Selecting training data by purposely including all known faulty conditions: This would help reduce bias by improving model generalization.

C. Selecting testing data from a different dataset than the training dataset: This is a good practice to evaluate model generalization but does not inherently introduce bias.

D. Selecting testing data from a boat manufacturer's bolt longevity data: While using unrelated data can create poor model accuracy, it does not directly introduce bias unless systematic patterns in the incorrect dataset lead to unfair decision-making.

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 8.3 - Testing for Algorithmic, Sample, and Inappropriate Bias states that sample bias can occur if the training dataset is not fully representative of the expected data space, leading to biased predictions.

