iSQI Certified Tester AI Testing CT-AI Exam Practice Test

Page: 1 / 14
Total 80 questions
Question 1

In a certain coffee-producing region of Colombia, severe weather storms have caused massive losses in production, which in turn caused a sharp drop in the price of coffee stock.

Which ONE of the following types of testing SHOULD be performed on a machine learning model for stock-price prediction to detect the influence of phenomena such as the one above on the price of coffee stock?

SELECT ONE OPTION



Answer : C

Type of Testing for Stock-Price Prediction Models: Concept drift refers to the change in the statistical properties of the target variable over time. Severe weather storms causing massive losses in coffee production and affecting stock prices would require testing for concept drift to ensure that the model adapts to new patterns in data over time.

Reference: ISTQB_CT-AI_Syllabus_v1.0, Section 7.6 Testing for Concept Drift, which explains the need to test for concept drift in models that might be affected by changing external factors.
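The drift check described in the explanation above can be sketched as a simple monitoring routine: compare the model's error on a recent window of data against its error on the original validation window. This is an illustrative sketch, not the syllabus's prescribed method; the 1.5x threshold and the toy price figures are assumptions.

```python
# Minimal concept-drift check: flag drift when the model's recent error
# grows well beyond its baseline error. Threshold factor is an assumption.

def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def drift_detected(baseline_error, recent_preds, recent_actuals, factor=1.5):
    """Flag concept drift when the recent error exceeds factor x baseline."""
    recent_error = mean_abs_error(recent_preds, recent_actuals)
    return recent_error > factor * baseline_error

# Toy stock-price predictions before and after the storms:
baseline = mean_abs_error([10.1, 10.3], [10.0, 10.4])      # small pre-storm error
print(drift_detected(baseline, [10.2, 10.1], [7.5, 7.2]))  # True: prices collapsed
```

In practice such a check would run continuously in production, since concept drift emerges over time rather than at a single known moment.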


Question 2

Before deployment of an AI based system, a developer is expected to demonstrate in a test environment how decisions are made. Which of the following characteristics does decision making fall under?



Answer : A

Explainability in AI-based systems refers to the ease with which users can determine how the system reaches a particular result. It is a crucial aspect when demonstrating AI decision-making, as it ensures that decisions made by AI models are transparent, interpretable, and understandable by stakeholders.

Before deploying an AI-based system, a developer must validate how decisions are made in a test environment. This process falls under the characteristic of explainability because it involves clarifying how an AI model arrives at its conclusions, which helps build trust in the system and meet regulatory and ethical requirements.

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 2.7: Transparency, Interpretability, and Explainability)

'Explainability is considered to be the ease with which users can determine how the AI-based system comes up with a particular result'.

'Most users are presented with AI-based systems as 'black boxes' and have little awareness of how these systems arrive at their results. This ignorance may even apply to the data scientists who built the systems. Occasionally, users may not even be aware they are interacting with an AI-based system'.

ISTQB CT-AI Syllabus (Section 8.6: Testing the Transparency, Interpretability, and Explainability of AI-based Systems)

'Testing the explainability of AI-based systems involves verifying whether users can understand and validate AI-generated decisions. This ensures that AI systems remain accountable and do not make incomprehensible or biased decisions'.

Contrast with Other Options:

Autonomy (B): Autonomy relates to an AI system's ability to operate independently without human oversight. While decision-making is a key function of autonomy, the focus here is on demonstrating the reasoning behind decisions, which falls under explainability rather than autonomy.

Self-learning (C): Self-learning systems adapt based on previous data and experiences, which is different from making decisions understandable to humans.

Non-determinism (D): AI-based systems are often probabilistic and non-deterministic, meaning they do not always produce the same output for the same input. This can make testing and validation more challenging, but it does not relate to explaining the decision-making process.

Conclusion: Since the question explicitly asks about the characteristic under which decision-making falls when being demonstrated before deployment, explainability is the correct choice because it ensures that AI decisions are transparent, understandable, and accountable to stakeholders.
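One simple way a developer could demonstrate decision-making in a test environment, in the spirit of the explainability discussion above, is to perturb each input feature and observe how much the prediction shifts. This is only a sketch; the linear "model", the feature names, and the zero-out perturbation are illustrative assumptions, not from the syllabus.

```python
# Hedged sketch of a basic explanation technique: rank input features by
# how much zeroing each one changes the model's output.

def model(features):
    # Toy stand-in for a trained model: a weighted sum of inputs.
    weights = {"income": 0.7, "debt": -0.5, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Return feature names ordered by their influence on this prediction."""
    base = model(features)
    impact = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        impact[name] = abs(base - model(perturbed))
    return sorted(impact, key=impact.get, reverse=True)

print(explain({"income": 50.0, "debt": 20.0, "age": 30.0}))
# → ['income', 'debt', 'age']  (most influential feature first)
```

Presenting such a ranking to stakeholders is one concrete way to show how the system "comes up with a particular result".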


Question 3

Which of the following is correct regarding the layers of a deep neural network?



Answer : B

A deep neural network (DNN) is a type of artificial neural network that consists of multiple layers between the input and output layers. The ISTQB Certified Tester AI Testing (CT-AI) Syllabus outlines the following characteristics of a DNN:

Structure of a Deep Neural Network:

A DNN comprises at least three types of layers:

Input layer: Receives the input data.

Hidden layers: Perform complex feature extraction and transformations.

Output layer: Produces the final prediction or classification.

Analysis of Answer Choices:

A (Only input and output layers): Incorrect, as a DNN must have at least one hidden layer.

B (At least one internal hidden layer): Correct, as a neural network must have hidden layers to be considered deep.

C (Minimum of five layers required): Incorrect, as there is no strict definition that requires at least five layers.

D (Output layer is not connected to other layers): Incorrect, as the output layer must be connected to the hidden layers.

Thus, Option B is the correct answer, as a deep neural network must have at least one hidden layer.
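The layer structure described above can be sketched as a forward pass through an input layer, one hidden layer (the minimum for a network to count as "deep" under option B), and an output layer. The weights below are illustrative, not trained.

```python
# Minimal forward pass: input layer -> hidden layer (ReLU) -> output layer.

def dense(inputs, weights, biases):
    """Fully connected layer: output_j = sum_i(input_i * w[i][j]) + b[j]."""
    return [sum(x * w for x, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

def relu(xs):
    return [max(0.0, x) for x in xs]

x = [1.0, 2.0]                                                   # input layer
hidden = relu(dense(x, [[0.5, -1.0], [0.25, 1.0]], [0.0, 0.0]))  # hidden layer
output = dense(hidden, [[1.0], [1.0]], [0.1])                    # output layer
print(output)
```

Removing the hidden layer would leave only an input and an output layer, which (per option A's analysis) would no longer be a deep network.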

Certified Tester AI Testing Study Guide Reference:

ISTQB CT-AI Syllabus v1.0, Section 6.1 (Neural Networks and Deep Neural Networks)

ISTQB CT-AI Syllabus v1.0, Section 6.2 (Structure of Deep Neural Networks).


Question 4

A transportation company operates three types of delivery vehicles in its fleet. The vehicles operate at different speeds (slow, medium, and fast). The transportation company is attempting to optimize scheduling and has created an AI-based program to plan routes for its vehicles using records from the medium-speed vehicle traveling to selected destinations. The test team uses this data in metamorphic testing to check the accuracy of the AI route planner's estimated travel times against the actual routes and times.

Which of the following describes the next phase of metamorphic testing?



Answer : A

Metamorphic Testing (MT) is a testing technique that verifies AI-based systems by generating follow-up test cases based on existing test cases. These follow-up test cases adhere to a Metamorphic Relation (MR), ensuring that if the system is functioning correctly, changes in input should result in predictable changes in output.

Why Is Option A Correct?

Metamorphic testing works by transforming source test cases into follow-up test cases:

Here, the source test case involves testing the medium-speed vehicle's travel time.

The follow-up test cases are derived by extrapolating travel times for fast and slow vehicles using predictable relationships based on speed differences.

An MR states that modifying the input should result in a predictable change in the output:

Since the speed of the vehicle is a known factor, it is possible to predict the new arrival times and verify whether they follow expected trends.

This is a direct application of metamorphic testing principles:

In route optimization systems, metamorphic testing often applies transformations to speed, distance, or conditions to verify expected outcomes.
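The metamorphic relation described here can be sketched directly: scaling the vehicle's speed by a factor k should scale the predicted travel time by 1/k. The function `predict_travel_time` below is a hypothetical stand-in for the AI route planner, not its real API.

```python
# Sketch of a speed-based metamorphic relation for a route planner.
# Assumption: travel time scales inversely with speed, within a tolerance.

def predict_travel_time(distance_km, speed_kmh):
    return distance_km / speed_kmh  # idealized planner for illustration

def check_speed_mr(distance_km, medium_speed, factor, tolerance=0.05):
    """Follow-up test: scale speed by `factor`, expect time scaled by 1/factor."""
    source_time = predict_travel_time(distance_km, medium_speed)
    followup_time = predict_travel_time(distance_km, medium_speed * factor)
    expected = source_time / factor
    return abs(followup_time - expected) <= tolerance * expected

print(check_speed_mr(120, 60, 2.0))  # fast vehicle follow-up: True if MR holds
print(check_speed_mr(120, 60, 0.5))  # slow vehicle follow-up: True if MR holds
```

A real planner would account for traffic and road conditions, so the tolerance absorbs deviations from the idealized relation.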

Why Are the Other Options Incorrect?

(B) Decomposing each route into traffic density and vehicle power

While useful for statistical analysis, this approach does not generate follow-up test cases based on a defined metamorphic relation (MR).

(C) Selecting dissimilar routes and transforming them into a fast or slow route

This does not follow metamorphic testing principles, which require predictable transformations.

(D) Running fast vehicles on long routes and slow vehicles on short routes

This method does not maintain a controlled MR and introduces too many uncontrolled variables.

Reference from ISTQB Certified Tester AI Testing Study Guide

Metamorphic testing generates follow-up test cases based on a source test case. 'MT is a technique aimed at generating test cases which are based on a source test case that has passed. One or more follow-up test cases are generated by changing (metamorphizing) the source test case based on a metamorphic relation (MR).'

MT has been used for testing route optimization AI systems. 'In the area of AI, MT has been used for testing image recognition, search engines, route optimization and voice recognition, among others.'

Thus, option A is the correct answer, as it aligns with the principles of metamorphic testing by modifying input speeds and verifying expected results.


Question 5

A wildlife conservation group would like to use a neural network to classify images of different animals. The algorithm is going to be used on a social media platform to automatically pick out pictures of the chosen animal of the month. This month's animal is set to be a wolf. The test team has already observed that the algorithm could classify a picture of a dog as being a wolf because of the similar characteristics between dogs and wolves. To handle such instances, the team is planning to train the model with additional images of wolves and dogs so that the model is able to better differentiate between the two.

What test method should you use to verify that the model has improved after the additional training?



Answer : D

Back-to-back testing is used to compare two different versions of an ML model, which is precisely what is needed in this scenario.

The model initially misclassified dogs as wolves due to feature similarities.

The test team retrains the model with additional images of dogs and wolves.

The best way to verify whether this additional training improved classification accuracy is to compare the original model's output with the newly trained model's output using the same test dataset.
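The comparison described above can be sketched as follows: run both model versions on the same labeled test set, then compare their accuracies and inspect the inputs where they disagree. Both "models" below are illustrative stand-ins, not real classifiers.

```python
# Minimal back-to-back test: same test data, two model versions, compared.

def old_model(image):
    return "wolf"               # stand-in: misclassifies every dog as a wolf

def new_model(image):
    return image["label_hint"]  # stand-in: pretend retraining fixed the confusion

test_set = [({"label_hint": "dog"}, "dog"), ({"label_hint": "wolf"}, "wolf")]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def back_to_back(model_a, model_b, data):
    """Report both accuracies and the inputs where the models disagree."""
    disagreements = [x for x, _ in data if model_a(x) != model_b(x)]
    return accuracy(model_a, data), accuracy(model_b, data), disagreements

print(back_to_back(old_model, new_model, test_set))
```

Inspecting the disagreement set is what reveals whether the retrained model specifically fixed the dog-versus-wolf confusion rather than changing behavior elsewhere.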

Why Other Options Are Incorrect:

A (Metamorphic Testing): Metamorphic testing is useful for generating new test cases based on existing ones but does not directly compare different model versions.

B (Adversarial Testing): Adversarial testing is used to check how robust a model is against maliciously perturbed inputs, not to verify training effectiveness.

C (Pairwise Testing): Pairwise testing is a combinatorial technique for reducing the number of test cases by focusing on key variable interactions, not for validating model improvements.

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 9.3: Back-to-Back Testing)

'Back-to-back testing is used when an updated ML model needs to be compared against a previous version to confirm that it performs better or as expected'.

'The results of the newly trained model are compared with those of the prior version to ensure that changes did not negatively impact performance'.

Conclusion:

To verify that the model's performance improved after retraining, back-to-back testing is the most appropriate method as it compares both model versions. Hence, the correct answer is D.


Question 6

An engine manufacturing facility wants to apply machine learning to detect faulty bolts. Which of the following would result in bias in the model?



Answer : A

Bias in AI models often originates from incomplete or non-representative training data. In this case, if the training dataset purposely excludes specific faulty conditions, the machine learning model will fail to learn and detect these conditions in real-world scenarios.

This results in:

Sample bias, where the training data is not fully representative of all possible faulty conditions.

Algorithmic bias, where the model prioritizes certain defect types while ignoring others.
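The effect of such a biased sample can be sketched with a toy detector that only memorizes the fault types present in its training data: any fault type deliberately excluded from training is invisible to it at inference time. The fault-type names below are assumptions for illustration.

```python
# Illustrative sketch of sample bias: a model trained on a dataset that
# purposely excludes a fault class cannot detect that class later.

def train(labeled_bolts):
    """Learn the set of fault labels present in the training data."""
    return {label for _, label in labeled_bolts if label != "ok"}

def predict(known_faults, bolt_condition):
    return "faulty" if bolt_condition in known_faults else "ok"

# Training data purposely excludes "hairline_crack" bolts (the biased sample):
training = [("b1", "stripped_thread"), ("b2", "ok"), ("b3", "corrosion")]
known = train(training)

print(predict(known, "corrosion"))       # detected
print(predict(known, "hairline_crack"))  # missed, due to the excluded class
```

Real ML models generalize rather than memorize, but the principle is the same: a class systematically absent from training is systematically mishandled in production.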

Why are the other options incorrect?

B. Selecting training data by purposely including all known faulty conditions: This would help reduce bias by improving model generalization.

C. Selecting testing data from a different dataset than the training dataset: This is a good practice for evaluating model generalization and does not inherently introduce bias.

D. Selecting testing data from a boat manufacturer's bolt longevity data: While using unrelated data can hurt model accuracy, it does not directly introduce bias unless systematic patterns in the incorrect dataset lead to unfair decision-making.

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 8.3 - Testing for Algorithmic, Sample, and Inappropriate Bias states that sample bias can occur if the training dataset is not fully representative of the expected data space, leading to biased predictions.


Question 7

Which of the following aspects is a challenge when handling test data for an AI-based system?



Answer : A

Handling test data in AI-based systems presents numerous challenges, particularly in terms of data privacy and confidentiality. AI models often require vast amounts of training data, some of which may contain personal, sensitive, or confidential information. Ensuring compliance with data protection laws (e.g., GDPR, CCPA) and implementing secure data-handling practices is a major challenge in AI testing.

Why is Option A Correct?

Data Privacy Regulations

AI-based systems frequently process personal data, such as images, names, and transaction details, leading to privacy concerns.

Compliance with regulations such as GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act) requires proper anonymization, encryption, or redaction of sensitive data before using it for testing.

Data Security Challenges

AI models may leak confidential information if proper security measures are not in place.

Protecting training and test data from unauthorized access is crucial to maintaining trust and compliance.

Legal and Ethical Considerations

Organizations must obtain legal approval before using certain datasets, especially those containing health records, financial data, or personally identifiable information (PII).

Testers may need to employ synthetic data or data masking techniques to minimize exposure risks.
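Two of the sanitization techniques mentioned above can be sketched briefly: pseudonymization (replacing a direct identifier with a stable, irreversible token) and redaction (masking PII patterns in free text). The email regex is deliberately simplified and the salt value is an assumption for illustration.

```python
import hashlib
import re

def pseudonymize(user_id: str, salt: str = "test-env-salt") -> str:
    """Replace a direct identifier with a stable, irreversible token."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def redact_emails(text: str) -> str:
    # Simplified email pattern; real PII scrubbing needs broader coverage.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", text)

record = "Contact alice@example.com about order 42."
print(redact_emails(record))                           # email masked
print(pseudonymize("alice") == pseudonymize("alice"))  # stable token: True
```

Pseudonymized tokens stay consistent across the test dataset, which preserves referential integrity (the same user maps to the same token) while removing the identifier itself.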

Why Are the Other Options Incorrect?

(B) Output data or intermediate data

While analyzing output data is important, it does not pose a significant challenge compared to handling personal or confidential test data.

(C) Video frame speed or aspect ratio

These are technical challenges in processing AI models but do not fall under data privacy or ethical considerations.

(D) Data frameworks or machine learning frameworks

Choosing an appropriate ML framework (e.g., TensorFlow, PyTorch) is important, but it is not a major challenge related to test data handling.

Reference from ISTQB Certified Tester AI Testing Study Guide

Handling personal or confidential data is a critical challenge in AI testing: 'Personal or otherwise confidential data may need special techniques for sanitization, encryption, or redaction. Legal approval for use may also be required.'

Thus, option A is the correct answer, as data privacy and confidentiality are major challenges when handling test data for AI-based systems.

