Which of the following problems would best be solved using the supervised learning category of regression?
Answer : A
Understanding Supervised Learning - Regression
Supervised learning is a category of machine learning where the model is trained on labeled data. Within this category, regression is used when the goal is to predict a continuous numeric value.
Regression deals with problems where the output variable is continuous in nature, meaning it can take any numerical value within a range.
Common examples include predicting prices, estimating demand, and analyzing production trends.
Analysis of Answer Choices
(A) Determining the optimal age for a chicken's egg-laying production using input data of the chicken's age and average daily egg production for one million chickens. (Correct)
This is a classic regression problem because it involves predicting a continuous variable (daily egg production) from an input variable (the chicken's age).
The goal is to find a numerical relationship between age and egg production, which makes regression the appropriate supervised learning method.
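As a minimal sketch of what such a regression might look like (all numbers are invented for illustration), one could fit a quadratic curve of average daily egg production against hen age and read off the age at which predicted production peaks:

```python
import numpy as np

# Hypothetical data: hen age (years) vs. average eggs per day.
# In the question's scenario this would be aggregated from one
# million chickens; these seven points are made up for the sketch.
ages = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
eggs = np.array([0.30, 0.85, 0.90, 0.80, 0.60, 0.40, 0.25])

# Least-squares fit of eggs ~ a*age^2 + b*age + c (regression:
# the output is a continuous numeric value, not a category).
a, b, c = np.polyfit(ages, eggs, deg=2)

# The vertex of the fitted parabola estimates the optimal age.
optimal_age = -b / (2 * a)
print(f"predicted optimal age: {optimal_age:.2f} years")
```

The key point for the exam question is that the model's output (eggs per day, and the derived optimal age) is continuous, which is exactly what distinguishes regression from classification.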
(B) Recognizing a knife in carry-on luggage at a security checkpoint in an airport scanner. (Incorrect)
This is an image recognition task, which falls under classification, not regression.
Classification problems involve assigning inputs to discrete categories (e.g., 'knife detected' or 'no knife detected').
(C) Determining if an animal is a pig or a cow based on image recognition. (Incorrect)
This is another classification problem where the goal is to categorize an image into one of two labels (pig or cow).
(D) Predicting shopper purchasing behavior based on the category of shopper and the positioning of promotional displays within a store. (Incorrect)
This problem could involve a mix of classification and association rule learning, but it does not explicitly predict a continuous variable in the way regression does.
Reference from ISTQB Certified Tester AI Testing Study Guide
Regression is used when predicting a numeric output. 'Predicting the age of a person based on input data about their habits or predicting the future prices of stocks are examples of problems that use regression.'
Supervised learning problems are divided into classification and regression. 'If the output is numeric and continuous in nature, it may be regression.'
Regression is commonly used for predicting numerical trends over time. 'Regression models result in a numerical or continuous output value for a given input.'
Thus, option A is the correct answer, as it aligns with the principles of regression-based supervised learning.
Before deployment of an AI-based system, a developer is expected to demonstrate in a test environment how decisions are made. Which of the following characteristics does decision making fall under?
Answer : A
Explainability in AI-based systems refers to the ease with which users can determine how the system reaches a particular result. It is a crucial aspect when demonstrating AI decision-making, as it ensures that decisions made by AI models are transparent, interpretable, and understandable by stakeholders.
Before deploying an AI-based system, a developer must validate how decisions are made in a test environment. This process falls under the characteristic of explainability because it involves clarifying how an AI model arrives at its conclusions, which helps build trust in the system and meet regulatory and ethical requirements.
Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:
ISTQB CT-AI Syllabus (Section 2.7: Transparency, Interpretability, and Explainability)
'Explainability is considered to be the ease with which users can determine how the AI-based system comes up with a particular result'.
'Most users are presented with AI-based systems as 'black boxes' and have little awareness of how these systems arrive at their results. This ignorance may even apply to the data scientists who built the systems. Occasionally, users may not even be aware they are interacting with an AI-based system'.
ISTQB CT-AI Syllabus (Section 8.6: Testing the Transparency, Interpretability, and Explainability of AI-based Systems)
'Testing the explainability of AI-based systems involves verifying whether users can understand and validate AI-generated decisions. This ensures that AI systems remain accountable and do not make incomprehensible or biased decisions'.
Contrast with Other Options:
Autonomy (B): Autonomy relates to an AI system's ability to operate independently without human oversight. While decision-making is a key function of autonomy, the focus here is on demonstrating the reasoning behind decisions, which falls under explainability rather than autonomy.
Self-learning (C): Self-learning systems adapt based on previous data and experiences, which is different from making decisions understandable to humans.
Non-determinism (D): AI-based systems are often probabilistic and non-deterministic, meaning they do not always produce the same output for the same input. This can make testing and validation more challenging, but it does not relate to explaining the decision-making process.
Conclusion: Since the question explicitly asks about the characteristic under which decision-making falls when being demonstrated before deployment, explainability is the correct choice because it ensures that AI decisions are transparent, understandable, and accountable to stakeholders.
"AllerEgo" is a product that uses self-learning to predict the behavior of a pilot in combat situations across a variety of terrains and enemy aircraft formations. After training, the model was exposed to real-world data and was found to be performing poorly. Many data quality tests had been performed on the data to bring it into a shape fit for training and testing.
Which ONE of the following options is LEAST likely to describe the possible reason for the fall in performance, especially when considering the self-learning nature of the AI system?
SELECT ONE OPTION
Answer : A
A. The difficulty of defining criteria for improvement before the model can be accepted.
Defining criteria for improvement is a challenge in the acceptance of AI models, but it is not directly related to the performance drop in real-world scenarios. It relates more to the evaluation and deployment phase rather than affecting the model's real-time performance post-deployment.
B. The fast pace of change did not allow sufficient time for testing.
This can significantly affect the model's performance. If the system is self-learning, it needs to adapt quickly, and insufficient testing time can lead to incomplete learning and poor performance.
C. The unknown nature and insufficient specification of the operating environment might have caused the poor performance.
This is highly likely to affect performance. Self-learning AI systems require detailed specifications of the operating environment to adapt and learn effectively. If the environment is insufficiently specified, the model may fail to perform accurately in real-world scenarios.
D. There was an algorithmic bias in the AI system.
Algorithmic bias can significantly impact the performance of AI systems. If the model has biases, it will not perform well across different scenarios and data distributions.
Given the context of the self-learning nature and the need for real-time adaptability, option A is least likely to describe the fall in performance because it deals with acceptance criteria rather than real-time performance issues.
Which ONE of the following approaches to labelling requires the least time and effort?
SELECT ONE OPTION
Answer : B
Labelling Approaches: Among the options provided, pre-labeled datasets require the least time and effort because the data has already been labeled, eliminating the need for further manual or automated labeling efforts.
Reference: ISTQB_CT-AI_Syllabus_v1.0, Section 4.5 Data Labelling for Supervised Learning, which discusses various approaches to data labeling, including pre-labeled datasets, and their associated time and effort requirements.
An e-commerce developer built an application for automatic classification of online products in order to allow customers to select products faster. The goal is to provide more relevant products to the user based on prior purchases.
Which of the following factors is necessary for a supervised machine learning algorithm to be successful?
Answer : A
Supervised machine learning requires correctly labeled data to train an effective model. The learning process relies on input-output mappings where each training example consists of an input (features) and a correctly labeled output (target variable). Incorrect labeling can significantly degrade model performance.
Why Is Labeling Critical?
Supervised Learning Process
The algorithm learns from labeled data, mapping inputs to correct outputs during training.
If labels are incorrect, the model will learn incorrect relationships and produce unreliable predictions.
Quality of Training Data
The accuracy of any supervised ML model is highly dependent on the quality of labels.
Poorly labeled data leads to mislabeled training sets, resulting in biased or underperforming models.
Error Minimization and Model Accuracy
Incorrectly labeled data affects the confusion matrix, reducing precision, recall, and accuracy.
It leads to overfitting or underfitting, which decreases the model's ability to generalize.
Industry Standard Practices
Many AI development teams spend a significant amount of time on data annotation and quality control to ensure high-quality labeled datasets.
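The effect of label noise can be demonstrated with a small synthetic experiment (all data and numbers below are invented): train a simple 1-nearest-neighbour classifier on clean labels, then on the same data with 30% of the training labels flipped, and compare test accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class 1-D data: class 0 ~ N(0,1), class 1 ~ N(4,1).
X_train = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
y_test = np.array([0] * 100 + [1] * 100)

def one_nn(X, y, X_new):
    # 1-nearest-neighbour: predict the label of the closest training point
    idx = np.abs(X_new[:, None] - X[None, :]).argmin(axis=1)
    return y[idx]

clean_acc = (one_nn(X_train, y_train, X_test) == y_test).mean()

# Corrupt 30% of the training labels to simulate annotation errors.
noisy = y_train.copy()
flip = rng.choice(len(noisy), size=int(0.3 * len(noisy)), replace=False)
noisy[flip] = 1 - noisy[flip]
noisy_acc = (one_nn(X_train, noisy, X_test) == y_test).mean()

print(f"accuracy with clean labels: {clean_acc:.2f}")
print(f"accuracy with 30% flipped labels: {noisy_acc:.2f}")
```

The same model trained on mislabeled data scores noticeably lower on the untouched test set, which is the mechanism behind the syllabus point that incorrect labels lead the model to learn incorrect relationships.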
Why Are the Other Options Incorrect?
(B) Minimizing the amount of time spent training the algorithm (Incorrect)
While reducing training time is important for efficiency, the quality of training is more critical. A well-trained model takes time to process large datasets and optimize its parameters.
(C) Selecting the correct data pipeline for the ML training (Incorrect)
A good data pipeline helps, but it does not directly impact learning success as much as labeling does. Even a well-optimized pipeline cannot fix incorrect labels.
(D) Grouping similar products together before feeding them into the algorithm (Incorrect)
This describes clustering, which is an unsupervised learning technique. Supervised learning requires labeled examples, not just grouping of data.
Reference from ISTQB Certified Tester AI Testing Study Guide
Labeled data is necessary for supervised learning. 'For supervised learning, it is necessary to have properly labeled data.'
Data labeling errors can impact performance. 'Supervised learning assumes that the data is correctly labeled by the data annotators. However, it is rare in practice for all items in a dataset to be labeled correctly.'
Thus, option A is the correct answer, as correctly labeled data is essential for supervised machine learning success.
You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back.
This is an example of what type of behavior?
Answer : B
Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is unintended by its designers, leading to behavior that technically maximizes the defined reward but does not align with the intended objectives.
In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging collisions detected by bumper sensors. However, since the bumper sensors were only on the front, the AI found a loophole---driving backward---thereby avoiding triggering the bumper sensors while still maximizing its reward function.
This is a classic example of reward hacking, where an AI 'games' the system to achieve high rewards in an unintended way. Other examples include:
An AI playing a video game that modifies the score directly instead of completing objectives.
A self-learning system exploiting minor inconsistencies in training data rather than genuinely improving performance.
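The vacuum's loophole can be sketched as a mis-specified reward function (all numbers invented): speed is rewarded, but only front bumper hits are penalized, so a backward-driving policy collides freely and still scores higher.

```python
# Toy sketch of the mis-specified reward. The designer intended to
# penalize ALL collisions, but only front bumper hits are observable
# by the reward function - that gap is what gets "hacked".

def reward(speed, front_hits, back_hits):
    # back_hits is never used: the loophole the agent exploits
    return speed - 10 * front_hits

# Same speed, same number of collisions - only the direction differs.
forward_policy = reward(speed=5, front_hits=2, back_hits=0)
backward_policy = reward(speed=5, front_hits=0, back_hits=2)

print(forward_policy, backward_policy)
```

The backward policy maximizes the defined reward while violating the designer's actual objective (avoiding collisions), which is the essence of reward hacking.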
Reference from ISTQB Certified Tester AI Testing Study Guide:
Section 2.6 - Side Effects and Reward Hacking explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.
Definition of Reward Hacking in AI: 'The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective'
You have been developing test automation for an e-commerce system. One of the problems you are seeing is that object recognition in the GUI is having frequent failures. You have determined this is because the developers are changing the identifiers when they make code updates.
How could AI help make the automation more reliable?
Answer : A
Object recognition issues in test automation often arise when developers frequently change object identifiers in the GUI. AI can enhance the stability of GUI automation by:
Using multiple criteria for object identification
AI can track UI elements using multiple attributes such as XPath, label, ID, class, and screen coordinates rather than relying on a single identifier that may change over time.
This approach makes the automation less brittle and more adaptive to changes in the UI.
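A minimal sketch of this multi-attribute matching idea (the attribute names, element values, and scoring weights below are all invented for illustration): score each on-screen candidate against several recorded attributes and pick the best match, so a single changed identifier does not break the locator.

```python
# Attributes recorded when the test was first created.
RECORDED = {"id": "btn-checkout", "label": "Checkout",
            "xpath": "/form/div[2]/button", "position": (640, 480)}

# Weights reflecting how reliable each attribute tends to be.
WEIGHTS = {"id": 0.4, "label": 0.3, "xpath": 0.2, "position": 0.1}

def match_score(candidate):
    # Sum the weights of all attributes that still match the recording.
    return sum(w for attr, w in WEIGHTS.items()
               if candidate.get(attr) == RECORDED[attr])

# A developer renamed the id, but the other attributes still match,
# so the right element wins over an unrelated one.
candidates = [
    {"id": "btn-pay-now", "label": "Checkout",
     "xpath": "/form/div[2]/button", "position": (640, 480)},
    {"id": "btn-cancel", "label": "Cancel",
     "xpath": "/form/div[3]/button", "position": (700, 480)},
]
best = max(candidates, key=match_score)
print(best["label"])
```

Because identification degrades gracefully (losing one attribute lowers the score rather than causing an outright failure), the automation survives the identifier churn described in the question.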
Why Are the Other Options Incorrect?
B (Ignore unrecognizable objects to avoid failures): Ignoring objects instead of identifying them properly would lead to incomplete or incorrect test execution.
C (Dynamically name objects and alter source code): AI-based testing tools do not modify application source code; they work by adjusting the recognition strategy.
D (Anticipate developer changes and pre-alter automation code): While AI can adapt, it does not predict future changes to the GUI, making this option unrealistic.
Thus, Option A is the best answer, as AI tools enhance object recognition by dynamically selecting the most stable and persistent identification methods, improving test automation reliability.
Certified Tester AI Testing Study Guide Reference:
ISTQB CT-AI Syllabus v1.0, Section 11.6.1 (Using AI to Test Through the Graphical User Interface (GUI))
ISTQB CT-AI Syllabus v1.0, Section 11.6.2 (Using AI to Test the GUI).