CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to use artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of policy pricing.
ABC has engaged a cloud provider to use and fine-tune its pre-trained, general-purpose large language model ("LLM"). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women's loan applications due primarily to women historically receiving lower salaries than men.
Each of the following steps would support fairness testing by the compliance team during the first month in production EXCEPT?
Answer: B
Providing the loan applicants with information about the model capabilities and limitations would not directly support fairness testing by the compliance team. Fairness testing focuses on evaluating the model's decisions for biases and ensuring equitable treatment across different demographic groups, rather than informing applicants about the model.
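The kind of fairness testing described above can be illustrated with a small sketch. The following is a hypothetical demographic-parity check of the sort a compliance team might run on production decisions; the field names (`gender`, `approved`) and the four-fifths threshold are illustrative assumptions, not details from the case study:

```python
# Minimal sketch of a fairness check: compare approval rates across
# demographic groups and flag disparities below the "four-fifths rule"
# threshold (a common heuristic, assumed here for illustration).

from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate for each demographic group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for d in decisions:
        totals[d["gender"]] += 1
        approvals[d["gender"]] += d["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of protected-group to reference-group approval rates.
    Values below 0.8 are commonly flagged for review."""
    return rates[protected] / rates[reference]

# Toy sample of model decisions (hypothetical data).
decisions = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 0},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 0}, {"gender": "M", "approved": 1},
]
rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates, "F", "M")
```

In this toy sample, women are approved at 0.25 versus 0.75 for men, giving a ratio of about 0.33, well below the 0.8 heuristic, which is exactly the kind of disparity the compliance team's monitoring is meant to surface.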
Which of the following is an example of a high-risk application under the EU Al Act?
Answer: C
The EU AI Act categorizes certain applications of AI as high-risk due to their potential impact on fundamental rights and safety. High-risk applications include those used in critical areas such as employment, education, and essential public services. A government-run social scoring tool, which assesses individuals based on their social behavior or perceived trustworthiness, falls under this category because of its profound implications for privacy, fairness, and individual rights. This contrasts with other AI applications like resume scanning tools or customer service chatbots, which are generally not classified as high-risk under the EU AI Act.
Scenario:
A European AI technology company was found to be non-compliant with certain provisions of the EU AI Act. The regulator is considering penalties under the enforcement provisions of the regulation.
According to the EU AI Act, which of the following non-compliance examples could lead to fines of up to €15 million or 3% of annual worldwide turnover (whichever is higher)?
Answer: B
The correct answer is B. The EU AI Act assigns a tiered penalty system based on the severity of the violation. A breach of obligations related to high-risk AI systems falls into the mid-tier category, triggering fines of up to €15 million or 3% of annual worldwide turnover, whichever is higher.
From the AIGP ILT Guide, EU AI Act Module:
"Providers of high-risk AI systems must comply with strict documentation, testing, monitoring, and registration obligations. Breaches of these result in significant fines of up to €15 million or 3% of turnover."
The AI Governance in Practice Report 2024 supports this:
"Non-compliance with obligations under Title III (high-risk systems) leads to financial penalties under Article 71(3) of the EU AI Act."
Note: The highest penalty tier (€35 million or 7% of annual worldwide turnover) applies to prohibited AI practices, not to obligations for high-risk systems.
A company developed AI technology that can analyze text, video, images, and sound to tag content, including the names of animals, humans, and objects.
What type of AI is this technology classified as?
Answer: B
A multi-modal model is an AI system that can process and analyze multiple types of data, such as text, video, images, and sound. This type of AI integrates different data sources to enhance its understanding and decision-making capabilities. In the given scenario, the AI technology that tags content including names of animals, humans, and objects falls under this category. Reference: AIGP Body of Knowledge, which outlines the capabilities and use cases of multi-modal models.
Scenario:
An organization is building a compliance program to ensure responsible AI deployment. It aims to align operations with AI risk frameworks and mitigate legal, ethical, and operational risks, while still promoting innovation.
Which of the following would be the least likely step for an organization to take when designing an integrated compliance strategy for responsible AI?
Answer: D
The correct answer is D. While modernization through software may support efficiency, it is not a foundational or essential component of designing an integrated strategy.
From the AI Governance in Practice Report 2024:
"Integrated strategies rely on senior management support, ethical reviews, and stakeholder engagement... The use of tools and platforms may come later as an operational enhancement."
Also confirmed in the AIGP Body of Knowledge:
"Key components of a governance framework include leadership buy-in, ethical analysis, and stakeholder input. Tools are supporting elements, not strategic drivers."
The OECD's Ethical AI Governance Framework is a self-regulation model that proposes to prevent societal harms by?
Answer: D
The OECD's Ethical AI Governance Framework aims to ensure that AI development and deployment are carried out ethically while fostering innovation. The framework includes principles like transparency, accountability, and human rights protections to prevent societal harm. It does not focus solely on technical design or post-deployment monitoring (C), nor does it establish industry-specific requirements (B). While explainability is important, the primary goal is to balance innovation with ethical considerations (D).
All of the following are penalties and enforcement mechanisms outlined in the EU AI Act EXCEPT?
Answer: C
The EU AI Act outlines specific penalties and enforcement mechanisms to ensure compliance with its regulations. Among these, fines for violations of the prohibitions on banned AI practices can be as high as €35 million or 7% of the offending organization's global annual turnover, whichever is higher. Proportional caps on fines apply to SMEs and startups to ensure fairness. The rules for general-purpose AI apply after a 12-month transition period, giving stakeholders time to comply. However, the Act itself contains no provision for an 'AI Pact' acting as a transitional bridge until the regulations are fully enacted, making option C the correct answer.