Cloud Kicks uses Einstein to generate predictions but is not seeing accurate results. What is a potential reason for this?
Answer : B
AI models rely on high-quality data to produce accurate and reliable predictions. Poor data quality, such as missing values, inconsistent formatting, or biased data, can negatively impact AI performance.
Option A (Incorrect): If Cloud Kicks is using Einstein AI, it is unlikely that they are using the wrong product, as Einstein is designed for predictive analytics. The issue is more likely related to data quality or model training.
Option B (Correct): Poor data quality is one of the most common reasons for inaccurate AI predictions. If the input data contains errors, biases, or incomplete information, the AI model will generate flawed insights. Regular data cleaning and preprocessing are essential for improving prediction accuracy.
Option C (Incorrect): Having too much data does not necessarily result in inaccurate predictions. In fact, more data can improve model performance if properly structured and cleaned. However, if the data is noisy or unstructured, it may lead to inconsistencies.
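The data-quality problems named above (missing values, inconsistent formatting) are the kind usually handled in a preprocessing step. The sketch below is a hypothetical illustration, not Salesforce or Einstein code: the record fields and values are invented to show how incomplete or inconsistently formatted rows might be cleaned before training.

```python
# Hypothetical records with the two issues named above:
# a missing value and inconsistent formatting.
records = [
    {"region": "EMEA", "revenue": "1200"},
    {"region": "emea ", "revenue": None},   # inconsistent case, missing revenue
    {"region": "APAC", "revenue": "950"},
]

def clean(rows):
    """Drop incomplete rows and normalize formatting."""
    cleaned = []
    for row in rows:
        if row["revenue"] is None:          # skip incomplete records
            continue
        cleaned.append({
            "region": row["region"].strip().upper(),  # normalize text formatting
            "revenue": float(row["revenue"]),         # enforce a numeric type
        })
    return cleaned

print(clean(records))
# Only complete, consistently formatted rows remain for the model to learn from.
```

Only two of the three rows survive cleaning; training on the raw rows instead would feed the model a missing value and two spellings of the same region.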
What is one way to achieve transparency in AI?
Answer : C
Transparency in AI refers to making AI decisions understandable and accountable to users and stakeholders. It involves explaining how AI models make decisions and ensuring that users can question or challenge AI outcomes.
Option A (Incorrect): While establishing an ethical and unbiased culture is essential for responsible AI development, it does not directly contribute to AI transparency. Transparency requires clear communication and user engagement.
Option B (Incorrect): Communicating AI goals and objectives is helpful but insufficient on its own. Transparency also includes revealing AI decision-making processes and allowing user oversight.
Option C (Correct): Allowing users to give feedback regarding AI inferences ensures transparency by making AI decision-making accountable. Users can report errors, biases, or misunderstandings, helping improve AI fairness and reliability.
What is an example of ethical debt?
Answer : A
Ethical debt refers to the long-term negative consequences of prioritizing speed or convenience over responsible AI development practices. Ethical debt accumulates when AI systems are deployed despite known ethical concerns, such as bias, privacy violations, or transparency issues.
Option A (Correct): Launching an AI feature after discovering harmful bias is a clear example of ethical debt because it disregards the ethical obligation to ensure fairness and non-discrimination in AI outcomes. Ignoring bias can lead to systemic issues that are difficult and costly to correct later.
Option B (Incorrect): Violating a data privacy law and failing to pay fines is a legal issue rather than an example of ethical debt. While related, ethical debt pertains more to AI decision-making and development choices.
Option C (Incorrect): Delaying an AI product launch to retrain an AI model is a responsible action that helps avoid ethical debt, rather than an example of it. This demonstrates an effort to mitigate bias and improve AI fairness before deployment.
How does poor data quality affect predictive and generative AI models?
Answer : A
Poor data quality significantly impacts the performance of predictive and generative AI models by leading to inaccurate and unreliable results. Factors such as incomplete data, incorrect data, or poorly formatted data can mislead AI models during the learning phase, causing them to make incorrect assumptions, learn inappropriate patterns, or generalize poorly to new data. This inaccuracy can be detrimental in applications where precision is critical, such as in predictive analytics for sales forecasting or customer behavior analysis. Salesforce emphasizes the importance of data quality for AI model effectiveness in their AI best practices guide, which can be reviewed on Salesforce AI Best Practices.
Which AI tool is a web of connections, guided by weights and biases?
Answer : A
Neural networks are a key AI tool designed as a web of interconnected nodes, similar to the human brain's structure. Each connection, or synapse, in a neural network is guided by weights and biases that are adjusted during the learning process. These weights and biases determine the strength and influence of one node over another, facilitating complex pattern recognition and decision-making processes. Neural networks are extensively used in machine learning for tasks like image and speech recognition, among others. For more on neural networks in the context of Salesforce AI, see the Salesforce AI documentation on Neural Networks.
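The "weights and biases" in the explanation above can be made concrete with a minimal sketch of a single artificial neuron (this is a generic illustration, not Salesforce code): each input is scaled by a connection weight, the results are summed, a bias shifts the total, and an activation function produces the output.

```python
# Minimal sketch of one node in a neural network: a weighted sum of
# inputs plus a bias, passed through a ReLU activation.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, z)  # ReLU: negative totals are clipped to zero

# Two-input neuron: weights set each connection's influence,
# the bias shifts the activation threshold.
out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(out)  # 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
```

During training, a learning algorithm adjusts these weights and biases so that the network's outputs move closer to the desired results, which is what lets the "web of connections" recognize patterns.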
What is an example of ethical debt?
Answer : B
Launching an AI feature after discovering a harmful bias is an example of ethical debt. Ethical debt is a term that describes the potential harm or risk caused by unethical or irresponsible decisions or actions related to AI systems. Ethical debt can accumulate over time and have negative consequences for users, customers, partners, or society. For example, launching an AI feature after discovering a harmful bias can create ethical debt by exposing users to unfair or inaccurate results that may affect their trust, satisfaction, or well-being.
Cloud Kicks wants to use Einstein Prediction Builder to determine a customer's likelihood of buying specific products; however, data quality is a...
How can data quality be assessed?
Answer : C
Data quality can be assessed by leveraging data quality apps from AppExchange. Data quality is the degree to which data is accurate, complete, consistent, relevant, and timely for the AI task. Data quality affects the performance and reliability of AI systems, as they depend on the quality of the data they learn from and use to make predictions. Leveraging data quality apps from AppExchange means using third-party applications or solutions that can help measure, monitor, or improve data quality in Salesforce.
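One of the dimensions listed above, completeness, is straightforward to measure. The sketch below is a hypothetical illustration of that single check (the record fields are invented, and real AppExchange apps score many more dimensions): it computes the share of field values that are actually populated.

```python
# Hypothetical completeness check: the fraction of non-missing
# field values across a set of records.
def completeness(rows, fields):
    total = len(rows) * len(fields)
    filled = sum(1 for row in rows
                 for f in fields
                 if row.get(f) not in (None, ""))
    return filled / total

contacts = [
    {"email": "a@example.com", "phone": "555-0100"},
    {"email": "", "phone": "555-0101"},          # missing email
    {"email": "c@example.com", "phone": None},   # missing phone
]
print(completeness(contacts, ["email", "phone"]))  # 4 of 6 values present
```

A score well below 1.0 on fields the model depends on is a signal to clean or enrich the data before using it for predictions.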