Which of the following are use cases of generative adversarial networks?
Answer : A, B, C, D
Generative Adversarial Networks (GANs) are widely used in several creative and image generation tasks, including:
A. Photo repair: GANs can be used to restore missing or damaged parts of images.
B. Generating face images: GANs are known for their ability to generate realistic face images.
C. Generating a 3D model from a 2D image: GANs can be used in applications where 2D images are converted into 3D models.
D. Generating images from text: GANs can also generate images based on text descriptions, as seen in tasks like text-to-image synthesis.
All of the provided options are valid use cases of GANs.
HCIA AI
Deep Learning Overview: Discusses the architecture and use cases of GANs, including applications in image generation and creative content.
AI Development Framework: Covers the role of GANs in various generative tasks across industries.
Huawei Cloud EI provides knowledge graph, OCR, machine translation, and the Celia (virtual assistant) development platform.
Answer : A
Huawei Cloud EI (Enterprise Intelligence) provides a variety of AI services and platforms, including knowledge graph, OCR (Optical Character Recognition), machine translation, and the Celia virtual assistant development platform. These services enable businesses to integrate AI capabilities such as language processing, image recognition, and virtual assistant development into their systems.
AI inference chips need to be optimized and are thus more complex than those used for training.
Answer : B
AI inference chips are generally simpler than training chips because inference involves running a trained model on new data, which requires fewer computations compared to the training phase. Training chips need to perform more complex tasks like backpropagation, gradient calculations, and frequent parameter updates. Inference, on the other hand, mostly involves forward pass computations, making inference chips optimized for speed and efficiency but not necessarily more complex than training chips.
Thus, the statement is false because inference chips are optimized for simpler tasks compared to training chips.
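The forward-pass-only nature of inference versus the forward-plus-backward work of training can be seen in a minimal NumPy sketch. The network size, data values, and learning rate below are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny one-layer softmax model; sizes are hypothetical.
W = rng.normal(size=(4, 3))       # weight matrix
x = rng.normal(size=4)            # one input example
y_true = np.array([1.0, 0.0, 0.0])  # one-hot target

def forward(W, x):
    # Inference: a single forward pass (matrix multiply + softmax).
    z = x @ W
    e = np.exp(z - z.max())       # shift for numerical stability
    return e / e.sum()

def train_step(W, x, y_true, lr=0.1):
    # Training additionally needs the gradient (backpropagation)
    # and a parameter update - work that inference never performs.
    p = forward(W, x)
    grad = np.outer(x, p - y_true)  # d(cross-entropy)/dW for softmax
    return W - lr * grad

p = forward(W, x)               # inference: forward pass only
W2 = train_step(W, x, y_true)   # training: forward + backward + update
```

The contrast explains the hardware point: an inference chip only needs to accelerate the `forward` computation, while a training chip must also support gradient computation and weight updates.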
HCIA AI
Cutting-edge AI Applications: Describes the difference between AI inference and training chips, focusing on their respective optimizations.
Deep Learning Overview: Explains the distinction between the processes of training and inference, and how hardware is optimized accordingly.
HarmonyOS can provide AI capabilities for external systems only through the integrated HMS Core.
Answer : B
HarmonyOS provides AI capabilities not only through HMS Core (Huawei Mobile Services Core), but also through other system-level integrations and AI frameworks. While HMS Core is one way to offer AI functionalities, HarmonyOS also has native support for AI processing that can be accessed by external systems or applications beyond HMS Core.
Thus, the statement is false as AI capabilities are not limited solely to HMS Core in HarmonyOS.
HCIA AI
Introduction to Huawei AI Platforms: Covers HarmonyOS and the various ways it integrates AI capabilities into external systems.
The mean squared error (MSE) loss function cannot be used for classification problems.
Answer : A
The mean squared error (MSE) loss function is primarily used for regression problems, where the goal is to minimize the difference between the predicted and actual continuous values. For classification problems, where the target output is categorical (e.g., binary or multi-class labels), loss functions like cross-entropy are more suitable, as they are designed to handle the probabilistic interpretation of outputs in classification tasks.
Using MSE for classification tends to slow training: paired with sigmoid or softmax outputs, it yields small gradients for confidently wrong predictions, whereas cross-entropy penalizes such errors heavily and matches the probabilistic interpretation of the outputs.
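The difference in how the two losses penalize a wrong prediction can be illustrated with a small sketch. The probability values below are arbitrary, chosen only to show the trend:

```python
import numpy as np

# Toy binary case: the true label is 1, the model predicts probability p.
y_true = 1.0

def mse(p):
    # Squared error between target and predicted probability.
    return (y_true - p) ** 2

def cross_entropy(p):
    # Negative log-likelihood of the true class.
    return -np.log(p)

# As the prediction becomes confidently wrong (p -> 0), MSE saturates
# near 1 while cross-entropy grows without bound.
for p in (0.9, 0.5, 0.01):
    print(f"p={p:.2f}  MSE={mse(p):.4f}  CE={cross_entropy(p):.4f}")
```

The unbounded penalty of cross-entropy for confident misclassifications is one reason it is preferred over MSE for classification.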
Which of the following statements are true about the k-nearest neighbors (k-NN) algorithm?
Answer : B, D
The k-nearest neighbors (k-NN) algorithm is a non-parametric algorithm used for both classification and regression. In classification tasks, it typically uses majority voting to assign a label to a new instance based on the most common class among its nearest neighbors. The algorithm works by calculating the distance (often using Euclidean distance) between the query point and the points in the dataset, and then assigning the query point to the class that is most frequent among its k nearest neighbors.
For regression tasks, k-NN can predict the outcome based on the mean of the values of the k nearest neighbors, although this is less common than its classification use.
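The mechanics described above, Euclidean distances, majority voting for classification, and neighbor averaging for regression, fit in a short sketch. The data points and function name are illustrative, not from any particular library:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, mode="classify"):
    # Euclidean distance from the query point x to every training point.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k nearest
    neighbor_labels = y_train[nearest]
    if mode == "classify":
        # Majority vote among the k nearest neighbors.
        return Counter(neighbor_labels.tolist()).most_common(1)[0][0]
    # Regression: mean of the neighbors' target values.
    return neighbor_labels.mean()

# Toy dataset: two clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.0]), k=3))  # near the 0-cluster
```

Note that k-NN is a lazy learner: there is no training step, and all computation happens at query time.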
Which of the following algorithms presents the most chaotic landscape on the loss surface?
Answer : A
Stochastic Gradient Descent (SGD) presents the most chaotic landscape on the loss surface because it updates the model parameters after each individual training example, which introduces significant noise into the optimization process. This produces a less smooth, more erratic path toward a minimum than batch or mini-batch gradient descent, both of which average gradients over more samples and therefore give more stable updates.
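The noisiness of per-example updates compared with full-batch updates can be measured on a toy problem. The data, learning rate, and step counts below are illustrative assumptions for a 1-D linear regression:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: y = 2x + noise.
X = rng.uniform(-1, 1, 100)
y = 2 * X + rng.normal(0, 0.3, 100)

def loss(w):
    return np.mean((y - w * X) ** 2)

def sgd_path(lr=0.1, steps=100):
    # One-sample updates: each step uses a single (x_i, y_i) pair.
    w, path = 0.0, []
    for i in range(steps):
        xi, yi = X[i % len(X)], y[i % len(X)]
        w += lr * 2 * xi * (yi - w * xi)
        path.append(loss(w))
    return np.array(path)

def batch_path(lr=0.1, steps=100):
    # Full-batch updates: each step averages the gradient over all data.
    w, path = 0.0, []
    for _ in range(steps):
        w += lr * 2 * np.mean(X * (y - w * X))
        path.append(loss(w))
    return np.array(path)

# Step-to-step fluctuation of the loss: SGD's trace is much noisier.
print(np.std(np.diff(sgd_path())), np.std(np.diff(batch_path())))
```

The larger step-to-step variance of the SGD loss trace is exactly the "chaotic" behavior the question refers to; mini-batch methods sit between these two extremes.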