What is required for stages, without credentials, to limit data exfiltration after a storage integration and associated stages are created?
Answer : D
According to the Snowflake documentation, stages without credentials are external stages that use storage integrations to access data files in cloud storage without any credentials being supplied to Snowflake. Storage integrations are objects that define a trust relationship between Snowflake and a cloud provider, allowing Snowflake to authenticate and authorize access to the cloud storage. To limit data exfiltration after a storage integration and associated stages are created, the following account-level parameters can be set:
* REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION: This parameter enforces that all external stages must be created using a storage integration. This prevents users from creating external stages with inline credentials or URLs that point to unauthorized locations.
* REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION: This parameter enforces that all operations on external stages, such as PUT, GET, COPY, and LIST, must use a storage integration. This prevents users from performing operations on external stages with inline credentials or URLs that point to unauthorized locations.
* PREVENT_UNLOAD_TO_INLINE_URL: This parameter prevents users from unloading data from Snowflake tables to inline URLs that do not use a storage integration. This prevents users from exporting data to unauthorized locations.
Therefore, the correct answer is option D, which sets all these parameters to true. Option A is incorrect because it sets PREVENT_UNLOAD_TO_INLINE_URL to false, which allows users to unload data to inline URLs that do not use a storage integration. Option B is incorrect because it sets both REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION and REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION to false, which allows users to create and operate on external stages without using a storage integration. Option C is incorrect because it sets all the parameters to false, which does not enforce any restrictions on data exfiltration.
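As a sketch of how these controls are applied (they are account-level parameters, so an ACCOUNTADMIN or similarly privileged role is assumed):
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = TRUE;
ALTER ACCOUNT SET PREVENT_UNLOAD_TO_INLINE_URL = TRUE;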
The following commands were executed:
GRANT USAGE ON DATABASE PROD TO ROLE PROD_ANALYST;
GRANT USAGE ON DATABASE PROD TO ROLE PROD_SUPERVISOR;
GRANT ALL PRIVILEGES ON SCHEMA PROD.WORKING TO ROLE PROD_ANALYST;
GRANT ALL PRIVILEGES ON SCHEMA PROD.WORKING TO ROLE PROD_SUPERVISOR;
GRANT ROLE PROD_ANALYST TO USER A;
GRANT ROLE PROD_SUPERVISOR TO USER B;
What authority does each user have on the WORKING schema?
Answer : D
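Because GRANT ALL PRIVILEGES on a schema conveys every schema-level privilege the grantor can grant, and both roles received identical grants, users A and B end up with the same full authority on the WORKING schema through their respective roles. As a verification sketch (names taken from the commands above), the effective grants can be inspected with:
SHOW GRANTS ON SCHEMA PROD.WORKING;
SHOW GRANTS TO ROLE PROD_ANALYST;
SHOW GRANTS TO ROLE PROD_SUPERVISOR;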
What are benefits of creating and maintaining resource monitors in Snowflake? (Select THREE).
Answer : C, D, F
According to the Snowflake documentation, resource monitors are a feature that helps you manage and control Snowflake costs by monitoring and setting limits on your compute resources. Resource monitors do not consume any credits or add any load to the virtual warehouses they monitor. Resource monitors can also have multiple triggers that specify different actions (such as suspending or notifying) when certain percentages of the credit quota are reached. Resource monitors can be applied to either the entire account or a specific set of individual warehouses. The other options are not benefits of resource monitors. The cost of running a resource monitor is negligible, not 10% of a credit. Multiple resource monitors cannot be applied to a single virtual warehouse; only one resource monitor can be assigned to a warehouse at a time. Resource monitor governance is not tightly controlled; account administrators can enable users with other roles to view and modify resource monitors using SQL.
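As a sketch (monitor name, quota, and warehouse name are illustrative), a resource monitor with multiple triggers can be created and assigned to a single warehouse as follows:
CREATE RESOURCE MONITOR nightly_load_monitor WITH
  CREDIT_QUOTA = 100
  TRIGGERS ON 75 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE load_wh SET RESOURCE_MONITOR = nightly_load_monitor;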
A company has set up a new Snowflake account. An Identity Provider (IdP) has been configured for both Single Sign-On (SSO) and SCIM provisioning.
What maintenance is required to ensure that the SCIM provisioning process continues to operate without errors?
Answer : C
According to the Snowflake documentation, the authentication process for SCIM provisioning uses an OAuth Bearer token, and this token is valid for six months. Customers must keep track of their authentication token and can generate a new token on demand. If the token expires, the SCIM provisioning process will fail, so the token must be regenerated before it expires. The other options are not required for SCIM provisioning.
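As a sketch (the security integration name is illustrative), a replacement token can be generated on demand with the system function Snowflake provides for SCIM integrations, and the new value then updated in the IdP's SCIM connector:
SELECT SYSTEM$GENERATE_SCIM_ACCESS_TOKEN('OKTA_PROVISIONING');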
When does auto-suspend occur for a multi-cluster virtual warehouse?
Answer : C
According to the Multi-cluster Warehouses documentation, auto-suspend is a feature that allows a warehouse to automatically suspend itself after a specified period of inactivity. For a multi-cluster warehouse, auto-suspend applies to the entire warehouse, not to individual clusters. Therefore, auto-suspend occurs when the minimum number of clusters is running and there is no activity for the specified period of time. The other options are incorrect because:
* A. Auto-suspend does not occur when there has been no activity on any cluster for the specified period of time. This would imply that each cluster has its own auto-suspend timer, which is not the case. The warehouse has a single auto-suspend timer that is reset by any activity on any cluster.
* B. Auto-suspend does not occur after a specified period of time once an additional cluster has started and the warehouse is running the maximum number of clusters. This would imply that the auto-suspend timer is affected by the number of clusters running, which is not the case. The auto-suspend timer is only affected by activity on the warehouse, regardless of the number of clusters running.
* D. Auto-suspend does apply for multi-cluster warehouses, as explained above. It is a feature that can be enabled or disabled for any warehouse, regardless of the number of clusters.
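As a configuration sketch (warehouse name and values are illustrative), the single warehouse-level timer and the cluster range are both set on the warehouse itself:
ALTER WAREHOUSE etl_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  AUTO_SUSPEND = 300; -- seconds of inactivity before the entire warehouse suspends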
What roles or security privileges will allow a consumer account to request and get data from the Data Exchange? (Select TWO).
Answer : C, D
According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share. The other options are incorrect because:
* A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SYSADMIN role is a pre-defined role that can create and manage warehouses, databases, and other objects, but it lacks the account-level privileges reserved for the ACCOUNTADMIN role, such as managing users, roles, and shares.
* B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that manages object grants globally and creates, monitors, and manages users, roles, and security objects such as network policies, but it has no inherent privileges on data objects such as databases, schemas, and tables.
* E. IMPORT PRIVILEGES and SHARED DATABASE are not valid privilege names in Snowflake. The correct privileges are IMPORT SHARE and CREATE DATABASE, as explained above.
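As a sketch (role, database, provider account, and share names are illustrative), an administrator could grant the two required privileges, after which the consumer role can create a database from the share:
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_consumer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;
-- then, executed by a user with the data_consumer role:
CREATE DATABASE shared_sales FROM SHARE provider_acct.sales_share;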
A Snowflake customer is experiencing higher costs than anticipated while migrating their data warehouse workloads from on-premises to Snowflake. The migration workloads have been deployed on a single warehouse and are characterized by a large number of small INSERTs rather than bulk loading of large extracts. That single warehouse has been configured as a single-cluster 2XL because there are many parallel INSERTs scheduled during nightly loads.
How can the Administrator reduce the costs, while minimizing the overall load times, for migrating data warehouse history?
Answer : C
According to the Snowflake Warehouse Cost Optimization blog post, one of the strategies to reduce the cost of running a warehouse is to use a multi-cluster warehouse with auto-scaling enabled. This allows the warehouse to automatically adjust the number of clusters based on the concurrency demand and the queue size. A multi-cluster warehouse can also be configured with a minimum and maximum number of clusters, as well as a scaling policy to control the scaling behavior. This way, the warehouse can handle the parallel load queries efficiently without wasting resources or credits. The blog post also suggests using a smaller warehouse size, such as SMALL or XSMALL, for loading data, as it can perform better than a larger warehouse size for small INSERTs. Therefore, the best option to reduce the costs while minimizing the overall load times for migrating data warehouse history is to keep the warehouse as a SMALL or XSMALL and configure it as a multi-cluster warehouse to handle the parallel load queries. The other options are incorrect because:
* A. Deploying another 2XL warehouse to handle a portion of the load queries will not reduce the costs, but increase them. It will also introduce complexity and potential inconsistency in managing the data loading process across multiple warehouses.
* B. Changing the 2XL warehouse to 4XL will not reduce the costs, but increase them. It will also provide more compute resources than needed for small INSERTs, which are not CPU-intensive but I/O-intensive.
* D. Spreading the INSERTs across several tables will not reduce the costs, but increase them. It will also create unnecessary data duplication and fragmentation, which will affect query performance and data quality.
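As a sketch of the recommended setup (warehouse name and cluster limits are illustrative), a small multi-cluster warehouse with auto-scaling could replace the 2XL for the nightly loads:
CREATE WAREHOUSE migration_load_wh
  WAREHOUSE_SIZE = XSMALL
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 10
  SCALING_POLICY = STANDARD
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;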