What information is required from the Identity Provider (IdP) to enable federated authentication in Snowflake? (Select TWO).
Answer : B, D
To enable federated authentication (i.e., SSO via SAML 2.0) in Snowflake, an integration with an Identity Provider (IdP) must be configured, and Snowflake needs specific information from the IdP to complete that configuration.
Required Information from IdP:
URL Endpoint for SAML Requests (B)
This is often referred to as the SSO URL or SAML 2.0 Endpoint (HTTP).
It's the URL that Snowflake redirects users to for authentication.
In Snowflake's SAML configuration, this is supplied as the SAML2_SSO_URL property of the security integration (SAML2_ISSUER is a separate value, the IdP's entity ID).
Authentication Certificate (D)
This is the X.509 certificate issued by the IdP.
It's used by Snowflake to validate the digital signature of the SAML assertions sent by the IdP.
It ensures that the SAML response is authentic and not tampered with.
Why Other Options Are Incorrect:
A. IdP account details
Not needed. Snowflake doesn't require credentials or internal details from the IdP. It relies on assertions sent via SAML, not stored accounts.
C. SAML response format
Snowflake adheres to the SAML 2.0 standard and expects a compliant response format. There is no need to specify the format explicitly; it is part of the standard protocol.
E. IdP encryption key
Not required by Snowflake. Snowflake verifies SAML assertions by validating their digital signature against the IdP's public certificate; a separate encryption key is not needed.
SnowPro Administrator Reference:
Snowflake Documentation: Federated Authentication Setup
https://docs.snowflake.com/en/user-guide/security-fed-auth-use
https://docs.snowflake.com/en/user-guide/security-fed-auth-config
Required IdP Metadata for Snowflake SAML Configuration:
SAML2_SSO_URL: SAML 2.0 POST binding endpoint
SAML2_X509_CERT: Public cert used to validate IdP signatures
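For illustration, here is a minimal sketch of a SAML2 security integration capturing both required pieces of information. The integration name, URLs, and certificate value are placeholders, not values from the question:
USE ROLE ACCOUNTADMIN;
CREATE SECURITY INTEGRATION my_idp_saml
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com/metadata'   -- IdP entity ID
  SAML2_SSO_URL = 'https://idp.example.com/sso/saml'  -- endpoint Snowflake redirects users to (B)
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = 'MIIC...';                        -- IdP signing certificate, used to validate assertions (D)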
An Administrator loads data into a staging table every day. Once loaded, users from several different departments perform transformations on the data and load it into different production tables.
How should the staging table be created and used to MINIMIZE storage costs and MAXIMIZE performance?
Answer : B
According to the Snowflake documentation, a transient table does not support Fail-safe and supports at most one day of Time Travel, so it avoids most of the storage costs associated with maintaining historical versions of the data and disaster-recovery backups. Setting its retention time to 0 days disables Time Travel entirely, meaning dropped or truncated data is not recoverable. Creating the staging table as a transient table with a retention time of 0 days therefore minimizes storage costs while maximizing performance: the data is loaded and transformed once, then discarded after the production tables are populated.
Option A is incorrect because an external table, which references data files in a cloud storage location, adds cost and complexity for data transfer and synchronization and does not offer the best performance for loading and transformation. Option C is incorrect because a temporary table is automatically dropped when the session ends and is visible only within that session, which can cause data loss or inconsistency if the session is interrupted before the production tables are populated and conflicts with several departments needing access. Option D is incorrect because a permanent table supports Time Travel and Fail-safe, incurring additional storage costs for historical versions and backups.
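As a sketch of option B, a transient staging table with Time Travel disabled might be created as follows; the table and column names are hypothetical:
CREATE TRANSIENT TABLE staging_daily_load (
  id NUMBER,
  payload VARIANT,
  loaded_at TIMESTAMP_NTZ
)
DATA_RETENTION_TIME_IN_DAYS = 0;  -- no Time Travel storage; transient tables also have no Fail-safe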
What is required for stages, without credentials, to limit data exfiltration after a storage integration and associated stages are created?
Answer : D
According to the Snowflake documentation, stages without credentials are external stages that use storage integrations to access data files in cloud storage without supplying any credentials to Snowflake. Storage integrations are objects that define a trust relationship between Snowflake and a cloud provider, allowing Snowflake to authenticate and authorize access to the cloud storage. To limit data exfiltration after a storage integration and its associated stages are created, the following account-level parameters can be set:
* REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION: This parameter enforces that all external stages must be created using a storage integration. This prevents users from creating external stages with inline credentials or URLs that point to unauthorized locations.
* REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION: This parameter enforces that all operations on external stages, such as PUT, GET, COPY, and LIST, must use a storage integration. This prevents users from performing operations on external stages with inline credentials or URLs that point to unauthorized locations.
* PREVENT_UNLOAD_TO_INLINE_URL: This parameter prevents users from unloading data from Snowflake tables to inline URLs that do not use a storage integration. This prevents users from exporting data to unauthorized locations.
Therefore, the correct answer is option D, which sets all these parameters to true. Option A is incorrect because it sets PREVENT_UNLOAD_TO_INLINE_URL to false, which allows users to unload data to inline URLs that do not use a storage integration. Option B is incorrect because it sets both REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION and REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION to false, which allows users to create and operate on external stages without using a storage integration. Option C is incorrect because it sets all the parameters to false, which does not enforce any restrictions on data exfiltration.
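A sketch of the configuration described in option D, run with ACCOUNTADMIN since these are account-level parameters:
USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION = TRUE;
ALTER ACCOUNT SET REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION = TRUE;
ALTER ACCOUNT SET PREVENT_UNLOAD_TO_INLINE_URL = TRUE;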
A Snowflake account is configured with SCIM provisioning for user accounts and has bi-directional synchronization for user identities. An Administrator with access to SECURITYADMIN uses the Snowflake UI to create a user by issuing the following commands:
use role USERADMIN;
create or replace role DEVELOPER_ROLE;
create user PTORRES PASSWORD = 'hello world!' MUST_CHANGE_PASSWORD = FALSE
default_role = DEVELOPER_ROLE;
The new user named PTORRES successfully logs in, but sees a default role of PUBLIC in the web UI. When attempted, the following command fails:
use DEVELOPER_ROLE;
Why does this command fail?
Answer : C
According to the Snowflake documentation, creating a user with a DEFAULT_ROLE does not automatically grant that role to the user. The role must be explicitly granted by its owner or by a role with sufficient privileges. Here the USERADMIN role, which created DEVELOPER_ROLE, must explicitly grant DEVELOPER_ROLE to the new user PTORRES using the GRANT ROLE command; until then, PTORRES cannot use DEVELOPER_ROLE and sees the default role PUBLIC in the web UI. Option A is incorrect because DEVELOPER_ROLE does not need to be granted to SYSADMIN before PTORRES can use it. Option B is incorrect because a new role takes effect as soon as it is created and granted; it does not depend on the USERADMIN role logging out. Option D is incorrect because the new role is created and managed in Snowflake and is not affected by identity provider synchronization.
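The missing statement in the scenario would look like this, using the role and user names from the question:
USE ROLE USERADMIN;  -- owner of DEVELOPER_ROLE, so it can grant the role
GRANT ROLE DEVELOPER_ROLE TO USER PTORRES;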
What are benefits of using Snowflake organizations? (Select TWO).
Answer : B, E
According to the Snowflake documentation, organizations are a feature that links the accounts owned by a business entity, simplifying account management and billing, replication and failover, data sharing, and other administration tasks. Benefits of using organizations include:
* Administrators can monitor and understand usage across all accounts in the organization using the ORGANIZATION_USAGE schema, which provides historical usage data for every account via views in the shared SNOWFLAKE database. This helps optimize costs and performance across the organization.
* Administrators can create accounts in any available cloud provider or region using the CREATE ACCOUNT command, which allows specifying the cloud platform and region for the new account. This helps meet the business needs and compliance requirements of the organization.
Option A is incorrect because administrators cannot change Snowflake account editions on demand; an edition change must be requested through Snowflake Support. Option C is incorrect because organizations do not by themselves simplify data movement across accounts; database replication must be enabled for both the source and target accounts, and the ALTER DATABASE ... ENABLE REPLICATION TO ACCOUNTS command must be used to promote a local database to serve as the primary and enable replication to the targets. Option D is incorrect because user administration is not simplified across all accounts in the organization; users, roles, and privileges must still be created and managed per account, unless a federated method such as SSO or SCIM is used.
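For illustration, a sketch of both correct benefits; the account name, credentials, and region below are placeholders:
USE ROLE ORGADMIN;
-- (E) Create an account in a chosen cloud provider and region
CREATE ACCOUNT emea_analytics
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = '<placeholder>'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE
  REGION = AWS_EU_WEST_1;

-- (B) Monitor usage across all accounts in the organization
SELECT account_name, usage_date, usage_in_currency
FROM SNOWFLAKE.ORGANIZATION_USAGE.USAGE_IN_CURRENCY_DAILY
ORDER BY usage_date DESC;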
A large international company with many operating regions requires data to be shared bi-directionally among all offices (head office to regional offices and regional offices among themselves). This company is a Snowflake account holder with European operations deployed in Microsoft Azure (single region) while North American regional offices are using AWS (single region) as their deployment cloud. This setup is required to comply with Personal Identifiable Information (PII) regulations in some of the European countries. The corporate head office is in Europe.
How can this data be shared bi-directionally, while MINIMIZING costs?
Answer : D
According to the Snowflake documentation, data sharing allows selected objects in a database in one account to be shared with other accounts without copying or transferring any data. Direct data sharing works within a region; to share across regions or cloud platforms, the data must first be replicated. Data replication copies objects from a source account to one or more target accounts in the same organization, providing read-only access to the replicated objects, and incurs additional storage and transfer costs for the replicated data.
Therefore, the most cost-effective way to share data bi-directionally among all offices is to use data sharing among offices in the same region, which requires no replication or extra storage, and to use replication only between the continents, which provides near real-time access to the shared data.
Option A is incorrect because replicating everywhere would increase the storage and compute costs associated with the replicated data. Option B is incorrect because using the PUT command to move files to an Amazon S3 bucket and Azure Blob storage, plus an external file management application within the corporate VPC, forgoes the benefits of Snowflake's data sharing and replication features and adds cost and complexity for transfer and synchronization. Option C is incorrect because consolidating all Snowflake accounts into a single region would violate the PII regulations of some European countries and would add cost and complexity for data migration.
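A sketch of the cross-cloud replication portion of option D; the database and account identifiers are placeholders:
-- On the source account (Azure, EU): promote the database and enable replication
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.aws_na_office;

-- On the target account (AWS, North America): create and refresh the secondary database
CREATE DATABASE sales_db AS REPLICA OF myorg.azure_eu_hq.sales_db;
ALTER DATABASE sales_db REFRESH;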
A team of developers created a new schema for a new project. The developers are assigned the role DEV_TEAM which was set up using the following statements:
USE ROLE SECURITYADMIN;
CREATE ROLE DEV_TEAM;
GRANT USAGE, CREATE SCHEMA ON DATABASE DEV_DB01 TO ROLE DEV_TEAM;
GRANT USAGE ON WAREHOUSE DEV_WH TO ROLE DEV_TEAM;
Each team member's access is set up using the following statements:
USE ROLE SECURITYADMIN;
CREATE ROLE JDOE_PROFILE;
CREATE USER JDOE LOGIN_NAME = 'JDOE' DEFAULT_ROLE = 'JDOE_PROFILE';
GRANT ROLE JDOE_PROFILE TO USER JDOE;
GRANT ROLE DEV_TEAM TO ROLE JDOE_PROFILE;
New tables created by any of the developers are not accessible by the team as a whole.
How can an Administrator address this problem?
Answer : C
According to the Snowflake documentation, future grants automatically grant privileges on future objects of a specified type created in a database or schema. By setting up future grants on the newly-created schema, the Administrator ensures that any tables the developers create there are accessible to the DEV_TEAM role, without granting privileges on each table individually, as the sketch below shows. Option A is incorrect because granting ownership of the schema to DEV_TEAM does not grant privileges on the tables inside it, only on the schema itself. Option B is incorrect because usage on the virtual warehouse DEV_WH affects only the ability to run the warehouse, not access to tables. Option D is incorrect because a managed-access schema only centralizes grant management with the schema owner; it still requires explicit grants for each table.
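The fix in option C would look something like this, using the names from the question; the schema name is a placeholder:
USE ROLE SECURITYADMIN;  -- future grants require the MANAGE GRANTS privilege
GRANT USAGE ON SCHEMA DEV_DB01.PROJECT_SCHEMA TO ROLE DEV_TEAM;
GRANT SELECT, INSERT ON FUTURE TABLES IN SCHEMA DEV_DB01.PROJECT_SCHEMA TO ROLE DEV_TEAM;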