Snowflake Interview Questions and Answers
Snowflake Overview and Architecture
Explain the Snowflake architecture?
What is the Snowflake Data Warehouse?
How does Snowflake Work?
What are the three layers of Snowflake architecture?
Is Snowflake an MPP database?
Explain the different table types available in Snowflake?
Which Snowflake edition should you use if you want to enable time travel for up to 90 days?
What are Micro-partitions?
Can you create Transient Views in Snowflake?
Explain the differences and similarities between Transient and Temporary tables?
By default, clustering keys are created for every table; how can you disable this option?
What is the default type of table created in Snowflake?
How many servers are present in an X-Large warehouse?
Since Snowflake uses one of the cloud providers (such as AWS or Azure) as part of its architecture, why can't the AWS database Amazon Redshift be used instead of the Snowflake warehouse?
What view types can be created in Snowflake but not in traditional databases?
Is Snowflake a Data Lake?
What are the key benefits you have noticed after migrating to Snowflake from a traditional on-premise database?
When you execute a query, how does Snowflake retrieve the data compared to traditional databases?
Explain the difference between External Stages and Internal Named Stages?
Explain the difference between User and Table Stages?
You are working in an investment bank and want to explore Snowflake, so you decided to create a free Snowflake account. Your manager has instructed you to use the Virtual Private Snowflake edition for the free trial, as this edition provides dedicated servers for your company. What do you suggest?
What are the constraints which are enforced in Snowflake?
What is unique about Snowflake vs. other warehouses?
Real-Time Scenarios
You have created a Virtual Warehouse with a warehouse size of 2X-Large, extracted data from different data sources, and applied address validation and standardization using third-party tools before loading the data into the warehouse. As part of a compliance requirement, you need to compare the source address with the address loaded in the target after applying the standardization, and load the records which don't match into a temporary table. The temporary table created in the Prod environment has to be cloned to the Test environment for the development team to review the data and then provide the results to the business team. What is your recommendation?
You have recently created your Snowflake account and created a few jobs which extract data from SAP HANA. During one of the product release tests, there were some failures due to which some of the virtual warehouses (2X-Large) are not available. What do you recommend?
You have observed that a stored procedure which is executed daily at 7 AM as part of your batch process is consuming resources, the CPU I/O is showing as 90%, and the other jobs being executed are impacted by the stored procedure. How can you quickly resolve the issue with the stored procedure?
You have migrated from Teradata to Snowflake. One of the main issues you faced in the old system is with the MPI system data. The MPI data is sent by the source system daily at 01:00 AM, and the ETL process takes around 5-6 hours and loads the data into the target tables between 06:00 AM and 07:00 AM. After the ETL process is complete, no other process modifies the data in these tables until the business users check and confirm the Ledger Transactions. After the users confirm, the indicators in the target table are updated and the data is loaded to the downstream tables. The issue faced by the business users while accessing the data in Teradata is that each user has to wait 1-2 hours to get the required General Ledger stats, and sometimes when multiple queries are executed the CPU I/O usage is high. Business users want to get the query results immediately, as these are static SQLs which they use on a daily basis. How can these issues be fixed in the newly migrated Snowflake database?
You have migrated from Teradata to Snowflake. In the old system (Teradata), a few ETL batch loads are scheduled to execute using the Teradata TPump load utility; TPump uses row hash locks, meaning users can run queries while it is updating the Teradata warehouse. In the new system (Snowflake), you should also use the above approach, i.e., users should be able to access the data without any issues while the loads are being executed on the Snowflake warehouses. What is your recommendation?
You are using the Snowflake connector in Informatica Cloud (a data integration tool) to process some data as per your batch requirements. You have extracted data from different data sets and loaded it into the stage tables; from the stage tables the data will be loaded into your warehouse. The data in the stage tables is always truncated and reloaded for every load. In Snowflake, you can define the stage table type as ________.
Some queries are executing on a warehouse and you have executed an ALTER WAREHOUSE statement to resize the warehouse. How will this affect the queries which are already in the execution state?
It's a best practice to disable fail-safe for temporary tables; these tables exist only for the duration of a session and are not queryable by any other user, and disabling fail-safe will help reduce fail-safe storage for temporary tables. What do you recommend?
Your company has recently procured the Snowflake Standard edition. As per the initial plan, you intended to migrate applications one by one and then upgrade Snowflake to the Enterprise edition, but all the applications are dependent on each other, so you have migrated all the applications to the Standard edition at the same time. As queries are submitted to the warehouse, Snowflake has queued most of the queries due to insufficient resources, which is causing issues for users accessing different applications. How can you resolve the above issue at the earliest?
A new business analyst has joined your project. As part of the on-boarding process, you have sent him some queries to generate reports. The query took around 5 minutes to execute, but the same query, when executed by other business analysts, returned the results immediately. What could be the issue?
You are working in an insurance company, and you have planned a major deployment on the weekend which includes extracting historical data from PowerExchange and loading it into one of the Snowflake database tables. The load took around 20 hours to complete, and the data was validated to be released to the users on Monday morning so that they can complete the review as part of the compliance process for the newly launched MedSupp policies. One of the incremental ETL jobs which you deployed as part of this major deployment has the Truncate Target Table option enabled, and the data which was loaded from PowerExchange was deleted when the job executed on Monday. What is the best approach to recover the accidentally deleted historical data at the earliest?
You have created a warehouse using the command: create or replace warehouse OriginalWH initially_suspended=true; What will be the size of the warehouse?
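For reference, a minimal sketch of the command in question — when WAREHOUSE_SIZE is omitted, Snowflake applies its documented default size (XSMALL):

```sql
-- No WAREHOUSE_SIZE specified, so the default (XSMALL) applies.
CREATE OR REPLACE WAREHOUSE OriginalWH INITIALLY_SUSPENDED = TRUE;

-- Verify: the "size" column of the output shows X-Small.
SHOW WAREHOUSES LIKE 'OriginalWH';
```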
You are changing the scaling policy for a warehouse from Standard to Economy. You want to make sure the SQL statements from the application can be queued for only 180 seconds, and any queries which run for more than 360 seconds should be canceled by the system. Which parameters should you configure for this requirement?
You are executing some queries on a Medium-size warehouse, and the queries are executing for a longer period of time than expected. You are planning to resize the warehouse to X-Large. Can you resize the warehouse while the queries are still executing?
You have created an external table (E_Prev_MPI) into which you have loaded all the historical data from an MPI source system. You need to join the E_Prev_MPI table with one of the tables in the warehouse (W_Curr_MPI) which has the current snapshot of data, and if there are any matching records, you need to update the E_Prev_MPI.Matched column to 'Yes'. There are a lot of performance issues while performing the update, so you have created a partition on the E_Prev_MPI table. Is this the best approach?
You are working in a medical services company, as per the guidelines of the legal team, any objects containing PII data must not be visible to those who do not require access to the PII data. What is the best approach for the above requirement?
You are working in a major telecom company. You collect transactional data from different switches which generate a huge (1 TB) CDR (call detail records) volume every day. All the CDR records are loaded into a summary table (which is present in a Snowflake warehouse of size 4X-Large), and different reports are generated based on the daily revenue from the calls for each region. The queries to generate the daily reports, which are based on the transaction date and region, are executing for a long time. What is the best approach to optimize the queries so the reports are generated faster?
Your organization has planned to procure Snowflake and has decided to migrate the applications in a phase-by-phase manner. In the first phase, you have planned to include some non-critical applications, and the requests from these applications can be queued for up to 24 hours until they are processed. You want to keep the costs low in the first phase, which can be increased going forward. Select the best warehouse for the above requirement.
Tableau reports are configured to query the data based on joins across multiple Snowflake tables which have a size of 120 GB each; the reports will be accessed by senior management to make critical decisions. Some of the users accessing the reports have faced severe performance issues loading the dashboard and accessing the reports. What can be done so that the users get fast response times when accessing the dashboards and reports?
You are trying to debug a production issue due to which some of the reports are showing incorrect numbers. The issue is that the data loaded in a summary table for a particular policy (AMT00877TR5) is incorrect. The table from which the data is fetched is SCD Type 1, and by the time you checked the data, the tables were already updated with the latest data. You are not sure about the root cause of the issue. How can you check the data before it was updated?
You have created some reports which access a huge volume of data. The reports are configured to perform range and equality searches, and business users generate the reports on the first business day of every month. The reports are generated based on the data available in the Snowflake tables, and the report generation process is very slow. You have been instructed not to add any more storage costs due to some project constraints. What can be done to improve the performance of the reports?
A query is executed from the client, the query result exists in the result cache, and the underlying data has not changed. The query results are returned from ________.
A manufacturing company decided to implement data sharing and share data about the progress of orders directly with the consumers of their products. However, a customer must only be able to see the orders they have placed. How can this be achieved?
You have scheduled a job to recluster a table on the weekend, but the DBA team has suspended all the virtual warehouses for the weekend. What error will you get when your job is triggered?
You have an ETL process which is currently executed on Oracle, installed on a single cluster; the process starts at 7 PM and takes around 10 hours to complete. You are now migrating to Snowflake, and you should provide a clear estimate of how much time the job will take to complete in the new Snowflake Prod environment. Your manager is concerned about the credits ($$) that will be charged by Snowflake. How will you provide the stats for this?
The batch load will be completed at 3 AM; you have kept a buffer of 2 hours and scheduled the clone jobs to copy data from Prod to Test every Monday morning at 5 AM. Due to some issue, the batch process ran until 7 AM and you did not hold the clone jobs. When the clone jobs are executed, how will the data be cloned from Prod to Test while the data load is in progress?
You have created a network policy but the policy is not enforced in Snowflake, what could be the issue?
You have recently joined a company which is using a Teradata database. Some of the architects proposed migrating the database from Teradata to Snowflake, but the customer is not clear on the benefits of moving to Snowflake, as both Teradata and Snowflake are massively parallel processing (MPP) systems. What is your recommendation to the customer on using Snowflake compared to Teradata?
Most customers who use Snowflake talk about zero-copy cloning equating to zero-cost development. Do you agree with this?
Why does Snowflake have the option to create primary and foreign key constraints when these cannot be enforced?
You have created a standard multi-cluster warehouse with the maximum clusters set to 10 and the minimum clusters set to 3. Let's say the warehouse is using 8 clusters, users have executed several queries which are all cached, and users are able to see the cached results faster. Now the warehouse has scaled down from 8 clusters to 4. Will the cache files be reused when the users execute the same queries?
You are performing some data loads using Snowpipe. The load is taking longer than expected, so you stopped the existing load and increased the size of the virtual warehouse to X-Large. When you restart the load, does it resume from the point where it was last stopped?
In Snowflake, you can use the SAMPLE clause to limit the number of records from a table. What are some of the use cases where you have used the SAMPLE clause, apart from fetching sample records from a table?
You have around 150 tables in Snowflake, and you need to get the names of the corresponding table stages and send them to the downstream team. How can you get the table stage names?
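One possible sketch: every table automatically has a table stage addressed as @%<table_name>, so the stage names can be derived from the table list (the schema name below is an assumption):

```sql
-- Each table has an implicit table stage, addressed as @%<table_name>.
SELECT '@%' || table_name AS table_stage_name
FROM information_schema.tables
WHERE table_schema = 'PUBLIC'      -- hypothetical schema
  AND table_type = 'BASE TABLE';
```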
In Snowflake, you can limit the records fetched by using the SAMPLE clause. Which SAMPLE clause will impact query performance, and why?
You are loading data into a table from which healthcare reports are generated. The load doesn't have any issues and completes in less than 2 minutes, but when users access the reports there are severe performance issues. The reports are filtered by two specific columns. You have suggested using clustering keys or Search Optimization, but the Snowflake administrator is not in favor of creating clustering keys, Search Optimization, or materialized views. How can you solve this issue?
Billing
How does Snowflake charge for data storage?
What is the difference between Replication and Cloning?
In Secure Data Sharing, who pays for the compute and storage resources?
You are a Data Provider, and you are sharing your data to non-Snowflake users. How can you control the data usage by the Data Consumers?
You have just procured Snowflake and created 4 warehouses. How much will Snowflake charge for this?
You are using Snowflake every day throughout the year for 3 hours, loading data using a virtual warehouse of size X-Small. You are now trying to estimate the yearly credits of your Snowflake usage; based on the ETL schedule, your ETL analyst has specified that Snowflake was used for 260 days. How many credits are charged by Snowflake for this usage?
Does Snowflake charge any credits when you perform re-clustering on a table?
You have used a Snowflake trial account and decided to proceed with migrating your data to the Snowflake Enterprise edition. You have clear stats on the required storage. Which purchase plan do you recommend?
How did you optimize the incurred costs in Snowflake?
Explain how On-Demand vs. Pre-Purchased Capacity will impact your project budget?
You need to provide high-level estimates on the cost of using Snowflake with different cloud providers in different regions. Which utility can you use to get these stats?
How will choosing the incorrect storage type impact your budget?
You have selected AWS as your cloud provider, where can you check on how much AWS is charging your account for the Snowflake usage?
How can you check the consumed credits in your account?
Snowflake Performance and Tuning
Explain clustering keys?
You have a few tables in your warehouse which are less than 10 GB in size. What is the best way to improve query performance without increasing the cost?
How will you determine if you need to define a clustering key for a table?
In which scenarios does Snowflake reuse query results instead of retrieving the data from the base table?
What does scaling out in Snowflake mean?
When should you consider adding Search Optimization to a table?
Explain the caches used by Snowflake?
How will you check if there are any slow-running queries, and what is your approach to fix them?
How Does Warehouse Caching Impact Queries?
You have added clustering keys to a table. How can you determine if the clustering keys are being used?
How do you select a clustering key?
You have observed that some of the warehouses are incurring too much query time in Snowflake. How can you find which queries are creating the issue, and what steps do you take to resolve it?
What is a constant micro-partition, and what will be the overlap depth of a constant micro-partition?
Does the table structure impact Snowflake performance?
You have created an X-Small warehouse. Some of the developers have an issue with queries that remain in queued status for a long period of time. You increased the warehouse size to Medium; after a couple of hours, the development team is still facing the same issue. You then increased the warehouse size to X-Large and 2X-Large, and finally, by resizing the warehouse to 4X-Large, you were able to resolve the issue. How could the above issue have been resolved instead of resizing the warehouse, and what if the issue had not been resolved even after resizing to 4X-Large?
Snowflake Functions and Parameters
What is the difference between Scalar function and Aggregate Functions?
Did you use any parameters which can be used as both a session parameter and an object parameter?
Do you configure the DATA_RETENTION_TIME_IN_DAYS parameter at the account level or the object level?
What is the difference between RESULT_SCAN and LAST_QUERY_ID, and how did you implement these in your project?
Is it possible to abort a query when you close the Snowflake browser so that warehouse resources are not consumed?
You have configured the STATEMENT_TIMEOUT_IN_SECONDS value to 10800. What does this mean, and how can you unset this parameter?
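For context, a minimal sketch of setting and unsetting this parameter at the session level (it can also be set at the account, user, and warehouse levels); 10800 seconds is 3 hours:

```sql
-- Cancel any statement running longer than 3 hours in this session.
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 10800;

-- Remove the session-level override and fall back to the default.
ALTER SESSION UNSET STATEMENT_TIMEOUT_IN_SECONDS;
```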
If you configure DATA_RETENTION_TIME_IN_DAYS to zero, how does it affect cloning and Time Travel?
Which symbol is used to represent a User Stage?
Snowflake Storage and Protection
How can you check table-level storage utilization information, including tables that have been dropped but are still incurring storage costs?
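A sketch of one way to query this, using the ACCOUNT_USAGE share; dropped tables still holding Time Travel or Fail-safe storage appear here:

```sql
-- TABLE_DROPPED is non-NULL for dropped tables that still incur storage.
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes,
       table_dropped
FROM snowflake.account_usage.table_storage_metrics
ORDER BY failsafe_bytes DESC;
```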
Where is Data Stored within Snowflake?
You have Time Travel set to keep your table data for 30 days. How long will the fail-safe period protect the table data?
How does Snowflake physically store data?
You are planning to migrate your database from Oracle to Snowflake. How will you determine an initial warehouse size?
Did you Clone any objects in Snowflake?
Which Objects Can Be Cloned?
Can you include time travel option when you Clone the object?
Will cloning an object clone all the dependent child objects?
If you have to recover data in Snowflake, which option do you prefer: Time Travel or Fail-safe?
How can you save on Storage Costs?
Can you clone a transient table to a permanent table, and vice versa?
What is Snowflake’s hierarchical key model?
What are the similarities and differences between Time Travel and Fail-safe in Snowflake?
Explain the different layers/caches from which data can be retrieved?
What object types can be stored within a schema?
Snowflake Account and Security
What is the Role which is automatically granted to the first user created on the Snowflake system?
You are planning to procure Snowflake, and your company has to process extremely sensitive data that must comply with HIPAA regulations. Your management has decided not to spend more money on Snowflake. Which Snowflake edition should you procure?
You are sharing data via Secure Data Sharing. Can you restrict your users to view only selected tables or views?
You have logged in to the Snowflake UI as SYSADMIN. How can you switch the role, and how many roles are available in Snowflake?
You have logged in as SYSADMIN, but you are not able to see the Account and Notifications tabs. How can you fix this issue?
Can you grant ACCOUNTADMIN role to multiple users?
Which role is assigned to every user by default?
Explain Discretionary Access Control?
What privileges does SYSADMIN have by default?
Snowflake Data Sharing
Which database objects can be shared in Snowflake via Secure Data Sharing?
You have recently joined an insurance company. The DBA team copies the Prod data to the Test servers every Monday morning for the development team, and then the sensitive data is masked using the Informatica Test Data Management tool in the Test servers before the test data is released. Is this the best approach?
CompanyA has shared data with CompanyB. In Snowflake terminology, CompanyA is ________ and CompanyB is ________.
You are trying to configure a Provider/Consumer relationship through a share between two accounts as per the below requirements.
What is Secure Data Sharing in Snowflake?
How Does Secure Data Sharing Work?
What is a Secure Share in Snowflake?
What is the difference between sharing data with existing Snowflake customers versus non-Snowflake customers?
Can you create streams on shared tables?
What is an External Stage?
You are trying to clone a table from a Data Share using the below SQL, but you are getting a SQL compilation error. What could be the issue?
As a data provider, you have created a data share. With how many consumers can you share it?
You have created a regular view and are trying to share it with a consumer account, but you are getting an error stating the view has more than 80 columns and cannot be shared. How can you fix this error?
Snowflake Virtual Warehouses
What is a Virtual Warehouse?
How do you categorize your Virtual Warehouses?
Snowflake is automatically adding some alphanumeric characters after the table name. What do these characters mean?
What location options are available when you create a Stage in Snowflake?
You have created a virtual warehouse by executing the statement shown in the below picture, and the warehouse is suspended automatically after 10 minutes. A user has executed a query after the warehouse is suspended. Will the user get an error as the warehouse is in the suspended state?
What is the difference between Standard and Economy scaling policies?
Users are able to create two tables with the same name in Snowflake. How can you enforce rules to make sure this does not happen?
You have recently joined a company which is using Snowflake, and you have observed that the warehouses are configured to auto-suspend after one minute to save compute cost. Is this the best approach? What is your recommendation?
When you execute a query, how does Snowflake determine from which cache the data has to be fetched? Explain the process in detail.
You have created a warehouse with auto_resume = true, assuming that whenever a user executes a query on the warehouse, it will be resumed automatically from the suspended state. The warehouse is currently suspended, and when the user executed a query the warehouse was not resumed. What could be the issue?
Users have reported that their queries are taking a longer time to return results. To resolve the issue, you collected all the queries, created a new virtual warehouse with the below configuration, and created a SnowSQL job to execute all the queries, expecting that the users will get the query results faster as Snowflake will retrieve them from cache files instead of fetching them from the data storage layer. But the users are still facing the same issue. What could have gone wrong?
What is the difference between Warehouse and Database?
What are the default schemas which are created when you create a new database?
What is the maximum number of clusters that can be configured in a warehouse?
You have created a warehouse with 10 maximum clusters and 5 minimum clusters. When you start the warehouse, how many clusters will be started by Snowflake?
What is Auto-Scale mode, and how does Snowflake determine when to scale up or scale down?
You have created a warehouse with 10 maximum clusters and 5 minimum clusters. Let's say currently 8 clusters are in use; based on what factors will Snowflake determine to scale down the clusters (i.e., from 8 to 5)?
In the development environment, how are your virtual warehouses configured?
Snowflake Compute
In the Prod environment, there are some issues with query performance, and you have identified that you need to alter the existing cluster key with new columns. As part of the same release, you have dropped some columns on which the previous clustering key was defined. The changes were successfully deployed, and the query performance with the new cluster keys is as expected. Two weeks after the deployment, there are some issues and you have to revert the changes, but the DBA team has informed you that while the Time Travel feature can get a data snapshot as of a particular time, the dropped columns have to be manually restored. How would you resolve this Prod issue at the earliest?
When Snowflake is executing Automatic Clustering, does it block any DML statements while the data is being reclustered?
Name a few serverless features which use Snowflake compute resources?
You are currently using SQL Server, and your data is modeled into dimensions and facts. You are now migrating to Snowflake. Since storage is cheap and joins between tables are charged under compute, is it the best approach to avoid dimension tables and store all the required attributes in one table?
In what increments are you billed for compute?
Can you set a compute quota on a specific user?
What is the advantage of using an external stage, rather than copying data in directly from the cloud storage?
How can you control compute costs?
In Snowflake, you can clone tables, securely share data using secure views, create normal views, and improve query performance by creating clustering keys or Search Optimization. How is creating materialized views, which consume compute resources, helpful?
Why does Snowflake not use any indexes on tables?
You have executed a query. How can you check if the query results were returned from the cache or fetched from the table?
How many queries does Snowflake queue before it spins up an additional cluster?
Snowflake Data Protection
Why is Fail-safe better than a backup?
Where did you use the Time Travel feature in your project, apart from restoring a previous version of data? Or, explain a real-time use case of the Time Travel feature which you have used.
Does Snowflake encrypt data similar to Hadoop encryption?
Can you apply any network policies at the user level?
You have procured the Snowflake Enterprise edition on the AWS cloud. How can you SSH to the AWS VPC to check the connectivity from the AWS VPC to your on-premise network?
You have created a table with primary key constraints on PolicyKey and PartyKey. After one week, you updated the table constraints to CoverageKey and ParticipantKey and started loading the data. There are some issues with the data load, and you want to use the Snowflake Time Travel feature to restore the previous version of the table. Which constraints will be restored by Snowflake?
Snowflake Data Movement
Explain the different ways of creating clustering keys in Snowflake?
Explain the difference between structured and semi-structured data?
What is Snowflake COPY?
What is the difference between ETL and ELT? And what is your preference while using Snowflake?
How fast would the data update for Data Consumers?
You are a Data Consumer, and Data is shared with you via Secure Share by a Data Provider. What actions can you perform on the Shared Data?
You are migrating data from SQL Server to Snowflake, and for all the dimension tables you have configured an identity column as the primary key; these columns are referenced across multiple fact tables. How can you handle this when you migrate the tables to Snowflake, as Snowflake does not have the identity key concept?
Can you filter data when using the COPY INTO command?
Snowflake Commands & SQLS
What are the required parameters in COPY command?
You need to create a Python-based command line program to connect to a particular database and then perform some data loading into specific tables.
To resize a warehouse, which statement has to be executed?
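A minimal sketch of the resize statement (the warehouse name is hypothetical):

```sql
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'XLARGE';
```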
Can a single storage integration command support multiple external stages?
You are using the COPY INTO command to load data from staged files into one of the existing tables. How can you validate the staged data files and check if there are any errors, without loading the data into the existing table?
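One way to sketch this is with the VALIDATION_MODE option of COPY INTO, which checks the staged files without loading them (table and stage names are hypothetical):

```sql
-- Returns all errors across the staged files; no rows are loaded.
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = RETURN_ALL_ERRORS;
```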
You have recently migrated from Teradata to Snowflake. Some of the business users have 100-200 SQLs which they execute in Teradata from time to time. How can you convert these SQLs to be executed on Snowflake without any errors?
How can you clear the virtual warehouse cache?
Can you disable the Snowflake Database Results Cache?
You have created a table with the default data retention period (DATA_RETENTION_TIME_IN_DAYS). Can you change the default data retention period after the table is created?
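As a reference sketch, the retention period can be changed after creation with ALTER TABLE (table name hypothetical):

```sql
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 30;
```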
Can you create a clone of a table by using the OFFSET option?
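A minimal sketch of a Time Travel clone using OFFSET, which is specified in seconds relative to the current time (table names are hypothetical):

```sql
-- Clone the table as it existed one hour ago.
CREATE TABLE my_table_clone CLONE my_table
  AT (OFFSET => -3600);
```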
Can you disable Time Travel for a Table?
You have deployed some SQL scripts to Prod, and some table drop scripts were included by mistake, which dropped a few of the critical tables. Without using Time Travel or Fail-safe, is there any way you can recover the data?
You are trying to create a new sequence in Snowflake using CREATE OR REPLACE SEQUENCE, and you specified the start value as 2 and the increment as 2. How will the sequence numbers be generated when you use the newly created sequence in your job?
How can you create a stream on the external tables?
Which system function should you use to check the offset of a particular Stream?
What is sfsql?
Which function do you use to add three years to a particular date?
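For reference, a small sketch using DATEADD (the date literal is an arbitrary example):

```sql
SELECT DATEADD(year, 3, '2024-01-15'::DATE);  -- 2027-01-15
```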
Can you create network policies using SQL?
How can you transpose rows to columns in Snowflake?
You have a few INSERT statements which load data from your stage warehouse to the target warehouse, and you need to generate a GUID (Globally Unique Identifier) for each INSERT statement. You have decided to use the query ID as the GUID for your requirement. What is the best approach to get the query ID for each INSERT statement as part of the data load?
What are some of the SQL functions which are available in Snowflake but not in other relational databases?
How can you check the current role when you are using SnowSQL?
Snowflake Streams
What are Streams in Snowflake?
What is an offset in a Table Stream?
What are Stream Columns?
Which stream type is supported for External Tables?
What is the difference between Standard and Append-only stream types?
There are two records in a stream and you have processed one record at a time; as soon as the transaction completes for the first record, the second record is also deleted from the stream. What could be the issue?
You have cloned a database in which some streams are present, will the streams be cloned as part of the database clone?
SnowSQL
Can you use SnowSQL on a Windows machine?
You have installed SnowSQL; how can you verify that it was installed correctly on your machine?
You have downloaded and installed SnowSQL, and you are now trying to connect to Snowflake using the SnowSQL commands below, but the screen shows neither an error message nor a connection-successful message, and the cursor keeps blinking. What could be the issue?
You have logged in to Snowflake using SnowSQL, and no database is selected by default at the prompt. How can you use a SnowSQL command to set the database to STAGE_IDWBI?
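A sketch of setting the database from the SnowSQL prompt:

```sql
USE DATABASE STAGE_IDWBI;
```

The database can also be chosen at connection time with the -d option of the snowsql command.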
You have a few ETL jobs executed as part of the ETL batch configured in the ActiveBatch scheduler. You should configure the ETL jobs so that they connect through SnowSQL using the appropriate role, database, and warehouse. For example, the stage ETL jobs should access Snowflake using the Stage database, and the ODS jobs should use the ODS database.
Can you change the role when you connect to Snowflake via SnowSQL?
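Roles can be switched mid-session with a USE statement; a sketch (SYSADMIN is an example role):

```sql
-- The active role can be changed at any point in the session.
USE ROLE SYSADMIN;
-- A role can also be set at connection time with snowsql -r <role>.
```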
Some queries are consuming a lot of compute resources, and you have identified that these SQL statements are being executed from SnowSQL scripts. How can you check the history of scripts executed externally using the SnowSQL utility?
Can you store the passwords in encrypted format in SnowSQL Configuration file?
Snowflake UI
Name some components which can be downloaded from Snowflake UI?
You are executing queries in Worksheets as ACCOUNTADMIN, how can you switch role to USERADMIN using UI options?
You are checking the query execution history; what columns are available when you open the History tab in the UI?
You are reviewing some queries in the History tab, and for some of them you observe that no cluster was used and no rows were returned. What could be the issue?
Based on the SQL statements executed from ETL batches, SnowSQL, and manual queries run by users on a particular database, you need to identify all SQL statements that contain a particular string ('IDWBI Snowflake Training'). How can you get these details?
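A sketch of one way to search query text, using the ACCOUNT_USAGE query history view:

```sql
-- QUERY_HISTORY records queries from all clients (UI, SnowSQL, ETL tools).
SELECT query_id, user_name, query_text
FROM snowflake.account_usage.query_history
WHERE query_text ILIKE '%IDWBI Snowflake Training%';
```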
You are migrating from a traditional database to Snowflake. One feature your users rely on is compiling a query before it is executed. How can you validate or compile a query in Snowflake before it is executed?
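One option is EXPLAIN, which compiles a query and returns its plan without executing it; a sketch with a hypothetical table:

```sql
-- Compiles the statement (catching syntax and object errors)
-- without consuming warehouse compute to run it.
EXPLAIN SELECT * FROM orders WHERE order_id = 1;
```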
What is the default warehouse size when you create a warehouse using the web interface?
SnowPipe
What is the difference between Bulk Copy data load and Snowpipe?
Can you use Snowpipe to load User Stages?
Questions Posted by Students which were asked in their Snowflake Interview
How can you get the Snowflake cloud IDs of the AWS Virtual Private Cloud (VPC) in which your Snowflake account is located?
Will there be any impact on compute resources when you access data present in S3 using an external stage or the COPY INTO command?
What is an external stage, and how is the metadata for new (incremental) files uploaded to the S3 bucket refreshed into Snowflake?
You are trying to create a new stage in the Snowflake UI. When you select the Schema Name, there are multiple options available, as shown in the picture. Which Schema Name should be selected?
What is Query Result Reuse?
Can you define constraints on transient tables?
What is INFORMATION_SCHEMA in Snowflake?
In the Snowflake UI, when you click the Databases icon and navigate to a particular database, you can see multiple tabs such as Tables, Views, and Schemas. What are the other options present in that section?
What is the default scaling policy selected when you create the warehouse using the web interface?
When you select the Scaling Policy, what factors do you consider?
Can you use the CONNECT BY clause to join a Snowflake table with a table created on an external stage?
Which stage would you create if you need to give multiple users access to the source files and load the data into a single table?
How can you join tables in Snowflake that are in different databases?
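A sketch of a cross-database join using fully qualified names (all object names are hypothetical):

```sql
SELECT a.order_id, b.status
FROM db_stage.public.orders a
JOIN db_ods.public.order_status b
  ON a.order_id = b.order_id;
```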
Troubleshooting
You are trying to query an external table but you are getting an error - stage {1} cannot be found. What could be the reason for this error?
You are trying to query an external table but you are getting an error - External table {0} marked invalid. Stage {1} location altered. What could be the reason for this error?
You have created a multi-cluster warehouse in Snowflake Standard Edition for a POC project and executed a query. The query is taking a long time to return results, and you suspect the warehouse is running on a single cluster. How can you check whether the warehouse has scaled out?
You are trying to clone a table from a data share using the SQL below, but you are getting a SQL compilation error. What could be the issue?