
In this tutorial, you’ll learn how to connect your Streamlit app to Snowflake, a cloud data warehouse built for real-time analytics.
You’ll securely configure credentials, run SQL queries from Streamlit, visualize results, and even blend Snowflake tables with local data — creating a live, interactive data explorer powered entirely by Python.
This lesson is part of a series on Streamlit Apps:
To learn how to query, visualize, and export live warehouse data from Streamlit, just keep reading.
Welcome to the final lesson in the Streamlit tutorial series — the point where your local prototype evolves into a cloud-ready data application.
In the first two lessons, you learned how to build and extend a data explorer using static CSVs. That was perfect for learning Streamlit’s reactivity model, caching, and UI widgets. But in real analytics workflows, teams don’t analyze CSVs sitting on someone’s laptop — they connect to governed, live data sources that can scale securely across users and projects.
This is where Snowflake comes in. Snowflake is a cloud-based data warehouse built to handle massive datasets, enable secure sharing, and deliver blazing-fast SQL queries. By integrating it directly into your Streamlit app, you’ll transform your simple file-based dashboard into a dynamic data experience — one that interacts with actual business data in real time.
In this lesson, you’ll learn how to:
By the end of Part 2, you’ll have a fully functional Streamlit + Snowflake dashboard that connects securely, runs live queries, visualizes patterns, and exports results — all without needing a separate backend or API.
This is the same pattern you’ll use when building internal analytics portals, model-monitoring dashboards, or MLOps (machine learning operations) pipelines powered by governed warehouse data.
Before diving into code, let’s take a quick look at what makes Snowflake the go-to data warehouse for modern analytics and machine learning pipelines.
At its core, Snowflake is a fully managed cloud data platform — meaning you don’t have to worry about provisioning servers, maintaining storage, or scaling infrastructure. It separates compute (the “engine” that runs your queries) from storage (where your data lives), allowing you to scale each independently. When you need to run queries, you spin up a virtual warehouse; when you’re done, you can pause it to save costs.
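For example, pausing and resuming compute is a single SQL statement in Snowsight (here assuming `COMPUTE_WH`, the default warehouse name on trial accounts; substitute your own):

```sql
-- Resume compute only while you need it...
ALTER WAREHOUSE COMPUTE_WH RESUME;

-- ...and suspend it afterward to stop accruing credits.
ALTER WAREHOUSE COMPUTE_WH SUSPEND;
```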
This elasticity makes Snowflake ideal for applications that need to handle large, unpredictable workloads, such as Streamlit dashboards or MLOps tools that query model outputs on demand.
Some key advantages:
By connecting Streamlit to Snowflake, you bridge the gap between interactive data apps and enterprise-grade data infrastructure. Your app no longer relies on static uploads — it runs live SQL queries against a scalable warehouse, retrieves governed results, and displays them instantly to users.
This comparison highlights why local CSV workflows break down at scale. Snowflake replaces manual file handling with a governed, elastic, and real-time data platform that supports secure collaboration, fine-grained access control, and cost-efficient analytics.
Before writing any code, let’s make sure your environment is ready to connect Streamlit with Snowflake. You’ll only need a few tools and a free Snowflake account to get started.
If you don’t already have one, visit Snowflake’s free trial page and create a 30-day account.
Once signed in, open Snowsight (Snowflake’s modern web interface), where you can run SQL queries and manage compute resources.
Snowflake trial accounts include a sample database, `SNOWFLAKE_SAMPLE_DATA`.
Its `TPCH_SF1` schema is Snowflake's built-in sample schema based on the TPC-H benchmark dataset (scale factor 1), commonly used for demos and testing. You can safely query these tables without affecting your account or incurring significant costs.
Inside, you'll find tables such as `CUSTOMER`, `ORDERS`, and `LINEITEM`.
We’ll use these to run test queries later in the Streamlit app.
Run this quick check in Snowsight:
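A minimal sanity check (the exact query may differ from the original; this version exercises both the warehouse and the session defaults) could be:

```sql
-- Confirms the active warehouse, database, and schema, and returns a timestamp.
SELECT CURRENT_WAREHOUSE(), CURRENT_DATABASE(), CURRENT_SCHEMA(), CURRENT_TIMESTAMP();
```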
If you see a UTC (Coordinated Universal Time) timestamp result, your warehouse and schema are configured correctly.
You can store credentials as environment variables or, more conveniently, in Streamlit’s built-in secrets file.
Create a file named `.streamlit/secrets.toml` at your project root:
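A minimal secrets file might look like this (placeholder values; the key names follow a common convention, but your configuration module may expect different ones):

```toml
SNOWFLAKE_ACCOUNT = "xy12345.us-east-1"
SNOWFLAKE_USER = "your_user"
SNOWFLAKE_PASSWORD = "your_password"
SNOWFLAKE_WAREHOUSE = "COMPUTE_WH"
SNOWFLAKE_DATABASE = "SNOWFLAKE_SAMPLE_DATA"
SNOWFLAKE_SCHEMA = "TPCH_SF1"
SNOWFLAKE_ROLE = "ACCOUNTADMIN"  # optional
```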
Note: Never commit this file to GitHub. It should remain private on your local system.
Once your Snowflake trial account, dependencies, and credentials are ready, you’re set to move on.
By the end of this lesson, your Streamlit project will have evolved from a simple prototype into a live data app capable of querying cloud warehouses. The structure remains familiar — modular, organized, and easy to extend — but now includes utilities for secure credential handling and database connectivity.
Here’s the complete directory layout, with new components highlighted:
Before we connect Streamlit to Snowflake, let’s pause and look at the two backbone modules that quietly do most of the heavy lifting: the configuration module and the Snowflake connection module.
These modules handle credential loading, database connection creation, and returning query results as Pandas DataFrames for visualization. Getting comfortable with them now will make the rest of the lesson effortless.
Let’s start with the configuration module. This file defines the global configuration that every lesson in this series relies on. It’s small, simple, and incredibly useful.
The top of the file begins with a few imports that set the foundation:
Here’s what’s happening:
We import Python’s built-in `dataclasses` module to define a configuration container with minimal boilerplate. The `os` module is used to read environment variables, while `Optional` from `typing` lets us define fields that might or might not be set — handy for credentials (e.g., roles or schemas) that can sometimes be omitted.
Then we make two optional imports.
First, Streamlit: the import is wrapped in a `try`/`except` block to prevent import-time failures if Streamlit isn’t installed (e.g., when running utility tests or lesson scripts outside the Streamlit runtime). If the import fails, we simply assign a `None` placeholder and continue.
Next comes an optional import of `python-dotenv`. If the package is installed, it loads environment variables from a local `.env` file. This is mostly for convenience — developers often keep credentials there while experimenting locally. The script will still work fine without it.
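Putting those pieces together, the top of the configuration module would look roughly like this (a sketch; your file may order things differently):

```python
from dataclasses import dataclass, field
import os
from typing import Optional

# Streamlit is optional: outside the Streamlit runtime the import can fail,
# so we fall back to a None placeholder and rely on environment variables.
try:
    import streamlit as st
except ImportError:
    st = None

# python-dotenv is also optional: if present, pull variables from a .env file.
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass  # fine without it; os.environ still works
```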
Now we reach the heart of the file: the dataclass.
This class centralizes all configuration in one place. The `frozen=True` flag means that once the settings object is created, none of its values can be changed — an important safeguard when Streamlit reruns scripts automatically.
At the top are generic app-level variables (e.g., the default Iris dataset path). The rest are Snowflake credentials that start as `None` and are dynamically populated.
Inside the class, there’s a special method called `__post_init__`, which runs right after initialization. This is where the real magic happens.
When Streamlit is available, it first looks for credentials stored in `st.secrets`, which is how Streamlit Cloud securely manages secrets. If those aren’t found, it falls back to reading from environment variables (e.g., `SNOWFLAKE_ACCOUNT`, `SNOWFLAKE_USER`, and so on).
This layered approach is designed to support both cloud and local development environments.
Next comes a small helper function for fetching individual settings.
Its purpose is simple: try to fetch a value from Streamlit secrets first, and if that fails, fall back to the corresponding environment variable. This pattern makes the configuration portable — it works the same way whether you’re deploying in the cloud or testing locally on your machine.
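A sketch of that helper (the name `get_setting` is hypothetical; the original isn’t shown here):

```python
import os
from typing import Optional

try:
    import streamlit as st
except ImportError:
    st = None  # not running inside Streamlit

def get_setting(name: str, default: Optional[str] = None) -> Optional[str]:
    """Look a value up in st.secrets first, then fall back to the environment."""
    if st is not None:
        try:
            # st.secrets behaves like a mapping backed by secrets.toml.
            if name in st.secrets:
                return st.secrets[name]
        except Exception:
            pass  # no secrets file available; fall through to os.environ
    return os.environ.get(name, default)
```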
With the helper in place, we can now populate all the Snowflake fields in the frozen dataclass. Because the instance is immutable, normal assignment isn’t allowed — instead, the code uses `object.__setattr__` to set each attribute safely within the `__post_init__` scope.
This section is the real heart of the configuration system. It ensures that even though the object is frozen, all your credentials are loaded once — right when it’s created — and remain constant throughout the app’s lifecycle. Streamlit’s rerun model won’t reset or mutate these values, keeping your environment predictable and reducing accidental mutation.
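The frozen-dataclass trick looks like this in miniature (one field shown; the real class has several):

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Settings:  # simplified sketch of the real, larger dataclass
    snowflake_account: Optional[str] = None

    def __post_init__(self):
        # Frozen dataclasses forbid normal assignment (it raises
        # FrozenInstanceError), so we go through object.__setattr__.
        if self.snowflake_account is None:
            object.__setattr__(
                self,
                "snowflake_account",
                os.environ.get("SNOWFLAKE_ACCOUNT"),
            )
```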
At the end of the file, there’s one more useful convenience property.
It is intended to be used by the app to decide whether to attempt a live Snowflake connection.
Finally, at the bottom, the file creates a single global instance of this dataclass. This way, every script can import the same configuration object with a single line and immediately access values such as the Snowflake account, warehouse, or database name.
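The module-level singleton pattern, in a self-contained sketch (the file name `config.py` and the names `Settings`/`settings` are assumptions; the excerpt doesn’t show them):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:  # stand-in for the real, larger dataclass
    app_title: str = "Data Explorer"  # hypothetical default

# Module-level singleton: every importer shares this one frozen object.
settings = Settings()
```

Other scripts would then run `from config import settings` and read fields such as `settings.app_title` directly.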
Now that credentials are loaded, we need a way to connect to Snowflake and run queries. That’s where the connection module comes in.
Like the configuration module, it starts with a few critical imports.
The imports here mirror the same philosophy: minimal, intentional, and safe.
We use a dataclass again to structure our credentials cleanly. `Optional` lets us mark the role parameter as non-essential. Pandas is used because the Snowflake connector can return query results directly as a Pandas DataFrame, which is perfect for Streamlit.
Finally, the `try`/`except` around the `snowflake.connector` import prevents import errors in environments where Snowflake dependencies aren’t installed yet. Instead of breaking the whole app, it simply sets a `None` placeholder. Later, the app can detect this and inform the user gracefully.
Next comes the credentials dataclass.
This class is a neat container for everything needed to connect to Snowflake. Without it, we’d have to pass seven parameters into every connection call. The dataclass groups them neatly and provides type hints for clarity.
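A sketch of such a container (the class name is hypothetical; field names mirror the connector’s standard connection parameters):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnowflakeCredentials:  # hypothetical name
    account: str
    user: str
    password: str
    warehouse: str
    database: str
    schema: str
    role: Optional[str] = None  # non-essential; omit to use the user's default role
```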
The query function begins with a quick check to ensure the Snowflake connector is available. If not, it raises a clear error instead of failing later during connection.
The connection itself is straightforward: it uses the credentials stored in the dataclass to create a connection via `snowflake.connector.connect()`. A cursor on that connection executes the SQL query. Once the data is retrieved, it’s fetched directly into a Pandas DataFrame using `fetch_pandas_all()`, which is one of the most convenient features of Snowflake’s Python connector.
Finally, both the cursor and connection are closed in a `finally` block — this ensures resources are cleaned up properly even if something goes wrong mid-query.
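The steps above can be sketched as follows (the function name and the credentials object are assumptions; the connector calls themselves are real API):

```python
try:
    # pip install "snowflake-connector-python[pandas]"
    import snowflake.connector
except ImportError:
    snowflake = None  # connector unavailable; surface a clear error later

def run_query(creds, sql):
    """Execute sql against Snowflake and return a pandas DataFrame."""
    if snowflake is None:
        raise RuntimeError("snowflake-connector-python is not installed")
    conn = snowflake.connector.connect(
        account=creds.account,
        user=creds.user,
        password=creds.password,
        warehouse=creds.warehouse,
        database=creds.database,
        schema=creds.schema,
        role=creds.role,
    )
    cur = conn.cursor()
    try:
        cur.execute(sql)
        # Pulls the result set straight into a pandas DataFrame.
        return cur.fetch_pandas_all()
    finally:
        # Clean up even if the query fails mid-flight.
        cur.close()
        conn.close()
```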
Together, the configuration and connection modules form a small but powerful foundation.
The first ensures credentials are loaded securely and predictably. The second provides a clean, reusable way to query Snowflake without scattering connection logic throughout your codebase.
Next, we’ll put these modules to work by building our Snowflake-connected Streamlit app, starting with how to check the connection and run your first live query.

