Deploy your Snowflake infrastructure using Permifrost

Automated deployment is a key element of any cloud strategy. But have you started treating your database models as infrastructure yet?

Snowflake is our go-to data platform at Tropos.io. In this blog post we’ll zoom in on account configuration in Snowflake using “Permifrost”, a tool we recently discovered and are excited about.

Setting up a new Snowflake configuration can be tedious and time-consuming. Moreover, we often find ourselves applying the same configuration for different clients. Scripting these “blueprints” makes our life much easier and gives us more time to focus on the client’s actual problems.

This is where “Permifrost”, a small tool developed and open-sourced by GitLab, comes in handy. Let’s dive right in!

A brief intro to Permifrost

Permifrost takes a YAML specification file with the following structure:

databases:
 - db_name:
     shared: boolean
 ...
roles:
 - role_name:
     warehouses:
        - ...
     member_of:
        - ...
     privileges:
        databases:
           read:
             - ...
           write:
             - ...
        schemas:
        tables:
     owns:
        databases:
        schemas:
        tables:
  
users:
 - user_name:
     can_login: boolean
     member_of:
        - ...
warehouses:
 - warehouse_name:
     size:  x-small

The main sections in this file are “databases”, “roles”, “users” and “warehouses”. Permifrost checks a Snowflake configuration against this specification file and then executes the SQL commands needed to rectify any detected deviations. Alternatively, it can just output these SQL commands without executing them against the Snowflake account (using the “--dry” flag).
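To give a feel for how this fits into an automated workflow, here is a minimal sketch (our own, not taken from the Permifrost docs) that wraps the CLI in a Python helper, for instance as a step in a CI/CD pipeline. The “permifrost run <spec> --dry” invocation reflects our understanding of the command-line interface, so double-check the Permifrost documentation for the exact syntax.

# Minimal sketch: wrap the Permifrost CLI in a Python helper.
# Assumes the "permifrost" executable is installed and available on the PATH.
import subprocess

def apply_spec(spec_file: str, dry_run: bool = True) -> None:
    """Check a Snowflake account against a Permifrost specification file."""
    cmd = ["permifrost", "run", spec_file]
    if dry_run:
        # Only print the SQL commands instead of executing them.
        cmd.append("--dry")
    subprocess.run(cmd, check=True)

apply_spec("permifrost_spec.yml", dry_run=True)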

Most of the sections in this specification are self-explanatory, so we’ll take a closer look at the “roles” section, which is also the most interesting part for most users.

Role permissions

The YAML specification file describes permissions for databases, schemas and tables using simple “read” and “write” permissions, abstracting away the particular “grant” mechanics for the different database objects. For example, the database, schema and table permissions for a specific role are specified as:

roles:
  - role_name:
      warehouses:
        - ...
      member_of:
        - ...
      privileges:
        databases:
          read:
            - db_1
            - db_2
            - ...
          write:
            - ...
        schemas:
          read:
            - db_1.*
            - db_2.schema_name
            - db_3.schema_partial_*
            - ...
          write:
            - ...
        tables:
          read:
            - db_1.*.*
            - db_2.schema_name.*
            - db_2.schema_partial_*.*
            - db_3.schema_name.table_name
            - ...
          write:
            - ...
      owns:
        databases:
          - ...
        schemas:
          - ...
        tables:
          - ...
...

Here, the asterisk “*” is a wildcard character that can be used to grant access to:

  • all current and future schemas in a database (e.g., “db_1.*”), all current and future tables in a database (e.g., “db_1.*.*”), or all current and future tables in a specific schema (e.g., “db_2.schema_name.*”);
  • all current and future tables in schemas matching a pattern (e.g., “db_2.schema_partial_*.*”).

The advantage of this YAML specification over a plain SQL script is that it is more readable and easier to maintain.

Now that we understand the basics of the YAML specification file, let’s look at an interesting use case.

User management made easy: Permifrost + Jinja

User management is one aspect of a Snowflake configuration that can be tedious, repetitive and time-consuming: we regularly have to add users and assign their roles, edit users’ roles, and remove users over time.

The flexible YAML structure of Permifrost, combined with a template engine such as Jinja in Python, allows us to create blueprints that simplify our life even further. For example, we can create the following user blueprint in a Permifrost specification file:

...
# Users
users:
{% for user, members in users.items() %}
  - {{ user }}:
      can_login: yes
      member_of:
      {% for member in members %}
        - {{ member }}
      {% endfor %}
{% endfor %}
...

Next, one can, for example, maintain the users and their roles in a separate Excel file (provided by, say, the client) and stitch the above blueprint and Excel file together using a small Python script. The final result is a tailored Permifrost YAML specification file!
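As a rough sketch of that stitching step (the file names and Excel layout below are purely illustrative), such a script could combine pandas and Jinja2 along these lines:

# Illustrative sketch: render the user blueprint from an Excel file.
# Assumes "users.xlsx" has the columns "user" and "roles" (comma-separated),
# and that the blueprint above is stored in "permifrost_template.yml.j2".
import pandas as pd
from jinja2 import Template

# Build a {user: [role, ...]} mapping from the client's Excel file.
df = pd.read_excel("users.xlsx")
users = {
    row["user"]: [role.strip() for role in row["roles"].split(",")]
    for _, row in df.iterrows()
}

# Render the Jinja blueprint into a concrete Permifrost specification.
with open("permifrost_template.yml.j2") as template_file:
    template = Template(template_file.read())

with open("permifrost_spec.yml", "w") as spec_file:
    spec_file.write(template.render(users=users))

Rerunning this script whenever the Excel file changes regenerates the user section of the specification, so adding or removing a user becomes a one-line edit for the client.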

Conclusion

Permifrost is a nice tool for managing Snowflake configurations: it abstracts away the difficult bits and lets us focus on the essentials. The result is much more readable configurations in the form of YAML specification files.

We also saw how one can easily create blueprint specification files by leveraging the power and flexibility of a template engine such as Jinja in Python, making tasks like user management easier.

We are continuing to explore the possibilities Permifrost offers and are excited to see how it develops in the future!


Pieter-Jan Kerstens

Data engineer
