• Online, Self-Paced
Course Description

Data is generally processed using either a batch or a stream methodology, depending on how much latency between data generation and processing is acceptable. Snowflake's Snowpipe feature processes data in micro-batches, which falls between these two approaches.

In this course, you will cover the implementation of Snowpipe when data is sourced from an internal Snowflake stage. You will kick things off by looking at data ingestion options in Snowflake from a theoretical standpoint, including the differences between bulk data loading and Snowpipe. Then, you will get hands-on and set up the infrastructure for data ingestion: an internal stage for CSV data, a destination table for the data load, and a pipe to carry out the load in micro-batches. Next, you will ingest the data into the destination table and explore how this process can be monitored by tracking the pipe's status. Finally, you will implement a Snowflake task to trigger a Snowpipe at regular time intervals. A sketch of this workflow appears below.
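
The workflow described above maps onto a handful of Snowflake SQL statements. The Python sketch below, written against the snowflake-connector-python package, shows one plausible shape for these steps; every object name (my_csv_stage, orders, orders_pipe, load_orders_task), the table columns, the file path, and the connection parameters are hypothetical placeholders rather than details taken from the course.

    import snowflake.connector

    # Hypothetical connection parameters; substitute your own account details.
    conn = snowflake.connector.connect(
        account="your_account_identifier",
        user="your_user",
        password="your_password",
        warehouse="compute_wh",
        database="demo_db",
        schema="public",
    )
    cur = conn.cursor()

    # 1. An internal stage to hold the CSV files awaiting ingestion.
    cur.execute("""
        CREATE OR REPLACE STAGE my_csv_stage
          FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)

    # 2. A destination table for the data load (columns are illustrative).
    cur.execute("""
        CREATE OR REPLACE TABLE orders (
          order_id INTEGER,
          customer STRING,
          amount   NUMBER(10, 2)
        )
    """)

    # 3. A pipe wrapping a COPY INTO statement; Snowpipe executes it
    #    in micro-batches as staged files are queued.
    cur.execute("""
        CREATE OR REPLACE PIPE orders_pipe AS
          COPY INTO orders
          FROM @my_csv_stage
          FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)

    # Upload a local CSV file to the internal stage, then refresh the
    # pipe so the newly staged file is queued for loading.
    cur.execute("PUT file:///tmp/orders.csv @my_csv_stage")
    cur.execute("ALTER PIPE orders_pipe REFRESH")

    # 4. Monitor the load by tracking the pipe status.
    cur.execute("SELECT SYSTEM$PIPE_STATUS('orders_pipe')")
    print(cur.fetchone()[0])  # JSON with executionState, pendingFileCount, etc.

    # 5. A task that refreshes the pipe at a regular time interval.
    cur.execute("""
        CREATE OR REPLACE TASK load_orders_task
          WAREHOUSE = compute_wh
          SCHEDULE = '5 MINUTE'
        AS
          ALTER PIPE orders_pipe REFRESH
    """)
    cur.execute("ALTER TASK load_orders_task RESUME")  # tasks are created suspended

    conn.close()

Note that pipes over internal stages are not triggered automatically when files arrive; an ALTER PIPE ... REFRESH statement (or the Snowpipe REST API) queues staged files for ingestion, which is why the scheduled task above wraps a REFRESH.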

Learning Objectives

{"discover the key concepts covered in this course"}

Framework Connections

The materials within this course focus on the Knowledge, Skills, and Abilities (KSAs) identified within the Specialty Areas of the National Cybersecurity Workforce Framework listed below.