This guide shows how to sink data from RisingWave into an Apache Iceberg table and make it available for querying in Databricks. You can use one of two catalog patterns for this integration:
- Databricks Unity Catalog: Write data from RisingWave directly to a Databricks-managed Iceberg table.
- AWS Glue as a federated catalog: Write data from RisingWave to an Iceberg table that uses AWS Glue as its catalog, and then connect Databricks to Glue.
Using Unity Catalog
This pattern is ideal when you want to manage your Iceberg tables centrally within the Databricks ecosystem. RisingWave acts as a streaming ETL engine, writing data directly into your Unity Catalog.

How it works

RisingWave → Iceberg table on S3 → Databricks Unity Catalog
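Concretely, the left side of this flow is any RisingWave table or materialized view that the sink reads from. A minimal upstream sketch (the `orders` table and `orders_mv` view are illustrative names, not part of this guide):

```sql
-- Illustrative upstream objects; names and columns are placeholders.
CREATE TABLE orders (
    order_id BIGINT,
    amount DOUBLE PRECISION,
    created_at TIMESTAMP
);

-- The streaming materialized view an Iceberg sink could read from.
CREATE MATERIALIZED VIEW orders_mv AS
SELECT order_id, amount, created_at
FROM orders;
```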
Step 1: Configure the catalog connection in RisingWave
To connect to Unity Catalog, you need to provide your workspace credentials and endpoint. For a complete guide to the parameters, see the Databricks Unity Catalog topic.

Step 2: Sink data from RisingWave to Unity Catalog

Create a `SINK` in RisingWave that writes to your Databricks-managed table. Note that currently, only append-only sinks are supported for this integration. The `vended_credentials` option was added in v2.7.0; for more information, see Vended credentials.
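A hedged sketch of such a sink is below. The general shape (`connector = 'iceberg'`, a REST catalog, `database.name`/`table.name`) follows RisingWave's Iceberg sink conventions, but the exact parameter names, the Unity Catalog REST endpoint path, and the credential settings are assumptions here; confirm them against the Databricks Unity Catalog topic linked above before use.

```sql
-- Sketch only: endpoint path, credential parameters, and placeholder
-- values (<...>) are assumptions to be verified against the docs.
CREATE SINK orders_iceberg_sink FROM orders_mv
WITH (
    connector = 'iceberg',
    type = 'append-only',   -- only append-only sinks are supported here
    catalog.type = 'rest',
    catalog.uri = 'https://<workspace-host>/api/2.1/unity-catalog/iceberg-rest',
    catalog.token = '<databricks-access-token>',
    warehouse.path = '<unity-catalog-name>',
    database.name = '<schema-name>',
    table.name = '<table-name>'
);
```

With `vended_credentials` enabled (v2.7.0+), the catalog can vend short-lived storage credentials to RisingWave, so you would not hard-code S3 keys in the `WITH` clause.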