Documentation Index

Fetch the complete documentation index at: https://docs.risingwave.com/llms.txt

Use this file to discover all available pages before exploring further.

RisingWave is an event stream processing platform. Results are continuously maintained by the pipeline as data arrives; applications query pre-computed, always-current state at low latency via standard SQL.

Live dashboards

Materialized views update incrementally as events arrive. Dashboards and APIs read pre-computed state: no scheduled refreshes, no stale snapshots.

-- MV stays current as stock trades arrive
CREATE MATERIALIZED VIEW stock_summary AS
SELECT
  symbol,
  AVG(price)  AS avg_price,
  MAX(price)  AS high,
  MIN(price)  AS low,
  SUM(volume) AS total_volume
FROM stock_trades
GROUP BY symbol;

-- Dashboard queries hit pre-computed state, not the raw stream
SELECT * FROM stock_summary WHERE symbol = 'AAPL';

For the full pipeline (Kafka source → MV → low-latency serve), see the Kafka to MV recipe.
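
The MV above assumes a stock_trades source already exists. A minimal sketch of one, assuming a Kafka topic named stock-trades carrying JSON trades (the broker address, topic name, and column types are illustrative):

-- Hypothetical Kafka source feeding stock_summary; adjust columns,
-- broker, and topic to your deployment
CREATE SOURCE stock_trades (
  symbol VARCHAR,
  price DOUBLE PRECISION,
  volume BIGINT,
  trade_time TIMESTAMPTZ
) WITH (
  connector = 'kafka',
  topic = 'stock-trades',
  properties.bootstrap.server = 'localhost:9092'
) FORMAT PLAIN ENCODE JSON;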

Monitoring and alerting

Continuously evaluate incoming streams against thresholds. Detect anomalies and trigger downstream systems with sub-second latency from event to alert.

-- Detect cards used more than 5 times in 5 minutes with total spend > $5000
CREATE MATERIALIZED VIEW suspicious_transactions AS
SELECT
  card_number,
  window_end,          -- which 5-minute window the alert fired in
  COUNT(*)             AS transaction_count,
  SUM(purchase_amount) AS total_spent
FROM TUMBLE(transactions, purchase_time, INTERVAL '5 MINUTES')
GROUP BY card_number, window_end
HAVING COUNT(*) > 5 AND SUM(purchase_amount) > 5000;

-- Deliver alerts to Kafka
CREATE SINK alerts FROM suspicious_transactions
WITH (
  connector = 'kafka',
  properties.bootstrap.server = 'localhost:9092',
  topic = 'fraud-alerts'
) FORMAT PLAIN ENCODE JSON (force_append_only = 'true');

For the full pipeline, see the Kafka to MV recipe.
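
The windowed MV above assumes a transactions source with an event-time column. A sketch of one, assuming a Kafka topic named transactions (column types and the 10-second watermark are illustrative); the watermark bounds out-of-order events so tumbling windows can close:

-- Hypothetical source feeding suspicious_transactions; events arriving
-- more than 10 seconds late are treated as out of order
CREATE SOURCE transactions (
  card_number VARCHAR,
  purchase_amount DOUBLE PRECISION,
  purchase_time TIMESTAMPTZ,
  WATERMARK FOR purchase_time AS purchase_time - INTERVAL '10' SECOND
) WITH (
  connector = 'kafka',
  topic = 'transactions',
  properties.bootstrap.server = 'localhost:9092'
) FORMAT PLAIN ENCODE JSON;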

Feature stores

Pre-aggregate features as materialized views over live event streams. At inference time, query the MV directly: features are always current, with no recomputation triggered at query time.

-- Feature vectors maintained continuously as bidding events arrive
CREATE MATERIALIZED VIEW ad_features AS
SELECT
  ad_id,
  AVG(bid_amount)                              AS avg_bid,
  MAX(bid_amount)                              AS max_bid,
  COUNT(*)                                     AS bid_count,
  SUM(CASE WHEN bid_won THEN 1 ELSE 0 END)     AS win_count,
  AVG(response_time_ms)                        AS avg_response_ms
FROM bidding_events
WHERE event_time >= NOW() - INTERVAL '1 day'
GROUP BY ad_id;

-- At inference time: point query, no aggregation
SELECT * FROM ad_features WHERE ad_id = $1;

For the full pattern, see the Feature store recipe.
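
If the MV grows large, an index on the serving key keeps point lookups fast. This is ordinary RisingWave DDL; the index name here is illustrative:

-- Index the lookup key so inference-time point queries stay fast
CREATE INDEX idx_ad_features_ad_id ON ad_features (ad_id);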

Real-time enrichment

Join live event streams with reference data from a CDC-connected database, in flight, before delivery downstream. The JOIN is maintained incrementally; when reference data changes, the enriched output updates automatically.

-- Clickstream events from Kafka, enriched with customer data from PostgreSQL CDC
CREATE MATERIALIZED VIEW enriched_clicks AS
SELECT
  c.user_id,
  c.page_url,
  u.segment,
  u.region
FROM clickstream c
JOIN users u ON c.user_id = u.id;

-- Deliver enriched events downstream
CREATE SINK enriched_clicks_sink FROM enriched_clicks
WITH (
  connector = 'kafka',
  properties.bootstrap.server = 'localhost:9092',
  topic = 'enriched-clicks'
) FORMAT PLAIN ENCODE JSON (force_append_only = 'true');

For the full pipeline (Kafka + CDC → JOIN → Kafka/Iceberg), see the Stream enrichment recipe.
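
The users side of the join is mirrored from PostgreSQL via CDC. A minimal sketch of that wiring, with placeholder connection details (hostname, credentials, and database name are illustrative):

-- Hypothetical CDC connection; one source can back many tables
CREATE SOURCE pg_cdc WITH (
  connector = 'postgres-cdc',
  hostname = 'localhost',
  port = '5432',
  username = 'postgres',
  password = 'secret',
  database.name = 'appdb'
);

-- Mirror the upstream users table into RisingWave
CREATE TABLE users (
  id INT PRIMARY KEY,
  segment VARCHAR,
  region VARCHAR
) FROM pg_cdc TABLE 'public.users';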

Streaming lakehouses

Continuously ingest into Apache Iceberg tables with exactly-once delivery, automated compaction, and snapshot management, replacing a separate Debezium + Kafka + Flink + Iceberg writer stack.

-- CDC changes from PostgreSQL land in Iceberg continuously
CREATE SINK orders_lakehouse FROM orders
WITH (
  connector = 'iceberg',
  type = 'upsert',
  primary_key = 'id',
  catalog.type = 'storage',
  warehouse.path = 's3://my-lakehouse/warehouse',
  database.name = 'analytics',
  table.name = 'orders',
  create_table_if_not_exists = 'true'
);

For the full pipeline (CDC + Kafka → enriched MVs → Iceberg), see the Lakehouse ingestion recipe.
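
Once rows land, the output is a standard Iceberg table readable by any engine, including RisingWave itself. A sketch of reading it back for ad-hoc batch queries, assuming the same catalog settings as the sink above (the source name is illustrative):

-- Read the sink's Iceberg table back as a batch source
CREATE SOURCE orders_iceberg WITH (
  connector = 'iceberg',
  catalog.type = 'storage',
  warehouse.path = 's3://my-lakehouse/warehouse',
  database.name = 'analytics',
  table.name = 'orders'
);

SELECT COUNT(*) FROM orders_iceberg;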