Step 1: Start RisingWave
RisingWave supports multiple deployment modes. For a comparison of all options, see Deployment modes overview. For convenience, this quick start guide uses the standalone deployment mode.

Script installation
Open a terminal and run the following curl command.
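The one-liner looks like this; the URL follows RisingWave’s published installer, but double-check it against the official docs before piping it to your shell:

```bash
# Download RisingWave's standalone installer and run it
curl https://risingwave.com/sh | sh

# The installer places a risingwave binary in the current directory;
# start it in standalone mode
./risingwave
```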
Docker
Ensure Docker Desktop is installed and running in your environment.
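Then start RisingWave as a single-node container. The image name and ports below follow RisingWave’s published defaults (4566 for SQL, 5691 for the dashboard); confirm the exact command in the official docs:

```bash
# Run RisingWave as a single-node container, exposing
# the SQL port (4566) and the dashboard port (5691)
docker run -it --pull=always \
  -p 4566:4566 -p 5691:5691 \
  risingwavelabs/risingwave:latest single_node
```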
Homebrew

Ensure Homebrew is installed, and run the following commands:
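The tap name below matches the project’s GitHub organization; confirm it against the official instructions:

```bash
# Add the RisingWave tap, install the package,
# and start it in standalone mode
brew tap risingwavelabs/risingwave
brew install risingwave
risingwave
```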
Step 2: Connect to RisingWave

Ensure you have psql installed in your environment. To learn how to install it, see Install psql without PostgreSQL.
Open a new terminal window and run:
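RisingWave speaks the PostgreSQL wire protocol, so a standard psql client works. By default, standalone mode listens on port 4566 with a dev database and a root user:

```bash
# Connect to RisingWave using the default port, database, and user
psql -h localhost -p 4566 -d dev -U root
```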
Step 3: Insert some data
RisingWave supports both direct data insertion and streaming data ingestion from sources like message queues and database change streams. To keep things simple, we’ll demonstrate direct data insertion. Let’s create a table named credit_card_transactions to store information about credit card usage, and insert some data.
Create the table
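A minimal schema is enough here. The columns below are an illustrative choice for this walkthrough: a transaction ID, the card identifier, the amount, and the event time we’ll window on later.

```sql
CREATE TABLE credit_card_transactions (
    transaction_id INT,
    card_id VARCHAR,
    amount DECIMAL,
    transaction_time TIMESTAMP
);
```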
Insert five rows of data
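The rows below are made-up sample data. Note that card_123’s two transactions in the 12:00 minute sum to $4500, deliberately just under the threshold we’ll alert on in the next step. The trailing FLUSH asks RisingWave to persist pending writes so they’re visible to the next query.

```sql
INSERT INTO credit_card_transactions VALUES
    (1, 'card_123', 3000.00, '2024-05-01 12:00:10'),
    (2, 'card_123', 1500.00, '2024-05-01 12:00:25'),
    (3, 'card_456',  800.00, '2024-05-01 12:00:30'),
    (4, 'card_456',  500.00, '2024-05-01 12:00:45'),
    (5, 'card_789',  200.00, '2024-05-01 12:00:50');

-- Make the inserted rows visible to subsequent queries
FLUSH;
```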
Step 4: Analyze and query data
Next, we will detect potentially fraudulent behavior by identifying any card that spends more than $5000 within a one-minute window. To do this, we’ll use a materialized view. A materialized view in RisingWave is not a static snapshot or a one-time query. Instead, it’s a continuously maintained result that automatically stays up to date as new data arrives. You can think of it as a live dashboard behind a SQL query.

Create a materialized view that detects high-spending cards in 1-minute windows
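Here’s one way to write it, using RisingWave’s TUMBLE function to assign each row to a fixed one-minute window keyed on transaction_time; the view name and the $5000 threshold follow the description above:

```sql
CREATE MATERIALIZED VIEW fraud_alerts AS
SELECT
    card_id,
    window_start,
    SUM(amount) AS total_spent
FROM TUMBLE(credit_card_transactions, transaction_time, INTERVAL '1 MINUTE')
GROUP BY card_id, window_start
HAVING SUM(amount) > 5000;
```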
Once the materialized view is created, any new rows inserted into credit_card_transactions will automatically update fraud_alerts.
Query the current result
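Query the view as you would any table:

```sql
SELECT * FROM fraud_alerts;
```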
With the sample data so far, this should return no rows: no single card has spent more than $5000 within a one-minute window yet. Next, we’ll insert one more transaction to push card_123 over the $5000 threshold.
Insert additional data to trigger a fraud alert
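One more transaction for card_123 in the same minute lifts its windowed total to $5500:

```sql
INSERT INTO credit_card_transactions VALUES
    (6, 'card_123', 1000.00, '2024-05-01 12:00:55');
FLUSH;
```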
Query the updated result
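Running the same query again should now return a row for card_123, with a total_spent of 5500 for the 12:00 window; no manual refresh was needed:

```sql
SELECT * FROM fraud_alerts;
```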
What’s next?
Congratulations! You’ve successfully started RisingWave and conducted some initial data analysis. To explore further, you can:

- Follow another tutorial.
- Explore ready-to-run applications in the awesome stream processing GitHub repository.
- Dive deeper into our documentation to learn about ingesting data, performing data transformations, and delivering data to downstream systems.