
Commit 19db154 (1 parent: 2fe978d)

docs(stream): clarify example headings

File tree

1 file changed: +2 −2 lines

  • docs/en/guides/40-load-data/05-continuous-data-pipelines

docs/en/guides/40-load-data/05-continuous-data-pipelines/01-stream.md

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@ SELECT * FROM sensor_readings_stream; -- now empty

`WITH CONSUME` reads the delta exactly once and clears it, so the next round can keep capturing new INSERTs.

-## Example 2: Standard Stream Basics
+## Example 2: Standard Stream (Updates & Deletes)

Switch to Standard mode when you must react to every mutation, including UPDATE or DELETE.
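A minimal sketch of the consume-once behavior this hunk describes, using the `sensor_readings_stream` stream from the surrounding guide (exact clause placement per the linked `WITH CONSUME` reference):

```sql
-- First run drains the rows inserted since the last consume.
SELECT * FROM sensor_readings_stream WITH CONSUME;

-- Consuming advances the stream's offset, so an immediate re-run
-- returns nothing until new INSERTs land on the base table.
SELECT * FROM sensor_readings_stream WITH CONSUME; -- now empty
```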

@@ -109,7 +109,7 @@ Output:

Standard streams capture each change with context: updates show up as DELETE+INSERT on the same `sensor_id`, while standalone deletions/insertions appear individually. Append-Only streams stay empty because they track inserts only.

-## Example 3: Incremental Stream Metrics
+## Example 3: Incremental Stream Join

Join multiple append-only streams to produce incremental KPIs. Because Databend streams keep new rows until they are consumed, you can run the same query after each load. Every execution drains only the new rows via [`WITH CONSUME`](/sql/sql-commands/query-syntax/with-consume), so updates that arrive at different times are still matched on the next iteration.
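A hedged sketch of the Standard-mode setup behind the renamed Example 2 heading, assuming a base table `sensor_readings` with a `temperature` column (both names are illustrative, inferred from `sensor_readings_stream` in the guide):

```sql
-- Standard mode (APPEND_ONLY = false) captures UPDATEs and DELETEs,
-- not just INSERTs; sensor_readings is an assumed base-table name.
CREATE STREAM sensor_readings_std ON TABLE sensor_readings APPEND_ONLY = false;

-- An update on the base table...
UPDATE sensor_readings SET temperature = 99 WHERE sensor_id = 1;

-- ...surfaces in the stream as a DELETE+INSERT pair on the same
-- sensor_id, distinguishable via the change$action metadata column.
SELECT sensor_id, change$action FROM sensor_readings_std;
```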

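And a simplified sketch of Example 3's incremental-KPI shape, consuming a single stream rather than joining several; the `sensors` dimension table and its `site` column are hypothetical, introduced only for illustration:

```sql
-- Drain new readings once, then aggregate the delta per site.
-- Rows consumed here will not reappear on the next run.
WITH fresh AS (
    SELECT * FROM sensor_readings_stream WITH CONSUME
)
SELECT s.site, COUNT(*) AS new_readings
FROM fresh f
JOIN sensors s ON s.sensor_id = f.sensor_id  -- sensors: hypothetical dimension table
GROUP BY s.site;
```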