From d72c9854ba5cb927e28c4a57d35d85456e413eb8 Mon Sep 17 00:00:00 2001 From: savynorem Date: Mon, 14 Aug 2023 11:10:30 -0400 Subject: [PATCH 1/9] updating streams data type page with bike story for CLI commands and hugo short codes for tabbed examples --- docs/data-types/streams.md | 680 ++++++++++++++++++++----------------- 1 file changed, 372 insertions(+), 308 deletions(-) diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md index 807cc994d4..60d691ce92 100644 --- a/docs/data-types/streams.md +++ b/docs/data-types/streams.md @@ -19,7 +19,7 @@ Examples of Redis stream use cases include: * Notifications (e.g., storing a record of each user's notifications in a separate stream) Redis generates a unique ID for each stream entry. -You can use these IDs to retrieve their associated entries later or to read and process all subsequent entries in the stream. +You can use these IDs to retrieve their associated entries later or to read and process all subsequent entries in the stream. Note that because these IDs are related to time, the ones shown here may vary and will be different from the IDs you see in your own Redis instance. Redis streams support several trimming strategies (to prevent streams from growing unbounded) and more than one consumption strategy (see `XREAD`, `XREADGROUP`, and `XRANGE`). 
@@ -34,40 +34,44 @@ See the [complete list of stream commands](https://redis.io/commands/?group=stream).

## Examples

-* Add several temperature readings to a stream
-```
-> XADD temperatures:us-ny:10007 * temp_f 87.2 pressure 29.69 humidity 46
-"1658354918398-0"
-> XADD temperatures:us-ny:10007 * temp_f 83.1 pressure 29.21 humidity 46.5
-"1658354934941-0"
-> XADD temperatures:us-ny:10007 * temp_f 81.9 pressure 28.37 humidity 43.7
-"1658354957524-0"
-```
-
-* Read the first two stream entries starting at ID `1658354934941-0`:
-```
-> XRANGE temperatures:us-ny:10007 1658354934941-0 + COUNT 2
-1) 1) "1658354934941-0"
-   2) 1) "temp_f"
-      2) "83.1"
-      3) "pressure"
-      4) "29.21"
-      5) "humidity"
-      6) "46.5"
-2) 1) "1658354957524-0"
-   2) 1) "temp_f"
-      2) "81.9"
-      3) "pressure"
-      4) "28.37"
-      5) "humidity"
-      6) "43.7"
-```
+* When our racers pass a checkpoint, we add a stream entry for each racer that includes the racer's name, speed, position, and location ID:
+{{< clients-example stream_tutorial xadd >}}
+> XADD race:france * rider Castilla speed 30.2 position 1 location_id 1
+"1691762745152-0"
+> XADD race:france * rider Norem speed 28.8 position 3 location_id 1
+"1691765278160-0"
+> XADD race:france * rider Prickett speed 29.7 position 2 location_id 1
+"1691765289770-0"
+{{< /clients-example >}}
+
+* Read two stream entries starting at ID `1691765278160-0`:
+{{< clients-example stream_tutorial xrange >}}
+> XRANGE race:france 1691765278160-0 + COUNT 2
+1) 1) "1691765278160-0"
+   2) 1) "rider"
+      2) "Norem"
+      3) "speed"
+      4) "28.8"
+      5) "position"
+      6) "3"
+      7) "location_id"
+      8) "1"
+2) 1) "1691765289770-0"
+   2) 1) "rider"
+      2) "Prickett"
+      3) "speed"
+      4) "29.7"
+      5) "position"
+      6) "2"
+      7) "location_id"
+      8) "1"
+{{< /clients-example >}}

* Read up to 100 new stream entries, starting at the end of the stream, and block for up to 300 ms if no entries are being written:
-```
-> XREAD COUNT 100 BLOCK 300 STREAMS temperatures:us-ny:10007 $
+{{< clients-example stream_tutorial xread_block >}}
+> XREAD COUNT 100 BLOCK 300 STREAMS race:france $
(nil)
-```
+{{< /clients-example >}}

## Performance

@@ -84,21 +88,21 @@ See each command's time complexity for the details.

Streams are an append-only data structure. The fundamental write command, called `XADD`, appends a new entry to the specified stream.

-Each stream entry consists of one or more field-value pairs, somewhat like a record or a Redis hash:
+Each stream entry consists of one or more field-value pairs, somewhat like a dictionary or a Redis hash:

-```
-> XADD mystream * sensor-id 1234 temperature 19.8
-1518951480106-0
-```
+{{< clients-example stream_tutorial xadd_2 >}}
+> XADD race:france * rider Castilla speed 29.9 position 1 location_id 2
+"1691765375865-0"
+{{< /clients-example >}}

-The above call to the `XADD` command adds an entry `sensor-id: 1234, temperature: 19.8` to the stream at key `mystream`, using an auto-generated entry ID, which is the one returned by the command, specifically `1518951480106-0`. It gets as its first argument the key name `mystream`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back at our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
+The above call to the `XADD` command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1691765375865-0`. The command takes the key name `race:france` as its first argument, and as its second argument the entry ID that identifies every entry inside a stream. In this case, however, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in simpler terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning to our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
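The `<milliseconds>-<sequence>` auto-ID behavior described above can be sketched with a few lines of Python. This is an illustrative model only, not the actual Redis implementation; the function name `next_stream_id` is ours:

```python
import time

def next_stream_id(last_id, now_ms=None):
    """Sketch of how XADD's `*` auto-ID works: IDs are "<ms>-<seq>".

    A new millisecond restarts the sequence at 0; within the same
    millisecond the sequence is bumped, so IDs are strictly increasing.
    """
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    last_ms, last_seq = (int(part) for part in last_id.split("-"))
    if now_ms > last_ms:
        return f"{now_ms}-0"
    return f"{last_ms}-{last_seq + 1}"

# Two adds in the same millisecond still get distinct, increasing IDs:
first = next_stream_id("0-0", now_ms=1691765375865)
second = next_stream_id(first, now_ms=1691765375865)
print(first, second)  # 1691765375865-0 1691765375865-1
```

This is why the sequence part exists at all: the millisecond clock alone cannot distinguish two entries added in the same millisecond.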
It is possible to get the number of items inside a Stream just using the `XLEN` command:

-```
-> XLEN mystream
-(integer) 1
-```
+{{< clients-example stream_tutorial xlen >}}
+> XLEN race:france
+(integer) 4
+{{< /clients-example >}}

### Entry IDs

@@ -114,26 +118,26 @@ The format of such IDs may look strange at first, and the gentle reader may wond
If for some reason the user needs incremental IDs that are not related to time but are actually associated to another external system ID, as previously mentioned, the `XADD` command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples:

-```
-> XADD somestream 0-1 field value
+{{< clients-example stream_tutorial xadd_id >}}
+> XADD race:usa 0-1 racer Castilla
0-1
-> XADD somestream 0-2 foo bar
+> XADD race:usa 0-2 racer Norem
0-2
-```
+{{< /clients-example >}}

Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one:

-```
-> XADD somestream 0-1 foo bar
+{{< clients-example stream_tutorial xadd_bad_id >}}
+> XADD race:usa 0-1 racer Prickett
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item
-```
+{{< /clients-example >}}

If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below:

-```
-> XADD somestream 0-* baz qux
+{{< clients-example stream_tutorial xadd_7 >}}
+> XADD race:usa 0-* racer Prickett
0-3
-```
+{{< /clients-example >}}

## Getting data from Streams

@@ -149,65 +153,132 @@ Redis Streams support all three of the query modes described above via different
To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive.
The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible.

-```
-> XRANGE mystream - +
-1) 1) 1518951480106-0
-   2) 1) "sensor-id"
-      2) "1234"
-      3) "temperature"
-      4) "19.8"
-2) 1) 1518951482479-0
-   2) 1) "sensor-id"
-      2) "9999"
-      3) "temperature"
-      4) "18.2"
-```
+{{< clients-example stream_tutorial xrange_all >}}
+> XRANGE race:france - +
+1) 1) "1691762745152-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "30.2"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "1"
+2) 1) "1691765278160-0"
+   2) 1) "rider"
+      2) "Norem"
+      3) "speed"
+      4) "28.8"
+      5) "position"
+      6) "3"
+      7) "location_id"
+      8) "1"
+3) 1) "1691765289770-0"
+   2) 1) "rider"
+      2) "Prickett"
+      3) "speed"
+      4) "29.7"
+      5) "position"
+      6) "2"
+      7) "location_id"
+      8) "1"
+4) 1) "1691765375865-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "29.9"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "2"
+{{< /clients-example >}}

Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way.
For instance, if I want to query a two milliseconds period I could use: -``` -> XRANGE mystream 1518951480106 1518951480107 -1) 1) 1518951480106-0 - 2) 1) "sensor-id" - 2) "1234" - 3) "temperature" - 4) "19.8" -``` - -I have only a single entry in this range, however in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. We start adding 10 items with `XADD` (I won't show that, lets assume that the stream `mystream` was populated with 10 items). To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. - -``` -> XRANGE mystream - + COUNT 2 -1) 1) 1519073278252-0 - 2) 1) "foo" - 2) "value_1" -2) 1) 1519073279157-0 - 2) 1) "foo" - 2) "value_2" -``` - -In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1519073279157-0` and add the prefix `(` to it. The resulting exclusive range interval, that is `(1519073279157-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: - -``` -> XRANGE mystream (1519073279157-0 + COUNT 2 -1) 1) 1519073280281-0 - 2) 1) "foo" - 2) "value_3" -2) 1) 1519073281432-0 - 2) 1) "foo" - 2) "value_4" -``` - -And so forth. Since `XRANGE` complexity is *O(log(N))* to seek, and then *O(M)* to return M elements, with a small count the command has a logarithmic time complexity, which means that each step of the iteration is fast. So `XRANGE` is also the de facto *streams iterator* and does not require an **XSCAN** command. 
+{{< clients-example stream_tutorial xrange_time >}}
+> XRANGE race:france 1691765375864 1691765375866
+1) 1) "1691765375865-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "29.9"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "2"
+{{< /clients-example >}}
+
+I have only a single entry in this range; however, in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example, assuming that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2.
+
+{{< clients-example stream_tutorial xrange_step_1 >}}
+> XRANGE race:france - + COUNT 2
+1) 1) "1691762745152-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "30.2"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "1"
+2) 1) "1691765278160-0"
+   2) 1) "rider"
+      2) "Norem"
+      3) "speed"
+      4) "28.8"
+      5) "position"
+      6) "3"
+      7) "location_id"
+      8) "1"
+{{< /clients-example >}}
+
+In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1691765278160-0`, and add the prefix `(` to it.
The resulting exclusive range interval, that is `(1691765278160-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call:
+
+{{< clients-example stream_tutorial xrange_step_2 >}}
+> XRANGE race:france (1691765278160-0 + COUNT 2
+1) 1) "1691765289770-0"
+   2) 1) "rider"
+      2) "Prickett"
+      3) "speed"
+      4) "29.7"
+      5) "position"
+      6) "2"
+      7) "location_id"
+      8) "1"
+2) 1) "1691765375865-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "29.9"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "2"
+{{< /clients-example >}}
+
+Now that we've retrieved all 4 items from a stream that only holds 4 entries, if we try to get more items, we'll get an empty array:
+
+{{< clients-example stream_tutorial xrange_empty >}}
+> XRANGE race:france (1691765375865-0 + COUNT 2
+(empty array)
+{{< /clients-example >}}
+
+Since `XRANGE` complexity is *O(log(N))* to seek, and then *O(M)* to return M elements, with a small count the command has a logarithmic time complexity, which means that each step of the iteration is fast. So `XRANGE` is also the de facto *streams iterator* and does not require an **XSCAN** command.

The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements in inverted order, so a practical use for `XREVRANGE` is to check what is the last item in a Stream:

-```
-> XREVRANGE mystream + - COUNT 1
-1) 1) 1519073287312-0
-   2) 1) "foo"
-      2) "value_10"
-```
+{{< clients-example stream_tutorial xrevrange >}}
+> XREVRANGE race:france + - COUNT 1
+1) 1) "1691765375865-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "29.9"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "2"
+{{< /clients-example >}}

Note that the `XREVRANGE` command takes the *start* and *stop* arguments in reverse order.

@@ -221,26 +292,38 @@ When we do not want to access items by a range in a stream, usually what we want
The command that provides the ability to listen for new messages arriving into a stream is called `XREAD`.
It's a bit more complex than `XRANGE`, so we'll start showing simple forms, and later the whole command layout will be provided.

-```
-> XREAD COUNT 2 STREAMS mystream 0
-1) 1) "mystream"
-   2) 1) 1) 1519073278252-0
-         2) 1) "foo"
-            2) "value_1"
-      2) 1) 1519073279157-0
-         2) 1) "foo"
-            2) "value_2"
-```
+{{< clients-example stream_tutorial xread >}}
+> XREAD COUNT 2 STREAMS race:france 0
+1) 1) "race:france"
+   2) 1) 1) "1691762745152-0"
+         2) 1) "rider"
+            2) "Castilla"
+            3) "speed"
+            4) "30.2"
+            5) "position"
+            6) "1"
+            7) "location_id"
+            8) "1"
+      2) 1) "1691765278160-0"
+         2) 1) "rider"
+            2) "Norem"
+            3) "speed"
+            4) "28.8"
+            5) "position"
+            6) "3"
+            7) "location_id"
+            8) "1"
+{{< /clients-example >}}

The above is the non-blocking form of `XREAD`. Note that the **COUNT** option is not mandatory, in fact the only mandatory option of the command is the **STREAMS** option, that specifies a list of keys together with the corresponding maximum ID already seen for each stream by the calling consumer, so that the command will provide the client only with messages with an ID greater than the one we specified.

-In the above command we wrote `STREAMS mystream 0` so we want all the messages in the Stream `mystream` having an ID greater than `0-0`. As you can see in the example above, the command returns the key name, because actually it is possible to call this command with more than one key to read from different streams at the same time. I could write, for instance: `STREAMS mystream otherstream 0 0`. Note how after the **STREAMS** option we need to provide the key names, and later the IDs. For this reason, the **STREAMS** option must always be the last option.
+In the above command we wrote `STREAMS race:france 0` so we want all the messages in the Stream `race:france` having an ID greater than `0-0`.
As you can see in the example above, the command returns the key name, because actually it is possible to call this command with more than one key to read from different streams at the same time. I could write, for instance: `STREAMS race:france race:italy 0 0`. Note how after the **STREAMS** option we need to provide the key names, and later the IDs. For this reason, the **STREAMS** option must always be the last option. Any other options must come before the **STREAMS** option.

Apart from the fact that `XREAD` can access multiple streams at once, and that we are able to specify the last ID we own to just get newer messages, in this simple form the command is not doing something so different compared to `XRANGE`. However, the interesting part is that we can turn `XREAD` into a *blocking command* easily, by specifying the **BLOCK** argument:

```
-> XREAD BLOCK 0 STREAMS mystream $
+> XREAD BLOCK 0 STREAMS race:france $
```

Note that in the example above, other than removing **COUNT**, I specified the new **BLOCK** option with a timeout of 0 milliseconds (that means to never timeout). Moreover, instead of passing a normal ID for the stream `race:france` I passed the special ID `$`. This special ID means that `XREAD` should use as last ID the maximum ID already stored in the stream `race:france`, so that we will receive only *new* messages, starting from the time we started listening. This is similar to the `tail -f` Unix command in some way.

@@ -306,52 +389,46 @@ Now it's time to zoom in to see the fundamental consumer group commands.
They ar
## Creating a consumer group

-Assuming I have a key `mystream` of type stream already existing, in order to create a consumer group I just need to do the following:
+Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following:

-```
-> XGROUP CREATE mystream mygroup $
+{{< clients-example stream_tutorial xgroup_create >}}
+> XGROUP CREATE race:france france_location $
OK
-```
+{{< /clients-example >}}

As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`. This is needed because the consumer group, among the other states, must have an idea about what message to serve next at the first consumer connecting, that is, what was the *last message ID* when the group was just created. If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead the consumer group will consume *all* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages.

`XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument:

-```
-> XGROUP CREATE newstream mygroup $ MKSTREAM
+{{< clients-example stream_tutorial xgroup_create_mkstream >}}
+> XGROUP CREATE race:italy italy_racers $ MKSTREAM
OK
-```
+{{< /clients-example >}}

Now that the consumer group is created we can immediately try to read messages via the consumer group using the `XREADGROUP` command. We'll read from consumers, that we will call Alice and Bob, to see how the system will return different messages to Alice or Bob.
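The meaning of the ID passed to `XGROUP CREATE` can be illustrated with a tiny in-memory model (a sketch of the semantics only, not the real implementation; `entries_for_group` is our own helper name): the group serves only entries whose IDs are strictly greater than its last-delivered ID.

```python
def entries_for_group(stream, last_delivered_id):
    """Entries the group would still deliver to a consumer asking for
    new messages: those with an ID strictly greater than the group's
    last-delivered ID. Plain string comparison works here only because
    the sample IDs all have the same width; real ID ordering compares
    the millisecond and sequence parts numerically.
    """
    return [(eid, fields) for eid, fields in stream if eid > last_delivered_id]

stream = [("0-1", {"racer": "Castilla"}), ("0-2", {"racer": "Norem"})]

history = entries_for_group(stream, "0-0")   # group created with `0`
new_only = entries_for_group(stream, "0-2")  # group created with `$` (current max ID)
print(len(history), len(new_only))  # 2 0
```

So `0` replays the whole history, while `$` at creation time means "only entries added from now on".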
`XREADGROUP` is very similar to `XREAD` and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in `XREAD`.

-Before reading from the stream, let's put some messages inside:
-
-```
-> XADD mystream * message apple
-1526569495631-0
-> XADD mystream * message orange
-1526569498055-0
-> XADD mystream * message strawberry
-1526569506935-0
-> XADD mystream * message apricot
-1526569535168-0
-> XADD mystream * message banana
-1526569544280-0
-```
-
-Note: *here message is the field name, and the fruit is the associated value, remember that stream items are small dictionaries.*
-
-It is time to try reading something using the consumer group:
-
-```
-> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >
-1) 1) "mystream"
-   2) 1) 1) 1526569495631-0
-         2) 1) "message"
-            2) "apple"
-```
+We'll add racers to the `race:italy` stream and then try reading something using the consumer group:
+
+Note: *here `racer` is the field name, and the rider's name is the associated value; remember that stream items are small dictionaries.*
+
+{{< clients-example stream_tutorial xgroup_read >}}
+> XADD race:italy * racer Castilla
+"1691766245113-0"
+> XADD race:italy * racer Royce
+"1691766256307-0"
+> XADD race:italy * racer Sam-Bodden
+"1691766261145-0"
+> XADD race:italy * racer Prickett
+"1691766685178-0"
+> XADD race:italy * racer Norem
+"1691766698493-0"
+> XREADGROUP GROUP italy_racers Alice COUNT 1 STREAMS race:italy >
+1) 1) "race:italy"
+   2) 1) 1) "1691766245113-0"
+         2) 1) "racer"
+            2) "Castilla"
+{{< /clients-example >}}

`XREADGROUP` replies are just like `XREAD` replies. Note however the `GROUP <group-name> <consumer-name>` provided above.
It states that I want to read from the stream using the consumer group `italy_racers` and that I'm the consumer `Alice`. Every time a consumer performs an operation with a consumer group, it must specify its name, uniquely identifying this consumer inside the group.

@@ -364,38 +441,38 @@ This is almost always what you want, however it is also possible to specify a re
We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about Castilla:

-```
-> XREADGROUP GROUP mygroup Alice STREAMS mystream 0
-1) 1) "mystream"
-   2) 1) 1) 1526569495631-0
-         2) 1) "message"
-            2) "apple"
-```
+{{< clients-example stream_tutorial xgroup_read_id >}}
+> XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0
+1) 1) "race:italy"
+   2) 1) 1) "1691766245113-0"
+         2) 1) "racer"
+            2) "Castilla"
+{{< /clients-example >}}

However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so the system will no longer report anything:

-```
-> XACK mystream mygroup 1526569495631-0
+{{< clients-example stream_tutorial xack >}}
+> XACK race:italy italy_racers 1691766245113-0
(integer) 1
-> XREADGROUP GROUP mygroup Alice STREAMS mystream 0
-1) 1) "mystream"
-   2) (empty list or set)
+> XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0
+1) 1) "race:italy"
+   2) (empty array)
+{{< /clients-example >}}

Don't worry if you don't yet know how `XACK` works; the idea is just that processed messages are no longer part of the history that we can access.
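The delivery-then-acknowledge behavior just shown can be sketched as a toy in-memory model (illustrative only; this is not how Redis implements it, and `ConsumerGroupModel` is our own name): `>` delivers never-delivered entries and records them in the pending entries list (PEL), a concrete ID replays the consumer's own pending history, and `XACK` removes an entry from the PEL.

```python
class ConsumerGroupModel:
    """Toy model of XREADGROUP/XACK semantics over an in-memory stream."""

    def __init__(self, stream):
        self.stream = stream            # list of (entry_id, fields)
        self.last_delivered = "0-0"
        self.pel = {}                   # entry_id -> consumer name

    def readgroup(self, consumer, last_id, count=None):
        if last_id == ">":              # deliver new, never-seen entries
            new = [e for e in self.stream if e[0] > self.last_delivered]
            new = new[:count] if count is not None else new
            for entry_id, _ in new:
                self.pel[entry_id] = consumer
                self.last_delivered = entry_id
            return new
        # history mode: replay this consumer's still-pending entries
        return [e for e in self.stream
                if e[0] > last_id and self.pel.get(e[0]) == consumer]

    def ack(self, entry_id):
        return 1 if self.pel.pop(entry_id, None) else 0

group = ConsumerGroupModel([("0-1", {"racer": "Castilla"}),
                            ("0-2", {"racer": "Royce"})])
assert group.readgroup("Alice", ">", count=1) == [("0-1", {"racer": "Castilla"})]
assert group.readgroup("Alice", "0-0") == [("0-1", {"racer": "Castilla"})]  # pending
assert group.ack("0-1") == 1
assert group.readgroup("Alice", "0-0") == []   # acknowledged, history is empty
```

Note how acknowledging the entry removes it from the PEL, which is exactly why the second history read returns nothing.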
Now it's Bob's turn to read something:

-```
-> XREADGROUP GROUP mygroup Bob COUNT 2 STREAMS mystream >
-1) 1) "mystream"
-   2) 1) 1) 1526569498055-0
-         2) 1) "message"
-            2) "orange"
-      2) 1) 1526569506935-0
-         2) 1) "message"
-            2) "strawberry"
-```
+{{< clients-example stream_tutorial xgroup_read_bob >}}
+> XREADGROUP GROUP italy_racers Bob COUNT 2 STREAMS race:italy >
+1) 1) "race:italy"
+   2) 1) 1) "1691766256307-0"
+         2) 1) "racer"
+            2) "Royce"
+      2) 1) "1691766261145-0"
+         2) 1) "racer"
+            2) "Sam-Bodden"
+{{< /clients-example >}}

Bob asked for a maximum of two messages and is reading via the same group `italy_racers`. So what happens is that Redis reports just *new* messages. As you can see the "Castilla" message is not delivered, since it was already delivered to Alice, so Bob gets Royce and Sam-Bodden, and so forth.

@@ -478,14 +555,14 @@ The first step of this process is just a command that provides observability of
This is a read-only command which is always safe to call and will not change ownership of any message. In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group.

-```
-> XPENDING mystream mygroup
+{{< clients-example stream_tutorial xpending >}}
+> XPENDING race:italy italy_racers
1) (integer) 2
-2) 1526569498055-0
-3) 1526569506935-0
+2) "1691766256307-0"
+3) "1691766261145-0"
4) 1) 1) "Bob"
      2) "2"
-```
+{{< /clients-example >}}

When called in this way, the command outputs the total number of pending messages in the consumer group (two in this case), the lower and higher message ID among the pending messages, and finally a list of consumers and the number of pending messages they have. We have only Bob with two pending messages because the single message that Alice requested was acknowledged using `XACK`.
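That summary reply can be derived entirely from the pending entries list. Here is a small sketch (not the real command, and `xpending_summary` is our own name) computing the same four pieces of information from a PEL mapping entry IDs to owning consumers:

```python
from collections import Counter

def xpending_summary(pel):
    """Sketch of XPENDING's summary form, from a PEL dict
    (entry ID -> owning consumer): returns (total pending,
    smallest ID, greatest ID, per-consumer counts)."""
    if not pel:
        return (0, None, None, [])
    ids = sorted(pel)  # same-width IDs sort correctly as strings
    per_consumer = sorted(Counter(pel.values()).items())
    return (len(ids), ids[0], ids[-1], per_consumer)

pel = {"1691766256307-0": "Bob", "1691766261145-0": "Bob"}
print(xpending_summary(pel))
# (2, '1691766256307-0', '1691766261145-0', [('Bob', 2)])
```

This mirrors the reply above: two pending entries, the lowest and highest pending IDs, and Bob owning both.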
@@ -498,31 +575,31 @@ XPENDING [[IDLE ] [
By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but won't use this feature in the following example.

-```
-> XPENDING mystream mygroup - + 10
-1) 1) 1526569498055-0
-   2) "Bob"
-   3) (integer) 74170458
-   4) (integer) 1
-2) 1) 1526569506935-0
-   2) "Bob"
-   3) (integer) 74170458
-   4) (integer) 1
-```
+{{< clients-example stream_tutorial xpending_plus_minus >}}
+> XPENDING race:italy italy_racers - + 10
+1) 1) "1691766256307-0"
+   2) "Bob"
+   3) (integer) 60644
+   4) (integer) 1
+2) 1) "1691766261145-0"
+   2) "Bob"
+   3) (integer) 60644
+   4) (integer) 1
+{{< /clients-example >}}

Now we have the details for each message: the ID, the consumer name, the *idle time* in milliseconds, which is how many milliseconds have passed since the last time the message was delivered to some consumer, and finally the number of times that a given message was delivered.

-We have two messages from Bob, and they are idle for 74170458 milliseconds, about 20 hours.
+We have two messages from Bob, and they are idle for 60000+ milliseconds, about a minute.

Note that nobody prevents us from checking what the first message content was by just using `XRANGE`.

-```
-> XRANGE mystream 1526569498055-0 1526569498055-0
-1) 1) 1526569498055-0
-   2) 1) "message"
-      2) "orange"
-```
+{{< clients-example stream_tutorial xrange_pending >}}
+> XRANGE race:italy 1691766256307-0 1691766256307-0
+1) 1) "1691766256307-0"
+   2) 1) "racer"
+      2) "Royce"
+{{< /clients-example >}}

-We have just to repeat the same ID twice in the arguments. Now that we have some ideas, Alice may decide that after 20 hours of not processing messages, Bob will probably not recover in time, and it's time to *claim* such messages and resume the processing in place of Bob.
To do so, we use the `XCLAIM` command.
+We have just to repeat the same ID twice in the arguments. Now that we have some ideas, Alice may decide that after 1 minute of not processing messages, Bob will probably not recover quickly, and it's time to *claim* such messages and resume the processing in place of Bob. To do so, we use the `XCLAIM` command.

This command is very complex and full of options in its full form, since it is used for replication of consumer groups changes, but we'll use just the arguments that we need normally. In this case it is as simple as:

```
XCLAIM <key> <group> <consumer> <min-idle-time> <ID-1> <ID-2> ... <ID-N>
```

Basically we say, for this specific key and group, I want that the message IDs specified will change ownership, and will be assigned to the specified consumer name `<consumer>`. However, we also provide a minimum idle time, so that the operation will only work if the idle time of the mentioned messages is greater than the specified idle time. This is useful because maybe two clients are retrying to claim a message at the same time:

```
Client 1: XCLAIM race:italy italy_racers Alice 60000 1691766256307-0
Client 2: XCLAIM race:italy italy_racers Lora 60000 1691766256307-0
```

However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing).
This is the result of the command execution:

-```
-> XCLAIM mystream mygroup Alice 3600000 1526569498055-0
-1) 1) 1526569498055-0
-   2) 1) "message"
-      2) "orange"
-```
+{{< clients-example stream_tutorial xclaim >}}
+> XCLAIM race:italy italy_racers Alice 60000 1691766256307-0
+1) 1) "1691766256307-0"
+   2) 1) "racer"
+      2) "Royce"
+{{< /clients-example >}}

The message was successfully claimed by Alice, who can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering.

@@ -569,24 +646,25 @@ XAUTOCLAIM [COUNT count] [JUSTI
So, in the example above, I could have used automatic claiming to claim a single message like this:

-```
-> XAUTOCLAIM mystream mygroup Alice 3600000 0-0 COUNT 1
-1) 1526569498055-0
-2) 1) 1526569498055-0
-   2) 1) "message"
-      2) "orange"
-```
+{{< clients-example stream_tutorial xautoclaim >}}
+> XAUTOCLAIM race:italy italy_racers Alice 60000 0-0 COUNT 1
+1) "1691766261145-0"
+2) 1) 1) "1691766256307-0"
+      2) 1) "racer"
+         2) "Royce"
+{{< /clients-example >}}

Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. The stream ID is a cursor, and I can use it in my next call to continue in claiming idle pending messages:

-```
-> XAUTOCLAIM mystream mygroup Lora 3600000 1526569498055-0 COUNT 1
-1) 0-0
-2) 1) 1526569506935-0
-   2) 1) "message"
-      2) "strawberry"
-```
+{{< clients-example stream_tutorial xautoclaim_cursor >}}
+> XAUTOCLAIM race:italy italy_racers Lora 60000 1691766261145-0 COUNT 1
+1) "0-0"
+2) 1) 1) "1691766261145-0"
+      2) 1) "racer"
+         2) "Sam-Bodden"
+{{< /clients-example >}}
+
@@ -604,81 +682,67 @@ However we may want to do more than that, and the `XINFO` command is an observab This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM ** reports information about the stream itself. -``` -> XINFO STREAM mystream +{{< clients-example streams_toturial xinfo >}} +> XINFO STREAM race:italy 1) "length" - 2) (integer) 2 + 2) (integer) 5 3) "radix-tree-keys" 4) (integer) 1 5) "radix-tree-nodes" 6) (integer) 2 7) "last-generated-id" - 8) "1638125141232-0" - 9) "max-deleted-entryid" -10) "0-0" -11) "entries-added" -12) (integer) 2 -13) "groups" -14) (integer) 1 -15) "first-entry" -16) 1) "1638125133432-0" - 2) 1) "message" - 2) "apple" -17) "last-entry" -18) 1) "1638125141232-0" - 2) 1) "message" - 2) "banana" -``` + 8) "1691766698493-0" + 9) "groups" +10) (integer) 1 +11) "first-entry" +12) 1) "1691766245113-0" + 2) 1) "racer" + 2) "Castilla" +13) "last-entry" +14) 1) "1691766698493-0" + 2) 1) "racer" + 2) "Norem" +{{< /clients-example >}} The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. Another piece of information available is the number of consumer groups associated with this stream. We can dig further asking for more information about the consumer groups. 
-``` -> XINFO GROUPS mystream -1) 1) "name" - 2) "mygroup" - 3) "consumers" - 4) (integer) 2 - 5) "pending" - 6) (integer) 2 - 7) "last-delivered-id" - 8) "1638126030001-0" - 9) "entries-read" - 10) (integer) 2 - 11) "lag" - 12) (integer) 0 -2) 1) "name" - 2) "some-other-group" - 3) "consumers" - 4) (integer) 1 - 5) "pending" - 6) (integer) 0 - 7) "last-delivered-id" - 8) "1638126028070-0" - 9) "entries-read" - 10) (integer) 1 - 11) "lag" - 12) (integer) 1 -``` +{{< clients-example streams_toturial xinfo_groups >}} +> XINFO GROUPS race:italy +1) 1) "name" + 2) "italy_racers" + 3) "consumers" + 4) (integer) 3 + 5) "pending" + 6) (integer) 2 + 7) "last-delivered-id" + 8) "1691766261145-0" +{{< /clients-example >}} As you can see in this and in the previous output, the `XINFO` command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking compatibility with older clients. Other commands that must be more bandwidth efficient, like `XPENDING`, just report the information without the field names. The output of the example above, where the **GROUPS** subcommand is used, should be clear observing the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group. 
-```
-> XINFO CONSUMERS mystream mygroup
-1) 1) name
+{{< clients-example streams_toturial xinfo_consumers >}}
+> XINFO CONSUMERS race:italy italy_racers
+1) 1) "name"
    2) "Alice"
-   3) pending
+   3) "pending"
    4) (integer) 1
-   5) idle
-   6) (integer) 9104628
-2) 1) name
+   5) "idle"
+   6) (integer) 130215
+2) 1) "name"
    2) "Bob"
-   3) pending
+   3) "pending"
+   4) (integer) 0
+   5) "idle"
+   6) (integer) 2581506
+3) 1) "name"
+   2) "Lora"
+   3) "pending"
    4) (integer) 1
-   5) idle
-   6) (integer) 83841983
-```
+   5) "idle"
+   6) (integer) 102218
+{{< /clients-example >}}
 
 In case you do not remember the syntax of the command, just ask the command itself for help:
 
@@ -715,45 +779,45 @@ So basically Kafka partitions are more similar to using N different Redis keys,
 Many applications do not want to collect data into a stream forever.
 Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to store the history for, potentially, decades to come.
 Redis streams have some support for this. One is the **MAXLEN** option of the `XADD` command.
This option is very simple to use: -``` -> XADD mystream MAXLEN 2 * value 1 -1526654998691-0 -> XADD mystream MAXLEN 2 * value 2 -1526654999635-0 -> XADD mystream MAXLEN 2 * value 3 -1526655000369-0 -> XLEN mystream +{{< clients-example stream_tutorial maxlen >}} +> XADD race:italy MAXLEN 2 * racer Jones +"1691769379388-0" +> XADD race:italy MAXLEN 2 * racer Wood +"1691769438199-0" +> XADD race:italy MAXLEN 2 * racer Henshaw +"1691769502417-0" +> XLEN race:italy (integer) 2 -> XRANGE mystream - + -1) 1) 1526654999635-0 - 2) 1) "value" - 2) "2" -2) 1) 1526655000369-0 - 2) 1) "value" - 2) "3" -``` +> XRANGE race:italy - + +1) 1) "1691769438199-0" + 2) 1) "racer" + 2) "Wood" +2) 1) "1691769502417-0" + 2) 1) "racer" + 2) "Henshaw" +{{< /clients-example >}} Using **MAXLEN** the old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. There is currently no option to tell the stream to just retain items that are not older than a given period, because such command, in order to run consistently, would potentially block for a long time in order to evict items. Imagine for example what happens if there is an insertion spike, then a long pause, and another insertion, all with the same maximum time. The stream would block to evict the data that became too old during the pause. So it is up to the user to do some planning and understand what is the maximum stream length desired. Moreover, while the length of the stream is proportional to the memory used, trimming by time is less simple to control and anticipate: it depends on the insertion rate which often changes over time (and when it does not change, then to just trim by size is trivial). However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. 
So it's possible to use the command in the following special form:
 
 ```
-XADD mystream MAXLEN ~ 1000 * ... entry fields here ...
+XADD race:italy MAXLEN ~ 1000 * ... entry fields here ...
 ```
 
 The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want.
 
 There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself:
 
-```
-> XTRIM mystream MAXLEN 10
-```
+{{< clients-example stream_tutorial xtrim >}}
+> XTRIM race:italy MAXLEN 10
+{{< /clients-example >}}
 
 Or, as for the `XADD` option:
 
-```
+{{< clients-example stream_tutorial xtrim2 >}}
-> XTRIM mystream MAXLEN ~ 10
+> XTRIM race:italy MAXLEN ~ 10
-```
+{{< /clients-example >}}
 
 However, `XTRIM` is designed to accept different trimming strategies. Another trimming strategy is **MINID**, that evicts entries with IDs lower than the one specified.
 
@@ -793,21 +857,21 @@ So when designing an application using Redis streams and consumer groups, make s
 
 Streams also have a special command for removing items from the middle of a stream, just by ID. Normally for an append only data structure this may look like an odd feature, but it is actually useful for applications involving, for instance, privacy regulations.
The command is called `XDEL` and receives the name of the stream followed by the IDs to delete: -``` -> XRANGE mystream - + COUNT 2 -1) 1) 1526654999635-0 - 2) 1) "value" - 2) "2" -2) 1) 1526655000369-0 - 2) 1) "value" - 2) "3" -> XDEL mystream 1526654999635-0 +{{< clients-example stream_tutorial xdel >}} +> XRANGE race:italy - + COUNT 2 +1) 1) "1691769438199-0" + 2) 1) "racer" + 2) "Wood" +2) 1) "1691769502417-0" + 2) 1) "racer" + 2) "Henshaw" +> XDEL race:italy 1691769502417-0 (integer) 1 -> XRANGE mystream - + COUNT 2 -1) 1) 1526655000369-0 - 2) 1) "value" - 2) "3" -``` +> XRANGE race:italy - + COUNT 2 +1) 1) "1691769438199-0" + 2) 1) "racer" + 2) "Wood" +{{< /clients-example >}} However in the current implementation, memory is not really reclaimed until a macro node is completely empty, so you should not abuse this feature. From 2ed656648a2f3423052694e43e61515f0c822ef2 Mon Sep 17 00:00:00 2001 From: savynorem Date: Mon, 14 Aug 2023 11:22:53 -0400 Subject: [PATCH 2/9] updating wordlist for geo and streams --- wordlist | 998 ++++++++++++++++++++++++++++--------------------------- 1 file changed, 502 insertions(+), 496 deletions(-) diff --git a/wordlist b/wordlist index 504bc19b16..257e9657de 100644 --- a/wordlist +++ b/wordlist @@ -1,481 +1,52 @@ - +3:00 AM +4:00 AM +5:00 AM +6:00 AM .rdb -0s -0x00060007 -0x00MMmmpp -100MB -100k -10GB -10k -128MB -12k -1GB -1s -1th -2GB -300ms -30ms -32MB -32bit -3GB -3MB -3am -3c3a0c -3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e -4GB -4am -4k -500MB -512MB -5GB -5am -60s -6am -6sync -80ms -85MB -8MB -8ms -90s -97a3a64667477371c4479320d683e4c8db5858b1 -A1 -ACKs -ACLs -AMD64 -AOF -AOFRW -AOF_START -APIs -ARGV -ASN -AUTOID -Aioredlock -Alibaba -Arity -Async -Asyncio -Atomicvar -Auth -B1 -B2 -B3 -BCC's -BDFL-style -birthyear -BPF -BPF's -BPF-optimized -Benchmarking -BigNumber -BitOp -Bitfields -Bitwise -brpop -C1 -C2 -C3 -C4 -C5 -CAS -CAs -CFIELD -CKQUORUM -CLI -CLI's -CP -CPUs -CRC -CRC-16 -CRC16 -CRC64 -CRDTs -CRLF -CRLF-terminated 
-CSV -CallReply -CentOS -Changelog -Chemeris -Citrusbyte -CloseKey -Cn -Collina's -Config -ContextFlags -Costin -Craigslist -Ctrl-a -DBs -DLM -DMA -dnf -DNF -DNS -DSL -Deauthenticate -Deauthenticates -Defrag -Deno -Diskless -DistLock -Dynomite -EBADF -EBS -EC2 -EDOM -EEXIST -EFBIG -EINVAL -Enduro -Ergonom -ENOENT -ENOTSUP -EOF -EP -EPEL -EPSG:3785 -EPSG:900913 -ERANGE -Enum -Eval -EventLoop -EventMachine -FLUSHCONFIG -Failover -Failover-based -Failovers -FlameGraph -FreeBSD -FreeString -Fsyncing -GDB -GEODEL -GET-MASTER-ADDR-BY-NAME -go-redis -GPG -Gbit -GeoHashes -Geohash -Geohashes -Geospatial -Github -Gottlieb -Gradle -HashMap -HLL -HLLs -HMAC-SHA256 -HVM -HW -Hacktoberfest -Hardcoded -HashMaps -HashSets -Haversine -Hexastore -hget -hgetall -hincrby -hmget -Hitmeister -Homebrew -Hotspot -hset -HyperLogLog -HyperLogLog. -HyperLogLogs -Hyperloglogs -hyperloglogs -incr -incrby -IOPs -IPC -IPs -IPv4 -IPv6 -IS-MASTER-DOWN-BY-ADDR -Identinal -IoT -incrby_get_mget -Itamar -Jedis -JedisCluster -JedisPool -JedisPooled -JDK -JKS -JSON -JSON-encoded -Janowski -Javadocs -Jemalloc -js -KEYSPACE -Keyspace -KeyspaceNotification -Kleppman's -Kleppmann -L3 -LDB -LF -LFU -LHF -LLOOGG -lmove_lrange -LRU -LRU's -LRU. -LUA -Leaderboards -Leau -Lehmann -Levelgraph -licensor -licensor's -LibLZF -Linode -Liveness -llen -lmove_lrange -lpop -lpop_rpop -lpush_rpush -lrange -ltrim -ltrim_end_of_list -Lua -Lua's -lua-debugging -Lua-to-Redis -Lucraft -M1 -MASTERDOWN -MERCHANTABILITY -MacBook -Matteo -Maxmemory -Memcache -MessagePack -mget -Movablekeys -Mrkris -mset -multisets -NAS -NATted -NFS -NIC -NICs -NOOP -NTP -NUMA -NX -NaN -Nehalem -node-redis -NoSQL -NodeJS -Noordhuis -NullArray -ODOWN -OOM -OR-ing -ORed -OSGEO:41001 -Ok -OpenBSD -OpenSSL -Opteron -ORM -PEL -PELs -PEM -PFAIL -PHPRedisMutex -PID -PMCs -PMU -POP3 -POSIX -POV -PRNG -PV -Parameterization -Pieter -Pipelining -Pool2 -Predis -Prev -Prioglio -Programm -Programmability -PubSub-related -Pubsub -Pubsub. 
-Pydantic -R1 -R2 -RC1 -RC3 -RC4 -RDB -RDB-saving -RedisInsight -REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD -REDISPORT -REPL -RESP2 -RESP2. -RESP3 -RESP3's -RESP3-typed -RESP3. -REdis -RHEL -RM_CreateCommand -RM_CreateStringFromString -RM_IsKeysPositionRequest -RM_KeyAtPosWithFlags -RM_SetCommandInfo -RM_.* -RPC -RSS -RTT -RU101 -RU102 -RU202 -RW -Rebranding -Reconfiguring -Reddit's -RediSearch -Redimension -Redis-rb -Redis-to-Lua -RedisCallReply -RedisConf -RedisHost. -RedisJSON -RedisModule.* -Redisson -Redistributions -Redlock -Redlock-cpp -Redlock-cs -Redlock-php -Redlock-py -Redlock-rb -Redlock4Net -Redsync -Reshard -Resharding -Resque -RetainString -Retwis -Retwis-J -Retwis-RB -Roshi -rpush -Rx/Tx -Rslock -S1 -S2 -S3 -S4 -SaaS -SCP -SDOWN -SHA-256 -SHA1 -SHA256 -SIGBUS -SIGFPE -SIGILL -SIGINT -SIGSEGV -SIGTERM -SSD -SSL -SVGs -SYNC_RDB_START -sadd -sadd_smembers -Sandboxed -Sanfilippo -Sanfilippo's -scard -ScarletLock -sdiff -Selectable -setnx_xx -Sharded -Shuttleworth -sinter -sismember -Slicehost -SmartOS -smismember -Snapchat -Snapcraft -Snapshotting -Solaris-derived -SomeOtherValue -Sonatype -SoundCloud -srem -StackOverflow -StringDMA -Subcommands -T1 -T2 -TCL -TCP -TLS -TLS-enabled -TTL -TTLs -Tthe -Twemproxy -UI -ULID -ULIds -ULIDs -URI -USD -UTF-8 -Unmodifiable -Unregister -Untrusted -Unwatches -VM -VMs -VMware -VPS -ValueN -Variadic -Virtualized -Vladev -WSL -WSL2 -Westmere -XMODEM -XSCAN -XYZ -Xen -Xen-specific -Xeon -YCSB -Yossi -Z1 -ZMODEM -ZPOP -ZSET -ZeroBrane -Zhao +ˈrɛd-ɪs +0s +0x00060007 +0x00MMmmpp +100k +100MB +10GB +10k +128MB +12k +1GB +1s +1th +2GB +300ms +30ms +32bit +32MB +3c3a0c +3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e +3GB +3MB +4GB +4k +500MB +512MB +5GB +60s +6sync +80ms +85MB +8MB +8ms +90s +97a3a64667477371c4479320d683e4c8db5858b1 +A1 acknowledgement +ACKs acl acl-pubsub-default +ACLs ad-hoc +Aioredlock +Alibaba alice allchannels allkeys @@ -484,32 +55,48 @@ allkeys-random allocator allocator's allocators +AMD64 analytics antirez 
antirez's +AOF aof +AOF_START aof-1 aof-2 aof-N +AOFRW api apiname +APIs appendfsync appendonly applicative args +ARGV argv argvs +Arity arity +ASN +Async async +Asyncio atomicity +Atomicvar +Auth auth authenticateClientWithUser -autoloading -Autoloading -autoload -autoloader auto-reconnection autocomplete +AUTOID +autoload +autoloader +autoloading +Autoloading +B1 +B2 +B3 backend backported backslashed @@ -519,50 +106,100 @@ balancer bazzar bc bcc +BCC's bcc's +BDFL-style beforeSleep behaviour benchmarked +Benchmarking benchmarking big-endian +BigNumber +bikes:racing:france +bikes:racing:italy +bikes:racing:usa +bikes:rentable +birthyear bitfield bitfield +Bitfields bitfields +BitOp bitop +Bitwise bitwise bitwise bool +BPF +BPF-optimized +BPF's breakpoint broadcasted +brpop bt btree1-az +C1 +C2 +C3 +C4 +C5 +CallReply cancelled cardinalities cardinality +CAS +CAs casted cd +CentOS +CFIELD +Changelog changelogs charset +Chemeris cheprasov +Citrusbyte cjson +CKQUORUM cleartext +CLI cli +CLI's +CloseKey cluster-config-file Cmd cmsgpack +Cn codename codenamed +Collina's commandstats commnad +Config config config-file configEpoch configs const +ContextFlags +Costin +CP cpu cpu-profiling +CPUs +Craigslist +CRC +CRC-16 +CRC16 +CRC64 +CRDTs +CRLF +CRLF-terminated cron cryptographic +CSV +Ctrl-a ctx daemonize daemonized @@ -570,14 +207,18 @@ daemontools dataset datastore dbid +DBs de de-serialization de-serialize deallocated dearmor +Deauthenticate deauthenticate deauthenticated +Deauthenticates deduplicated +Defrag defrag defragging defragment @@ -585,6 +226,7 @@ defragmentable defragmentation defragmented del +Deno deny-oom deserialize deserialized @@ -593,11 +235,27 @@ desync desynchronize dev dir +Diskless diskless +DistLock distlock +DLM +DMA +dnf +DNF +DNS +DSL dup-sentinel -eBPF +Dynomite earts +EBADF +eBPF +EBS +EC2 +EDOM +EEXIST +EFBIG +EINVAL ele emented enable-protected-configs @@ -610,20 +268,34 @@ end-slot2 end-slotN endian endianness +Enduro +ENOENT +ENOTSUP 
+Enum enum enum_val enum_vals enums +EOF +EP +EPEL epel-release epoll +EPSG:3785 +EPSG:900913 +ERANGE +Ergonom errno error1 error2 errorstats ethernet +Eval eval eval-intro +EventLoop eventloop +EventMachine everysec executables expiries @@ -631,7 +303,9 @@ explainer explainers facto factorializing +Failover failover +Failover-based failover-detected failover-end failover-end-for-timeout @@ -639,6 +313,7 @@ failover-state-reconf-slaves failover-state-select-slave failover-state-send-slaveof-noone failover. +Failovers failovers fanout faq @@ -650,45 +325,97 @@ firewalling first-arg first-args firstkey +FlameGraph +FLUSHCONFIG fmt foo0 foo1 foo2 formatter +FreeBSD +FreeString frequencyonly fsSL fsync +fsynced +Fsyncing fsyncing +fsyncs func +Gbit +GDB gdb geo +geo_tutorial +geoadd +GEODEL +Geohash geohash geohash-encoded +GeoHashes +Geohashes +geosearch +Geospatial geospatial +GET-MASTER-ADDR-BY-NAME getkeys-api +Github github globals +go-redis +Gottlieb +GPG gpg +Gradle +Hacktoberfest hacktoberfest handleClientsWithPendingWrites +Hardcoded hardcoded hardlinks +HashMap +HashMaps HashSet +HashSets +Haversine Healthcheck healthchecks +Hexastore hexastore +hget +hgetall +hincrby hiredis +Hitmeister +HLL +HLLs +HMAC-SHA256 +hmget holdApplicationUntilProxyStarts +Homebrew hostname hostnames +Hotspot hotspots +hset +HVM +HW +HyperLogLog hyperloglog +HyperLogLog. 
+HyperLogLogs +Hyperloglogs +hyperloglogs i8 iamonds IANA +Identinal idletime idx idx'-th +incr +incrby +incrby_get_mget indexable ing init @@ -700,51 +427,95 @@ internals-vm intsets invalidations iojob +IOPs iostat +IoT ip ip:port +IPC +IPs +IPv4 +IPv6 +IS-MASTER-DOWN-BY-ADDR Istio +Itamar iterable iteratively +ition +Janowski +Javadocs +JDK +Jedis jedis +JedisCluster jedisClusterNodes +JedisPool +JedisPooled +Jemalloc jemalloc +JKS jpeg +js +JSON +JSON-encoded kB keepalive -keyN keylen +keyN keyname keynum keynumidx keyrings +KEYSPACE +Keyspace keyspace keyspace-notifications +KeyspaceNotification keyspec keystep keytool +Kleppman's +Kleppmann knockknock kqueue +L3 last-failover-wins -lastVoteEpoch lastkey +lastVoteEpoch late-defrag latencies latencystats launchd lazyfree-lazy-user-flush +LDB ldb leaderboard +Leaderboards leaderboards +Leau +Lehmann len lenptr +Levelgraph lexicographically +LF +LFU +LHF libc +LibLZF libssl-dev +licensor +licensor's linenoise linkTitle +Linode little-endian +Liveness liveness +llen +LLOOGG +lmove_lrange +lmove_lrange ln LoadX509KeyPair localhost @@ -754,30 +525,58 @@ logics loglevel lookups loopback +lpop +lpop_rpop lpush +lpush_rpush +lrange +LRU lru_cache -lsb-release +LRU. 
+LRU's lsb_release +lsb-release +ltrim +ltrim_end_of_list +LUA +Lua lua-api +lua-debugging lua-replicate-commands +Lua-to-Redis +Lua's lubs +Lucraft +M1 +MacBook macOS macroscopically malloc +MASTERDOWN +Matteo matteocollina +Maxmemory maxmemory +Memcache memcached memset memtest86 memtier_benchmark +MERCHANTABILITY +MessagePack metatag +mget middleware miranda misconfiguration misconfigured -moduleType modules-api-ref +moduleType +Movablekeys movablekeys +Mrkris +mset +multisets mutex mylist mymaster @@ -785,55 +584,96 @@ myuser myzset namespace namespacing +NaN +NAS natively +NATted +Nehalem netcat newjobs +NFS +NIC +NICs nils no-appendfsync-on-rewrite +node-redis node-redlock +NodeJS noeviction +non-reachability non-TCP non-TLS -non-reachability non-virtualized nonprintable +NOOP +Noordhuis nopass +NoSQL notify-keyspace-events notifyKeyspaceEvent NRedisStack +NTP +NullArray +nullarray num-items +NUMA numactl numkeys -nullarray +NX nx observability +ODOWN odown +Ok ok oldval oneof onwards +OOM +OpenBSD +OpenSSL openssl +Opteron optionals +OR-ing +ORed +ORM +OSGEO:41001 overcommit p50 p999 Packagist pades pageview +Parameterization parameterization parametrize params parsable +PEL +PELs +PEM perf perf_events performance-on-cpu +PFAIL php-redis-lock +PHPRedisMutex +PID pid pidfile +Pieter pipelined +Pipelining pipelining pkcs12 +PMCs pmessage +PMU +Pool2 +POP3 +POSIX +POV ppa:redislabs pre-conditions pre-configured @@ -842,28 +682,46 @@ pre-imported pre-loaded pre-populated pre-sharding +Predis prepend Prepend preprocessing prerequesits +Prev prev printf printf-alike +Prioglio privdata +PRNG probabilistically proc +Programm +Programmability programmability programmatically programmatically-generated pseudorandom PSR-4 +Pubsub pubsub +PubSub-related +Pubsub. 
+PV +Pydantic qsort -quickstarts queueing +quickstarts +R1 +R2 radix rc +RC1 +RC3 +RC4 +RDB rdb-preamble +RDB-saving rdd rdd-1 rdd-2 @@ -878,12 +736,17 @@ realtime reauthenticate rebalance rebalancing +Rebranding reconfigurations reconfigures +Reconfiguring reconfiguring reconnection reconnections +Reddit's +Redimension redirections +REdis redis redis-benchmark redis-check-aof @@ -896,12 +759,35 @@ redis-macOS-demo redis-om-python redis-om-python. redis-py +Redis-rb redis-rb-cluster redis-server redis-stable +Redis-to-Lua +RedisCallReply +RedisConf +RediSearch +Redises +RedisHost. +RedisInsight +RedisJSON +REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD +RedisModule.* redisObjectVM +REDISPORT +Redisson +Redistributions +Redlock +Redlock-cpp +Redlock-cs +Redlock-php +Redlock-py +Redlock-rb +Redlock4Net +Redsync registerAPI reimplements +REPL repl-diskless-load repo representable @@ -913,53 +799,128 @@ rescanned reseek resends resetpass +Reshard reshard resharded +Resharding resharding reshardings +RESP2 +RESP2. +RESP3 +RESP3-typed +RESP3. 
+RESP3's +Resque resque resync resynchronization resynchronizations resyncs +RetainString retcode returing +Retwis +Retwis-J +Retwis-RB +RHEL +RM_.* +RM_CreateCommand +RM_CreateStringFromString +RM_IsKeysPositionRequest +RM_KeyAtPosWithFlags +RM_SetCommandInfo roadmap robj +Roshi roundtrips +RPC rpc-perf rpop +rpush +Rslock +RSS rss rtckit +RTT +RU101 +RU102 +RU202 runid runlevels +RW +Rx/Tx +S1 +S2 +S3 +S4 +SaaS +sadd +sadd_smembers +Sandboxed sandboxed +Sanfilippo +Sanfilippo's scalable +scard +ScarletLock +SCP +sdiff +SDOWN sdown sds se seeked +Selectable semantical serverCron +setnx_xx +SHA-256 +SHA1 +SHA256 +Sharded sharded sharding +Shuttleworth si sidekiq +SIGBUS +SIGFPE +SIGILL +SIGINT signle +SIGSEGV +SIGTERM +sinter +sismember slave-reconf-done slave-reconf-inprog slave-reconf-sent +Slicehost slot1 slowlog smaps +SmartOS +smismember +Snapchat +Snapcraft snapd +Snapshotting snapshotting +Solaris-derived somekey +SomeOtherValue +Sonatype +SoundCloud spectrogram spellchecker-cli spiped spo sponsorships +srem +SSD +SSL +StackOverflow start-slot1 start-slot2 start-slotN @@ -969,14 +930,16 @@ status2 stdin storepass strace +StringDMA struct -structs -struct's struct-encoded +struct's +structs stunnel subcommand -subcommand's subcommand. 
+subcommand's +Subcommands subcommands subevent subevents @@ -984,15 +947,23 @@ suboptimal subsequence substring sudo +SVGs swapdb swappability +SYNC_RDB_START syncd syscall systemctl +T1 +T2 taskset +TCL tcmalloc +TCP tcp the-redis-keyspace +TLS +TLS-enabled tls-port tmp tmux @@ -1002,9 +973,17 @@ tradeoff tradeoffs transactional try-failover +Tthe +TTL +TTLs tty tunable +Twemproxy typemethods_ptr +UI +ULID +ULIds +ULIDs un-authenticated un-gated unclaimable @@ -1016,47 +995,74 @@ unix unlink unlinked unlinks +Unmodifiable unmodifiable unpause unreachability +Unregister unregister unregisters +Untrusted untrusted untuned +Unwatches unwatches urandom +URI +USD used_memory_scripts_eval userSession usr +UTF-8 utf8 utils v9 value-ptr +ValueN +Variadic variadic venv virginia +Virtualized virtualized +Vladev +VM vm vm-max-memory -vmSwapOneObject +VMs vmstat +vmSwapOneObject +VMware volatile-lru volatile-ttl +VPS vtype +WAITAOF +Westmere wget wherefrom whitespace whitespaces whos-using-redis WRONGTYPE +WSL +WSL2 +Xen +Xen-specific +Xeon xff +XMODEM +XSCAN +XYZ xzvf +YCSB +Yossi +Z1 +ZeroBrane zeroed-ACLs +Zhao ziplists -zset -ˈrɛd-ɪs -fsynced -fsyncs -WAITAOF -Redises -ition +ZMODEM +ZPOP +ZSET +zset \ No newline at end of file From 62dd1132514c4c25a5d039b5cc703b8d0f32543b Mon Sep 17 00:00:00 2001 From: savynorem Date: Tue, 15 Aug 2023 10:26:02 -0400 Subject: [PATCH 3/9] wordlist update with commands --- wordlist | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 64 insertions(+) diff --git a/wordlist b/wordlist index 257e9657de..8298ab47fd 100644 --- a/wordlist +++ b/wordlist @@ -151,6 +151,7 @@ cardinality CAS CAs casted +Castilla cd CentOS CFIELD @@ -332,6 +333,7 @@ foo0 foo1 foo2 formatter +france_location FreeBSD FreeString frequencyonly @@ -380,6 +382,7 @@ HashSets Haversine Healthcheck healthchecks +Henshaw Hexastore hexastore hget @@ -438,6 +441,7 @@ IPv4 IPv6 IS-MASTER-DOWN-BY-ADDR Istio +italy_racers Itamar iterable iteratively @@ 
-458,6 +462,7 @@ jpeg
 js
 JSON
 JSON-encoded
+JUSTID
 kB
 keepalive
 keylen
@@ -555,6 +560,8 @@ malloc
 MASTERDOWN
 Matteo
 matteocollina
+maxlen
+MAXLEN
 Maxmemory
 maxmemory
 Memcache
@@ -567,9 +574,11 @@ MessagePack
 metatag
 mget
 middleware
+MINID
 miranda
 misconfiguration
 misconfigured
+MKSTREAM
 modules-api-ref
 moduleType
 Movablekeys
@@ -580,6 +589,7 @@ multisets
 mutex
 mylist
 mymaster
+mystream
 myuser
 myzset
 namespace
@@ -608,6 +618,7 @@ nonprintable
 NOOP
 Noordhuis
 nopass
+Norem
 NoSQL
 notify-keyspace-events
 notifyKeyspaceEvent
@@ -689,6 +700,7 @@ preprocessing
 prerequesits
 Prev
 prev
+Prickett
 printf
 printf-alike
 Prioglio
@@ -714,6 +726,9 @@ queueing
 quickstarts
 R1
 R2
+race:france
+race:italy
+race:usa
 radix
 rc
 RC1
@@ -857,6 +872,7 @@ S4
 SaaS
 sadd
 sadd_smembers
+Sam-Bodden
 Sandboxed
 sandboxed
 Sanfilippo
@@ -930,6 +946,7 @@ status2
 stdin
 storepass
 strace
+stream_tutorial
 StringDMA
 struct
 struct-encoded
@@ -1047,12 +1064,59 @@ whos-using-redis
 WRONGTYPE
 WSL
 WSL2
+xack
+XACK
+xadd
+XADD
+xadd_2
+xadd_7
+xadd_bad_id
+xadd_id
+xautoclaim
+XAUTOCLAIM
+xautoclaim_cursor
+xclaim
+XCLAIM
+xdel
+XDEL
 Xen
 Xen-specific
 Xeon
 xff
+XGROUP
+xgroup_create
+xgroup_create_mkstream
+xgroup_read
+xgroup_read_bob
+xgroup_read_id
+xinfo
+XINFO
+xinfo_consumers
+xinfo_groups
+xlen
+XLEN
 XMODEM
+xpending
+XPENDING
+xpending_plus_minus
+xrange
+XRANGE
+xrange_all
+xrange_empty
+xrange_pending
+xrange_step_1
+xrange_step_2
+xrange_time
+xread
+XREAD
+XREADGROUP
+xread_block
+xrevrange
+XREVRANGE
 XSCAN
+xtrim
+XTRIM
+xtrim2
 XYZ
 xzvf
 YCSB

From 51d8fa50f6342a9bba7d5ed274d2f919d35e2db8 Mon Sep 17 00:00:00 2001
From: savynorem
Date: Tue, 15 Aug 2023 10:32:47 -0400
Subject: [PATCH 4/9] streams_tutorial -> stream_tutorial

---
 docs/data-types/streams.md | 52 +++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md
index 60d691ce92..d260ffbe63 100644
--- a/docs/data-types/streams.md
+++
b/docs/data-types/streams.md
@@ -45,7 +45,7 @@ See the [complete list of stream commands](https://redis.io/commands/?group=stre
 {{< /clients-example >}}
 
 * Read two stream entries starting at ID `1691765278160-0`:
-{{< clients-example streams_tutorial xrange >}}
+{{< clients-example stream_tutorial xrange >}}
 > XRANGE race:france 1691765278160-0 + COUNT 2
 1) 1) "1691765278160-0"
    2) 1) "rider"
@@ -118,7 +118,7 @@ The format of such IDs may look strange at first, and the gentle reader may wond
 
 If for some reason the user needs incremental IDs that are not related to time but are actually associated to another external system ID, as previously mentioned, the `XADD` command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples:
 
-{{< clients-example streams_toturial xadd_id >}}
+{{< clients-example stream_tutorial xadd_id >}}
 > XADD race:usa 0-1 racer Castilla
 0-1
 > XADD race:usa 0-2 racer Norem
@@ -127,14 +127,14 @@ If for some reason the user needs incremental IDs that are not related to time b
 
 Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one:
 
-{{< clients-example streams_toturial xadd_bad_id >}}
+{{< clients-example stream_tutorial xadd_bad_id >}}
 > XADD race:usa 0-1 racer Prickett
 (error) ERR The ID specified in XADD is equal or smaller than the target stream top item
 {{< /clients-example >}}
 
 If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated.
To do this, use the syntax below:
 
-{{< clients-example streams_toturial xadd_7 >}}
+{{< clients-example stream_tutorial xadd_7 >}}
 > XADD race:usa 0-* racer Prickett
 0-3
 {{< /clients-example >}}
 
@@ -153,7 +153,7 @@ Redis Streams support all three of the query modes described above via different
 To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible.
 
-{{< clients-example streams_toturial xrange_all >}}
+{{< clients-example stream_tutorial xrange_all >}}
 > XRANGE race:france - +
 1) 1) "1691762745152-0"
    2) 1) "rider"
@@ -195,7 +195,7 @@ To query the stream by range we are only required to specify two IDs, *start* an
 
 Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way.
For instance, if I want to query a two milliseconds period I could use:
 
-{{< clients-example streams_toturial xrange_time >}}
+{{< clients-example stream_tutorial xrange_time >}}
 > XRANGE race:france 1691765375864 1691765375866
 1) 1) "1691765375865-0"
    2) 1) "rider"
@@ -210,7 +210,7 @@ Each entry returned is an array of two items: the ID and the list of field-value
 
 I have only a single entry in this range, however in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2.
 
-{{< clients-example streams_toturial xrange_step_1 >}}
+{{< clients-example stream_tutorial xrange_step_1 >}}
 > XRANGE race:france - + COUNT 2
 1) 1) "1691762745152-0"
    2) 1) "rider"
@@ -234,7 +234,7 @@ I have only a single entry in this range, however in real data sets, I could que
 
 In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1691765278160-0` and add the prefix `(` to it.
The resulting exclusive range interval, that is `(1691765278160-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call:
 
-{{< clients-example streams_toturial xrange_step_2 >}}
+{{< clients-example stream_tutorial xrange_step_2 >}}
 > XRANGE race:france (1691765278160-0 + COUNT 2
 1) 1) "1691765289770-0"
    2) 1) "rider"
@@ -258,7 +258,7 @@ In order to continue the iteration with the next two items, I have to pick the l
 
 Now that we've gotten 4 items out of a stream that only had 4 things in it, if we try to get more items, we'll get an empty array:
 
-{{< clients-example streams_toturial xrange_empty >}}
+{{< clients-example stream_tutorial xrange_empty >}}
 > XRANGE race:france (1691765375865-0 + COUNT 2
 (empty array)
 {{< /clients-example >}}
 
@@ -267,7 +267,7 @@ Since `XRANGE` complexity is *O(log(N))* to seek, and then *O(M)* to return M el
 
 The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements in inverted order, so a practical use for `XREVRANGE` is to check what is the last item in a Stream:
 
-{{< clients-example streams_toturial xrevrange >}}
+{{< clients-example stream_tutorial xrevrange >}}
 > XREVRANGE race:france + - COUNT 1
 1) 1) "1691765375865-0"
    2) 1) "rider"
@@ -292,7 +292,7 @@ When we do not want to access items by a range in a stream, usually what we want
 
 The command that provides the ability to listen for new messages arriving into a stream is called `XREAD`. It's a bit more complex than `XRANGE`, so we'll start showing simple forms, and later the whole command layout will be provided.
 
-{{< clients-example streams_toturial xread >}}
+{{< clients-example stream_tutorial xread >}}
 > XREAD COUNT 2 STREAMS race:france 0
 1) 1) "race:france"
    2) 1) 1) "1691762745152-0"
@@ -391,7 +391,7 @@ Now it's time to zoom in to see the fundamental consumer group commands.
They ar
 Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following:
 
-{{< clients-example streams_toturial xgroup_create >}}
+{{< clients-example stream_tutorial xgroup_create >}}
 > XGROUP CREATE race:france france_location $
 OK
 {{< /clients-example >}}
 
@@ -400,7 +400,7 @@ As you can see in the command above when creating the consumer group we have to
 
 `XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument:
 
-{{< clients-example streams_toturial xgroup_create_mkstream >}}
+{{< clients-example stream_tutorial xgroup_create_mkstream >}}
 > XGROUP CREATE race:italy italy_racers $ MKSTREAM
 OK
 {{< /clients-example >}}
 
@@ -412,7 +412,7 @@ Now that the consumer group is created we can immediately try to read messages v
 We'll add racers to the race:italy stream and try reading something using the consumer group:
 Note: *here racer is the field name, and the name is the associated value, remember that stream items are small dictionaries.*
 
-{{< clients-example streams_toturial xgroup_read >}}
+{{< clients-example stream_tutorial xgroup_read >}}
 > XADD race:italy * racer Castilla
 "1691766245113-0"
 > XADD race:italy * racer Royce
@@ -441,7 +441,7 @@ This is almost always what you want, however it is also possible to specify a re
 
-We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about apples:
+We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about Castilla:
 
-{{< clients-example streams_toturial xgroup_read_id >}}
+{{< clients-example stream_tutorial xgroup_read_id >}}
 > XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0
 1) 1) "race:italy"
    2) 1) 1) "1691766245113-0"
@@ -451,7 +451,7 @@ We can test this behavior immediately specifying an ID of 0, without any **COUNT
 
 However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so
the system will no longer report anything: -{{< clients-example streams_toturial xack >}} +{{< clients-example stream_toturial xack >}} > XACK race:italy italy_racers 1691766245113-0 (integer) 1 > XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0 @@ -463,7 +463,7 @@ Don't worry if you yet don't know how `XACK` works, the idea is just that proces Now it's Bob's turn to read something: -{{< clients-example streams_toturial xgroup_read_bob >}} +{{< clients-example stream_toturial xgroup_read_bob >}} > XREADGROUP GROUP italy_racers Bob COUNT 2 STREAMS race:italy > 1) 1) "race:italy" 2) 1) 1) "1691766256307-0" @@ -555,7 +555,7 @@ The first step of this process is just a command that provides observability of This is a read-only command which is always safe to call and will not change ownership of any message. In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group. -{{< clients-example streams_toturial xpending >}} +{{< clients-example stream_toturial xpending >}} > XPENDING race:italy italy_racers 1) (integer) 2 2) "1691766256307-0" @@ -575,7 +575,7 @@ XPENDING [[IDLE ] [ By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but won't use this feature in the following example. -{{< clients-example streams_toturial xpending_plus_minus >}} +{{< clients-example stream_toturial xpending_plus_minus >}} > XPENDING race:italy italy_racers - + 10 1) 1) "1691766256307-0" 2) "Bob" @@ -592,7 +592,7 @@ We have two messages from Bob, and they are idle for 60000+ milliseconds, about Note that nobody prevents us from checking what the first message content was by just using `XRANGE`. 
-{{< clients-example streams_toturial xrange_pending >}} +{{< clients-example stream_toturial xrange_pending >}} > XRANGE race:italy 1691766256307-0 1691766256307-0 1) 1) "1691766256307-0" 2) 1) "racer" @@ -618,7 +618,7 @@ However, as a side effect, claiming a message will reset its idle time and will This is the result of the command execution: -{{< clients-example streams_toturial xclaim >}} +{{< clients-example stream_toturial xclaim >}} > XCLAIM race:italy italy_racers Alice 60000 1691766256307-0 1) 1) "1691766256307-0" 2) 1) "racer" @@ -646,7 +646,7 @@ XAUTOCLAIM [COUNT count] [JUSTI So, in the example above, I could have used automatic claiming to claim a single message like this: -{{< clients-example streams_toturial xautoclaim >}} +{{< clients-example stream_toturial xautoclaim >}} > XAUTOCLAIM race:italy italy_racers Alice 60000 0-0 COUNT 1 1) "1691766261145-0" 2) 1) 1) "1691766256307-0" @@ -657,7 +657,7 @@ So, in the example above, I could have used automatic claiming to claim a single Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. The stream ID is a cursor, and I can use it in my next call to continue in claiming idle pending messages: -{{< clients-example streams_toturial xautoclaim_cursor >}} +{{< clients-example stream_toturial xautoclaim_cursor >}} > XAUTOCLAIM race:italy italy_racers Lora 60000 1526569498055-0 COUNT 1 1) "0-0" 2) 1) 1) "1691766261145-0" @@ -682,7 +682,7 @@ However we may want to do more than that, and the `XINFO` command is an observab This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM ** reports information about the stream itself. 
-{{< clients-example streams_toturial xinfo >}} +{{< clients-example stream_toturial xinfo >}} > XINFO STREAM race:italy 1) "length" 2) (integer) 5 @@ -706,7 +706,7 @@ This command uses subcommands in order to show different information about the s The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. Another piece of information available is the number of consumer groups associated with this stream. We can dig further asking for more information about the consumer groups. -{{< clients-example streams_toturial xinfo_groups >}} +{{< clients-example stream_toturial xinfo_groups >}} > XINFO GROUPS race:italy 1) 1) "name" 2) "italy_racers" @@ -722,7 +722,7 @@ As you can see in this and in the previous output, the `XINFO` command outputs a The output of the example above, where the **GROUPS** subcommand is used, should be clear observing the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group. 
-{{< clients-example streams_toturial xinfo_consumers >}}} +{{< clients-example stream_toturial xinfo_consumers >}} > XINFO CONSUMERS race:italy italy_racers 1) 1) "name" 2) "Alice" From e627742daf779425fef136fc59aeb0ce4af15cef Mon Sep 17 00:00:00 2001 From: savynorem Date: Tue, 15 Aug 2023 10:34:31 -0400 Subject: [PATCH 5/9] wordlist fix --- wordlist | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/wordlist b/wordlist index 8298ab47fd..e3ceda78e5 100644 --- a/wordlist +++ b/wordlist @@ -17,6 +17,7 @@ 1s 1th 2GB +3am 300ms 30ms 32bit @@ -25,11 +26,14 @@ 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 3GB 3MB +4am 4GB 4k +5am 500MB 512MB 5GB +6am 60s 6sync 80ms @@ -131,6 +135,7 @@ Bitwise bitwise bitwise bool +Booleans BPF BPF-optimized BPF's From 96614a800854d72bd22fc0966cbfb0960983e8e7 Mon Sep 17 00:00:00 2001 From: Savannah Date: Tue, 15 Aug 2023 10:38:50 -0400 Subject: [PATCH 6/9] hopefully the last wordlist fix --- wordlist | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/wordlist b/wordlist index 4639e02bee..c1aaf85ed6 100644 --- a/wordlist +++ b/wordlist @@ -974,7 +974,7 @@ suboptimal subsequence substring sudo -SVGsSVGs +SVGs superset swapdb swappability @@ -1142,4 +1142,4 @@ ziplists ZMODEM ZPOP ZSET -zset \ No newline at end of file +zset From caa1987e56efc20feef359a8a0f7ff01c85782c3 Mon Sep 17 00:00:00 2001 From: Savannah Date: Mon, 21 Aug 2023 11:59:35 -0400 Subject: [PATCH 7/9] update stream IDs to be more consistent, add caveat about various implementations of max length approximation --- docs/data-types/streams.md | 232 +++++++++++++++++++------------------ 1 file changed, 117 insertions(+), 115 deletions(-) diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md index d260ffbe63..9371f5cabb 100644 --- a/docs/data-types/streams.md +++ b/docs/data-types/streams.md @@ -37,32 +37,32 @@ See the [complete list of stream commands](https://redis.io/commands/?group=stre * When our racers pass a checkpoint, we add a stream 
entry for each racer that includes the racer's name, speed, position, and location ID: {{< clients-example stream_tutorial xadd >}} > XADD race:france * rider Castilla speed 30.2 position 1 location_id 1 -"1691762745152-0" +"1692632086370-0" > XADD race:france * rider Norem speed 28.8 position 3 location_id 1 -"1691765278160-0" +"1692632094485-0" > XADD race:france * rider Prickett speed 29.7 position 2 location_id 1 -"1691765289770-0" +"1692632102976-0" {{< /clients-example >}} -* Read two stream entries starting at ID `1691765278160-0`: +* Read two stream entries starting at ID `1692632086370-0`: {{< clients-example stream_tutorial xrange >}} -> XRANGE race:france 1691765278160-0 + COUNT 2 -1) 1) "1691765278160-0" +> XRANGE race:france 1692632086370-0 + COUNT 2 +1) 1) "1692632086370-0" 2) 1) "rider" - 2) "Norem" + 2) "Castilla" 3) "speed" - 4) "28.8" + 4) "30.2" 5) "position" - 6) "3" + 6) "1" 7) "location_id" 8) "1" -2) 1) "1691765289770-0" +2) 1) "1692632094485-0" 2) 1) "rider" - 2) "Prickett" + 2) "Norem" 3) "speed" - 4) "29.7" + 4) "28.8" 5) "position" - 6) "2" + 6) "3" 7) "location_id" 8) "1" {{< /clients-example >}} @@ -92,10 +92,10 @@ Each stream entry consists of one or more field-value pairs, somewhat like a dic {{< clients-example stream_tutorial xadd_2 >}} > XADD race:france * rider Castilla speed 29.9 position 1 location_id 2 -"1691765375865-0" +"1692632147973-0" {{< /clients-example >}} -The above call to the `XADD` command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1691762745152-0`. It gets as its first argument the key name `race:france`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. 
Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back at our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
+The above call to the `XADD` command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1692632147973-0`. Its first argument is the key name `race:france`; its second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID is monotonically increasing, so, in simpler terms, every entry added will have a higher ID than all past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used to identify a given entry. Returning to our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
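To make the ID rule above concrete, here is a tiny sketch of how a server could keep `<millisecondsTime>-<sequenceNumber>` IDs monotonic. This is a hypothetical Python illustration, not Redis internals and not redis-py:

```python
# Illustrative sketch: stream IDs are "<ms>-<seq>" pairs that must only
# ever grow. If the clock has not moved past the last entry's millisecond
# part, the sequence number is incremented instead.
def next_stream_id(last_id: str, now_ms: int) -> str:
    last_ms, last_seq = (int(part) for part in last_id.split("-"))
    if now_ms > last_ms:
        return f"{now_ms}-0"            # fresh millisecond, sequence restarts
    return f"{last_ms}-{last_seq + 1}"  # same (or earlier) ms: bump sequence

# Clock advanced: new millisecond part, sequence 0.
print(next_stream_id("1692632086370-0", 1692632086371))  # 1692632086371-0
# Two entries in the same millisecond: sequence part increments.
print(next_stream_id("1692632086370-0", 1692632086370))  # 1692632086370-1
```

Note how the second case also covers a clock that moves backwards: the ID keeps the old millisecond part rather than going down.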
It is possible to get the number of items inside a Stream just using the `XLEN` command: @@ -155,7 +155,7 @@ To query the stream by range we are only required to specify two IDs, *start* an {{< clients-example stream_toturial xrange_all >}} > XRANGE race:france - + -1) 1) "1691762745152-0" +1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -164,7 +164,7 @@ To query the stream by range we are only required to specify two IDs, *start* an 6) "1" 7) "location_id" 8) "1" -2) 1) "1691765278160-0" +2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" @@ -173,7 +173,7 @@ To query the stream by range we are only required to specify two IDs, *start* an 6) "3" 7) "location_id" 8) "1" -3) 1) "1691765289770-0" +3) 1) "1692632102976-0" 2) 1) "rider" 2) "Prickett" 3) "speed" @@ -182,7 +182,7 @@ To query the stream by range we are only required to specify two IDs, *start* an 6) "2" 7) "location_id" 8) "1" -4) 1) "1691765375865-0" +4) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -196,23 +196,23 @@ To query the stream by range we are only required to specify two IDs, *start* an Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. 
This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, if I want to query a two milliseconds period I could use: {{< clients-example stream_toturial xrange_time >}} -> XRANGE race:france 1691765375864 1691765375866 -1) 1) "1691765375865-0" +> XRANGE race:france 1692632086369 1692632086371 +1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" - 4) "29.9" + 4) "30.2" 5) "position" 6) "1" 7) "location_id" - 8) "2" + 8) "1" {{< /clients-example >}} I have only a single entry in this range, however in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. {{< clients-example stream_toturial xrange_step_1 >}} > XRANGE race:france - + COUNT 2 -1) 1) "1691762745152-0" +1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -221,7 +221,7 @@ I have only a single entry in this range, however in real data sets, I could que 6) "1" 7) "location_id" 8) "1" -2) 1) "1691765278160-0" +2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" @@ -232,11 +232,11 @@ I have only a single entry in this range, however in real data sets, I could que 8) "1" {{< /clients-example >}} -In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1691765278160-0` and add the prefix `(` to it. 
The resulting exclusive range interval, that is `(1691765278160-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: +In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0` and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: {{< clients-example stream_toturial xrange_step_2 >}} -> XRANGE race:france (1691765278160-0 + COUNT 2 -1) 1) "1691765289770-0" +> XRANGE race:france (1692632094485-0 + COUNT 2 +1) 1) "1692632102976-0" 2) 1) "rider" 2) "Prickett" 3) "speed" @@ -245,7 +245,7 @@ In order to continue the iteration with the next two items, I have to pick the l 6) "2" 7) "location_id" 8) "1" -2) 1) "1691765375865-0" +2) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -259,7 +259,7 @@ In order to continue the iteration with the next two items, I have to pick the l Now that we've gotten 4 items out of a stream that only had 4 things in it, if we try to get more items, we'll get an empty array: {{< clients-example stream_toturial xrange_empty >}} -> XRANGE race:france (1691765375865-0 + COUNT 2 +> XRANGE race:france (1692632147973-0 + COUNT 2 (empty array) {{< /clients-example >}} @@ -269,7 +269,7 @@ The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements {{< clients-example stream_toturial xrevrange >}} > XREVRANGE race:france + - COUNT 1 -1) 1) "1691765375865-0" +1) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -295,7 +295,7 @@ The command that provides the ability to listen for new messages arriving into a {{< clients-example stream_toturial xread >}} > XREAD COUNT 2 STREAMS race:france 0 1) 1) "race:france" - 2) 1) 1) "1691762745152-0" + 2) 1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" @@ -304,7 +304,7 @@ The command that provides the ability to listen for new 
messages arriving into a 6) "1" 7) "location_id" 8) "1" - 2) 1) "1691765278160-0" + 2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" @@ -392,7 +392,7 @@ Now it's time to zoom in to see the fundamental consumer group commands. They ar Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following: {{< clients-example stream_toturial xgroup_create >}} -> XGROUP CREATE race:france france_location $ +> XGROUP CREATE race:france france_riders $ OK {{< /clients-example >}} @@ -401,7 +401,7 @@ As you can see in the command above when creating the consumer group we have to `XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument: {{< clients-example stream_toturial xgroup_create_mkstream >}} -> XGROUP CREATE race:italy italy_racers $ MKSTREAM +> XGROUP CREATE race:italy italy_riders $ MKSTREAM OK {{< /clients-example >}} @@ -409,24 +409,24 @@ Now that the consumer group is created we can immediately try to read messages v `XREADGROUP` is very similar to `XREAD` and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in `XREAD`. 
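The `>` semantics described above can be modeled in a few lines. The sketch below is purely illustrative (an in-memory toy with made-up names, not redis-py and not how the server stores its pending entries list): reading with `>` delivers never-before-delivered entries and records them in the consumer's pending list, while an explicit ID replays that consumer's pending history.

```python
# Toy model of XREADGROUP delivery semantics (illustrative only).
class MiniGroup:
    def __init__(self, entries):
        self.entries = entries   # list of (id, fields) in stream order
        self.cursor = 0          # index of the group's last-delivered entry
        self.pending = {}        # consumer name -> list of unacknowledged ids

    def readgroup(self, consumer, id_, count=10):
        if id_ == ">":
            # Deliver new entries and remember them as pending for `consumer`.
            new = self.entries[self.cursor:self.cursor + count]
            self.cursor += len(new)
            self.pending.setdefault(consumer, []).extend(i for i, _ in new)
            return new
        # Explicit ID: replay this consumer's pending history instead
        # (the toy ignores the ID's value and replays the whole list).
        return [(i, f) for i, f in self.entries
                if i in self.pending.get(consumer, [])]

    def ack(self, consumer, id_):
        self.pending[consumer].remove(id_)

g = MiniGroup([("1-0", {"rider": "Castilla"}), ("2-0", {"rider": "Royce"})])
g.readgroup("Alice", ">", count=1)        # delivers 1-0, now pending for Alice
assert g.readgroup("Alice", "0") == [("1-0", {"rider": "Castilla"})]
g.ack("Alice", "1-0")                     # acknowledged: history is empty again
assert g.readgroup("Alice", "0") == []
```

The group-wide cursor is why a second consumer reading with `>` gets the *next* entries rather than the same ones.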
-We'll add racers to the race:italy stream and try reading something using the consumer group: -Note: *here racer is the field name, and the name is the associated value, remember that stream items are small dictionaries.* +We'll add riders to the race:italy stream and try reading something using the consumer group: +Note: *here rider is the field name, and the name is the associated value, remember that stream items are small dictionaries.* {{< clients-example stream_toturial xgroup_read >}} -> XADD race:italy * racer Castilla -"1691766245113-0" -> XADD race:italy * racer Royce -"1691766256307-0" -> XADD race:italy * racer Sam-Bodden -"1691766261145-0" -> XADD race:italy * racer Prickett -"1691766685178-0" -> XADD race:italy * racer Norem -"1691766698493-0" -> XREADGROUP GROUP italy_racers Alice COUNT 1 STREAMS race:italy > +> XADD race:italy * rider Castilla +"1692632639151-0" +> XADD race:italy * rider Royce +"1692632647899-0" +> XADD race:italy * rider Sam-Bodden +"1692632662819-0" +> XADD race:italy * rider Prickett +"1692632670501-0" +> XADD race:italy * rider Norem +"1692632678249-0" +> XREADGROUP GROUP italy_riders Alice COUNT 1 STREAMS race:italy > 1) 1) "race:italy" - 2) 1) 1) "1691766245113-0" - 2) 1) "racer" + 2) 1) 1) "1692632639151-0" + 2) 1) "rider" 2) "Castilla" {{< /clients-example >}} @@ -442,19 +442,19 @@ This is almost always what you want, however it is also possible to specify a re We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about apples: {{< clients-example stream_toturial xgroup_read_id >}} -> XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0 +> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 1) 1) "race:italy" - 2) 1) 1) "1691766245113-0" - 2) 1) "racer" + 2) 1) 1) "1692632639151-0" + 2) 1) "rider" 2) "Castilla" {{< /clients-example >}} However, if we acknowledge the message as processed, it will no longer be part of the 
pending messages history, so the system will no longer report anything: {{< clients-example stream_toturial xack >}} -> XACK race:italy italy_racers 1691766245113-0 +> XACK race:italy italy_riders 1692632639151-0 (integer) 1 -> XREADGROUP GROUP italy_racers Alice STREAMS race:italy 0 +> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 1) 1) "race:italy" 2) (empty array) {{< /clients-example >}} @@ -464,13 +464,13 @@ Don't worry if you yet don't know how `XACK` works, the idea is just that proces Now it's Bob's turn to read something: {{< clients-example stream_toturial xgroup_read_bob >}} -> XREADGROUP GROUP italy_racers Bob COUNT 2 STREAMS race:italy > +> XREADGROUP GROUP italy_riders Bob COUNT 2 STREAMS race:italy > 1) 1) "race:italy" - 2) 1) 1) "1691766256307-0" - 2) 1) "racer" + 2) 1) 1) "1692632647899-0" + 2) 1) "rider" 2) "Royce" - 2) 1) "1691766261145-0" - 2) 1) "racer" + 2) 1) "1692632662819-0" + 2) 1) "rider" 2) "Sam-Bodden" {{< /clients-example >}} @@ -556,10 +556,10 @@ This is a read-only command which is always safe to call and will not change own In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group. {{< clients-example stream_toturial xpending >}} -> XPENDING race:italy italy_racers +> XPENDING race:italy italy_riders 1) (integer) 2 -2) "1691766256307-0" -3) "1691766261145-0" +2) "1692632647899-0" +3) "1692632662819-0" 4) 1) 1) "Bob" 2) "2" {{< /clients-example >}} @@ -576,14 +576,14 @@ XPENDING [[IDLE ] [ By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but won't use this feature in the following example. 
{{< clients-example stream_toturial xpending_plus_minus >}} -> XPENDING race:italy italy_racers - + 10 -1) 1) "1691766256307-0" +> XPENDING race:italy italy_riders - + 10 +1) 1) "1692632647899-0" 2) "Bob" - 3) (integer) 60644 + 3) (integer) 74642 4) (integer) 1 -2) 1) "1691766261145-0" +2) 1) "1692632662819-0" 2) "Bob" - 3) (integer) 60644 + 3) (integer) 74642 4) (integer) 1 {{< /clients-example >}} @@ -593,9 +593,9 @@ We have two messages from Bob, and they are idle for 60000+ milliseconds, about Note that nobody prevents us from checking what the first message content was by just using `XRANGE`. {{< clients-example stream_toturial xrange_pending >}} -> XRANGE race:italy 1691766256307-0 1691766256307-0 -1) 1) "1691766256307-0" - 2) 1) "racer" +> XRANGE race:italy 1692632647899-0 1692632647899-0 +1) 1) "1692632647899-0" + 2) 1) "rider" 2) "Royce" {{< /clients-example >}} @@ -610,8 +610,8 @@ XCLAIM ... Basically we say, for this specific key and group, I want that the message IDs specified will change ownership, and will be assigned to the specified consumer name ``. However, we also provide a minimum idle time, so that the operation will only work if the idle time of the mentioned messages is greater than the specified idle time. This is useful because maybe two clients are retrying to claim a message at the same time: ``` -Client 1: XCLAIM race:italy italy_racers Alice 60000 1691766256307-0 -Client 2: XCLAIM race:italy italy_racers Lora 60000 1691766256307-0 +Client 1: XCLAIM race:italy italy_riders Alice 60000 1692632647899-0 +Client 2: XCLAIM race:italy italy_riders Lora 60000 1692632647899-0 ``` However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing). 
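Why can only one of the two competing `XCLAIM` calls succeed? The check-then-reset behavior can be sketched like this (an illustrative Python model with made-up names, not the server's code):

```python
# Illustrative model of XCLAIM's min-idle-time check (not Redis internals).
def xclaim(pel, msg_id, new_owner, min_idle_ms, now_ms):
    entry = pel[msg_id]
    idle = now_ms - entry["delivery_time"]
    if idle < min_idle_ms:
        return None                      # claimed too recently: refuse
    entry["owner"] = new_owner
    entry["delivery_time"] = now_ms      # a successful claim resets idle time
    entry["deliveries"] += 1             # and bumps the delivery counter
    return msg_id

pel = {"1692632647899-0": {"owner": "Bob", "delivery_time": 0, "deliveries": 1}}
# Alice claims first: the message has been idle well over 60000 ms.
assert xclaim(pel, "1692632647899-0", "Alice", 60000, 100000) is not None
# Lora retries moments later: idle time was just reset, so her claim fails.
assert xclaim(pel, "1692632647899-0", "Lora", 60000, 100005) is None
```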
@@ -619,9 +619,9 @@ However, as a side effect, claiming a message will reset its idle time and will This is the result of the command execution: {{< clients-example stream_toturial xclaim >}} -> XCLAIM race:italy italy_racers Alice 60000 1691766256307-0 -1) 1) "1691766256307-0" - 2) 1) "racer" +> XCLAIM race:italy italy_riders Alice 60000 1692632647899-0 +1) 1) "1692632647899-0" + 2) 1) "rider" 2) "Royce" {{< /clients-example >}} @@ -647,22 +647,22 @@ XAUTOCLAIM [COUNT count] [JUSTI So, in the example above, I could have used automatic claiming to claim a single message like this: {{< clients-example stream_toturial xautoclaim >}} -> XAUTOCLAIM race:italy italy_racers Alice 60000 0-0 COUNT 1 -1) "1691766261145-0" -2) 1) 1) "1691766256307-0" - 2) 1) "racer" - 2) "Royce" +> XAUTOCLAIM race:italy italy_riders Alice 60000 0-0 COUNT 1 +1) "0-0" +2) 1) 1) "1692632662819-0" + 2) 1) "rider" + 2) "Sam-Bodden" {{< /clients-example >}} Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. The stream ID is a cursor, and I can use it in my next call to continue in claiming idle pending messages: {{< clients-example stream_toturial xautoclaim_cursor >}} -> XAUTOCLAIM race:italy italy_racers Lora 60000 1526569498055-0 COUNT 1 -1) "0-0" -2) 1) 1) "1691766261145-0" - 2) 1) "racer" - 2) "Sam-Bodden" +> XAUTOCLAIM race:italy italy_riders Lora 60000 (1692632662819-0 COUNT 1 +1) "1692632662819-0" +2) 1) 1) "1692632647899-0" + 2) 1) "rider" + 2) "Royce" {{< /clients-example >}} When `XAUTOCLAIM` returns the "0-0" stream ID as a cursor, that means that it reached the end of the consumer group pending entries list. 
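The cursor loop described above — call, collect the claimed entries, feed the returned ID back in until `0-0` comes back — can be sketched as follows (an illustrative in-memory stand-in for `XAUTOCLAIM`, not redis-py):

```python
# Toy XAUTOCLAIM-style iteration over a pending entries list (illustrative).
def autoclaim(pending_ids, cursor, count):
    start = 0 if cursor == "0-0" else pending_ids.index(cursor)
    claimed = pending_ids[start:start + count]
    if start + count < len(pending_ids):
        next_cursor = pending_ids[start + count]  # where the next call resumes
    else:
        next_cursor = "0-0"                       # end of the pending list
    return next_cursor, claimed

ids = ["1-0", "2-0", "3-0"]
cursor, seen = "0-0", []
while True:
    cursor, claimed = autoclaim(ids, cursor, 1)
    seen += claimed
    if cursor == "0-0":      # "0-0" means the whole PEL has been scanned
        break
assert seen == ["1-0", "2-0", "3-0"]
```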
@@ -691,16 +691,16 @@ This command uses subcommands in order to show different information about the s 5) "radix-tree-nodes" 6) (integer) 2 7) "last-generated-id" - 8) "1691766698493-0" + 8) "1692632678249-0" 9) "groups" 10) (integer) 1 11) "first-entry" -12) 1) "1691766245113-0" - 2) 1) "racer" +12) 1) "1692632639151-0" + 2) 1) "rider" 2) "Castilla" 13) "last-entry" -14) 1) "1691766698493-0" - 2) 1) "racer" +14) 1) "1692632678249-0" + 2) 1) "rider" 2) "Norem" {{< /clients-example >}} @@ -709,13 +709,13 @@ The output shows information about how the stream is encoded internally, and als {{< clients-example stream_toturial xinfo_groups >}} > XINFO GROUPS race:italy 1) 1) "name" - 2) "italy_racers" + 2) "italy_riders" 3) "consumers" 4) (integer) 3 5) "pending" 6) (integer) 2 7) "last-delivered-id" - 8) "1691766261145-0" + 8) "1692632662819-0" {{< /clients-example >}} As you can see in this and in the previous output, the `XINFO` command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking compatibility with older clients. Other commands that must be more bandwidth efficient, like `XPENDING`, just report the information without the field names. @@ -723,25 +723,25 @@ As you can see in this and in the previous output, the `XINFO` command outputs a The output of the example above, where the **GROUPS** subcommand is used, should be clear observing the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group. 
{{< clients-example stream_toturial xinfo_consumers >}} -> XINFO CONSUMERS race:italy italy_racers +> XINFO CONSUMERS race:italy italy_riders 1) 1) "name" 2) "Alice" 3) "pending" 4) (integer) 1 5) "idle" - 6) (integer) 130215 + 6) (integer) 177546 2) 1) "name" 2) "Bob" 3) "pending" 4) (integer) 0 5) "idle" - 6) (integer) 2581506 + 6) (integer) 424686 3) 1) "name" 2) "Lora" 3) "pending" 4) (integer) 1 5) "idle" - 6) (integer) 102218 + 6) (integer) 72241 {{< /clients-example >}} In case you do not remember the syntax of the command, just ask the command itself for help: @@ -780,20 +780,20 @@ So basically Kafka partitions are more similar to using N different Redis keys, Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to store the history for, potentially, decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the `XADD` command. This option is very simple to use: {{< clients-example stream_tutorial maxlen >}} -> XADD race:italy MAXLEN 2 * racer Jones -"1691769379388-0" -> XADD race:italy MAXLEN 2 * racer Wood -"1691769438199-0" -> XADD race:italy MAXLEN 2 * racer Henshaw -"1691769502417-0" +> XADD race:italy MAXLEN 2 * rider Jones +"1692633189161-0" +> XADD race:italy MAXLEN 2 * rider Wood +"1692633198206-0" +> XADD race:italy MAXLEN 2 * rider Henshaw +"1692633208557-0" > XLEN race:italy (integer) 2 > XRANGE race:italy - + -1) 1) "1691769438199-0" - 2) 1) "racer" +1) 1) "1692633198206-0" + 2) 1) "rider" 2) "Wood" -2) 1) "1691769502417-0" - 2) 1) "racer" +2) 1) "1692633208557-0" + 2) 1) "rider" 2) "Henshaw" {{< /clients-example >}} @@ -805,18 +805,20 @@ However trimming with **MAXLEN** can be expensive: streams are represented by ma XADD race:italy MAXLEN ~ 1000 * ... entry fields here ... 
```
 
-The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want.
+The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. Note that client libraries implement this differently; for example, the Python client defaults to approximate trimming and must be told explicitly to trim to an exact length.
 
 There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself:
 
 {{< clients-example stream_tutorial xtrim >}}
 > XTRIM race:italy MAXLEN 10
+(integer) 0
 {{< /clients-example >}}
 
 Or, as for the `XADD` option:
 
 {{< clients-example stream_tutorial xtrim2 >}}
 > XTRIM mystream MAXLEN ~ 10
+(integer) 0
 {{< /clients-example >}}
 
 However, `XTRIM` is designed to accept different trimming strategies. Another trimming strategy is **MINID**, that evicts entries with IDs lower than the one specified.
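The whole-node trimming that `~` enables can be pictured with a miniature model. This is illustrative only — the real stream is a radix tree of listpack macro nodes, and none of the names below are Redis APIs:

```python
# Approximate trimming sketch: drop whole head nodes while doing so still
# leaves at least `maxlen` entries; never split a node. The stream may
# therefore keep slightly more than `maxlen` entries — that is the tradeoff
# for much cheaper trimming.
def trim_approx(nodes, maxlen):
    total = sum(len(node) for node in nodes)
    while nodes and total - len(nodes[0]) >= maxlen:
        total -= len(nodes.pop(0))   # cheap: discard an entire head node
    return total

nodes = [[1, 2, 3], [4, 5, 6], [7, 8]]   # 8 entries spread over 3 nodes
assert trim_approx(nodes, 4) == 5        # 5 entries remain, not exactly 4
assert nodes == [[4, 5, 6], [7, 8]]      # the next node was kept whole
```

Exact **MAXLEN** trimming, by contrast, would have to split that surviving node to land on 4 entries, which is the expensive case the text warns about.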
@@ -859,17 +861,17 @@ Streams also have a special command for removing items from the middle of a stre {{< clients-example stream_tutorial xdel >}} > XRANGE race:italy - + COUNT 2 -1) 1) "1691769438199-0" - 2) 1) "racer" +1) 1) "1692633198206-0" + 2) 1) "rider" 2) "Wood" -2) 1) "1691769502417-0" - 2) 1) "racer" +2) 1) "1692633208557-0" + 2) 1) "rider" 2) "Henshaw" -> XDEL race:italy 1691769502417-0 +> XDEL race:italy 1692633208557-0 (integer) 1 > XRANGE race:italy - + COUNT 2 -1) 1) "1691769438199-0" - 2) 1) "racer" +1) 1) "1692633198206-0" + 2) 1) "rider" 2) "Wood" {{< /clients-example >}} From 4ee5a0c12cb424d7c1083fb4bd6d3b2fdb0d91cc Mon Sep 17 00:00:00 2001 From: Savannah Date: Tue, 22 Aug 2023 10:30:26 -0400 Subject: [PATCH 8/9] address feedback from PR + spelling tutorial correctly --- docs/data-types/streams.md | 60 +++++++++++++++++++------------------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md index 9371f5cabb..9ca9041527 100644 --- a/docs/data-types/streams.md +++ b/docs/data-types/streams.md @@ -118,7 +118,7 @@ The format of such IDs may look strange at first, and the gentle reader may wond If for some reason the user needs incremental IDs that are not related to time but are actually associated to another external system ID, as previously mentioned, the `XADD` command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples: -{{< clients-example stream_toturial xadd_id >}} +{{< clients-example stream_tutorial xadd_id >}} > XADD race:usa 0-1 racer Castilla 0-1 > XADD race:usa 0-2 racer Norem @@ -127,14 +127,14 @@ If for some reason the user needs incremental IDs that are not related to time b Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one: -{{< clients-example stream_toturial xadd_bad_id >}} +{{< clients-example stream_tutorial xadd_bad_id >}} 
> XADD race:usa 0-1 racer Prickett (error) ERR The ID specified in XADD is equal or smaller than the target stream top item {{< /clients-example >}} If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below: -{{< clients-example stream_toturial xadd_7 >}} +{{< clients-example stream_tutorial xadd_7 >}} > XADD race:usa 0-* racer Prickett 0-3 {{< /clients-example >}} @@ -153,7 +153,7 @@ Redis Streams support all three of the query modes described above via different To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible. -{{< clients-example stream_toturial xrange_all >}} +{{< clients-example stream_tutorial xrange_all >}} > XRANGE race:france - + 1) 1) "1692632086370-0" 2) 1) "rider" @@ -195,7 +195,7 @@ To query the stream by range we are only required to specify two IDs, *start* an Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. 
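The ID semantics the hunks above document (every entry ID is `<ms>-<seq>`, and a range bound with the sequence part omitted defaults to `0` at the start of a range and to the maximum sequence at the end) can be sketched in pure Python. This is an illustrative model only, not part of the patch; the helper names `expand_range_id` and `in_range` are invented:

```python
# Model how XRANGE interprets partial stream IDs: a bare "ms" becomes
# ms-0 when used as a start bound and ms-<max seq> as an end bound.
MAX_SEQ = 2**64 - 1  # both halves of an ID are unsigned 64-bit integers

def expand_range_id(raw, is_start):
    """Interpret an XRANGE bound as a comparable (ms, seq) pair."""
    if raw == "-":                 # smallest possible ID
        return (0, 0)
    if raw == "+":                 # greatest possible ID
        return (MAX_SEQ, MAX_SEQ)
    if "-" in raw:                 # fully specified <ms>-<seq>
        ms, seq = raw.split("-")
        return (int(ms), int(seq))
    # Sequence omitted: 0 for a start bound, the maximum for an end bound.
    return (int(raw), 0 if is_start else MAX_SEQ)

def in_range(entry_id, start, end):
    """Inclusive containment test, matching XRANGE's range semantics."""
    eid = tuple(int(part) for part in entry_id.split("-"))
    return expand_range_id(start, True) <= eid <= expand_range_id(end, False)
```

With the two bare millisecond bounds from the `xrange_time` example below, `in_range("1692632086370-0", "1692632086369", "1692632086371")` is true, mirroring the inclusive two-millisecond query.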
This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, if I want to query a two milliseconds period I could use: -{{< clients-example stream_toturial xrange_time >}} +{{< clients-example stream_tutorial xrange_time >}} > XRANGE race:france 1692632086369 1692632086371 1) 1) "1692632086370-0" 2) 1) "rider" @@ -208,9 +208,9 @@ Each entry returned is an array of two items: the ID and the list of field-value 8) "1" {{< /clients-example >}} -I have only a single entry in this range, however in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. +I have only a single entry in this range. However in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. 
-{{< clients-example stream_toturial xrange_step_1 >}} +{{< clients-example stream_tutorial xrange_step_1 >}} > XRANGE race:france - + COUNT 2 1) 1) "1692632086370-0" 2) 1) "rider" @@ -232,9 +232,9 @@ I have only a single entry in this range, however in real data sets, I could que 8) "1" {{< /clients-example >}} -In order to continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0` and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: +To continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0`, and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: -{{< clients-example stream_toturial xrange_step_2 >}} +{{< clients-example stream_tutorial xrange_step_2 >}} > XRANGE race:france (1692632094485-0 + COUNT 2 1) 1) "1692632102976-0" 2) 1) "rider" @@ -256,9 +256,9 @@ In order to continue the iteration with the next two items, I have to pick the l 8) "2" {{< /clients-example >}} -Now that we've gotten 4 items out of a stream that only had 4 things in it, if we try to get more items, we'll get an empty array: +Now that we've retrieved 4 items out of a stream that only had 4 entries in it, if we try to retrieve more items, we'll get an empty array: -{{< clients-example stream_toturial xrange_empty >}} +{{< clients-example stream_tutorial xrange_empty >}} > XRANGE race:france (1692632147973-0 + COUNT 2 (empty array) {{< /clients-example >}} @@ -267,7 +267,7 @@ Since `XRANGE` complexity is *O(log(N))* to seek, and then *O(M)* to return M el The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements in inverted order, so a practical use for `XREVRANGE` is to check what is the last item in a Stream: 
-{{< clients-example stream_toturial xrevrange >}} +{{< clients-example stream_tutorial xrevrange >}} > XREVRANGE race:france + - COUNT 1 1) 1) "1692632147973-0" 2) 1) "rider" @@ -292,7 +292,7 @@ When we do not want to access items by a range in a stream, usually what we want The command that provides the ability to listen for new messages arriving into a stream is called `XREAD`. It's a bit more complex than `XRANGE`, so we'll start showing simple forms, and later the whole command layout will be provided. -{{< clients-example stream_toturial xread >}} +{{< clients-example stream_tutorial xread >}} > XREAD COUNT 2 STREAMS race:france 0 1) 1) "race:france" 2) 1) 1) "1692632086370-0" @@ -391,7 +391,7 @@ Now it's time to zoom in to see the fundamental consumer group commands. They ar Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following: -{{< clients-example stream_toturial xgroup_create >}} +{{< clients-example stream_tutorial xgroup_create >}} > XGROUP CREATE race:france france_riders $ OK {{< /clients-example >}} @@ -400,7 +400,7 @@ As you can see in the command above when creating the consumer group we have to `XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument: -{{< clients-example stream_toturial xgroup_create_mkstream >}} +{{< clients-example stream_tutorial xgroup_create_mkstream >}} > XGROUP CREATE race:italy italy_riders $ MKSTREAM OK {{< /clients-example >}} @@ -410,9 +410,9 @@ Now that the consumer group is created we can immediately try to read messages v `XREADGROUP` is very similar to `XREAD` and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. 
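The `XREADGROUP` behaviour described here (reading with `>` delivers only never-delivered entries and records them in the consumer's Pending Entries List, reading with an explicit ID replays that consumer's own pending history, and `XACK` removes entries from it) can be modelled with a small in-memory sketch. This is illustrative only and not part of the patch; `ToyGroup` and its attributes are invented names, not Redis internals:

```python
# Toy model of one consumer group on one stream: ">" delivers new
# entries and records them in the consumer's PEL; an explicit ID
# replays that consumer's PEL; ack removes entries from it.
class ToyGroup:
    def __init__(self, entries):
        self.entries = list(entries)  # [(entry_id, fields)] in ID order
        self.delivered = 0            # index analogue of last_delivered_id
        self.pel = {}                 # consumer -> {entry_id: fields}

    def read(self, consumer, last_id, count=None):
        if last_id == ">":            # never-delivered entries only
            batch = self.entries[self.delivered:]
            if count is not None:
                batch = batch[:count]
            self.delivered += len(batch)
            self.pel.setdefault(consumer, {}).update(dict(batch))
            return batch
        # Explicit ID: replay this consumer's pending entries after it.
        # (String comparison is fine for the fixed-width toy IDs used here.)
        return [(eid, fields)
                for eid, fields in self.pel.get(consumer, {}).items()
                if eid > last_id]

    def ack(self, consumer, entry_id):
        """Drop an entry from the consumer's PEL; returns 1 if it was pending."""
        return 1 if self.pel.get(consumer, {}).pop(entry_id, None) is not None else 0
```

As in the `xgroup_read_id` and `xack` examples that follow, a consumer who reads with `>` keeps seeing the entry when re-reading from `0` until it is acked.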
The option **COUNT** is also supported and is identical to the one in `XREAD`. We'll add riders to the race:italy stream and try reading something using the consumer group: -Note: *here rider is the field name, and the name is the associated value, remember that stream items are small dictionaries.* +Note: *here rider is the field name, and the name is the associated value. Remember that stream items are small dictionaries.* -{{< clients-example stream_toturial xgroup_read >}} +{{< clients-example stream_tutorial xgroup_read >}} > XADD race:italy * rider Castilla "1692632639151-0" > XADD race:italy * rider Royce @@ -441,7 +441,7 @@ This is almost always what you want, however it is also possible to specify a re We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about apples: -{{< clients-example stream_toturial xgroup_read_id >}} +{{< clients-example stream_tutorial xgroup_read_id >}} > XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 1) 1) "race:italy" 2) 1) 1) "1692632639151-0" @@ -451,7 +451,7 @@ We can test this behavior immediately specifying an ID of 0, without any **COUNT However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so the system will no longer report anything: -{{< clients-example stream_toturial xack >}} +{{< clients-example stream_tutorial xack >}} > XACK race:italy italy_riders 1692632639151-0 (integer) 1 > XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0 @@ -463,7 +463,7 @@ Don't worry if you yet don't know how `XACK` works, the idea is just that proces Now it's Bob's turn to read something: -{{< clients-example stream_toturial xgroup_read_bob >}} +{{< clients-example stream_tutorial xgroup_read_bob >}} > XREADGROUP GROUP italy_riders Bob COUNT 2 STREAMS race:italy > 1) 1) "race:italy" 2) 1) 1) "1692632647899-0" @@ -555,7 +555,7 @@ The first step of this process is just a 
command that provides observability of
This is a read-only command which is always safe to call and will not change ownership of any message. In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group.

-{{< clients-example stream_toturial xpending >}}
+{{< clients-example stream_tutorial xpending >}}
 > XPENDING race:italy italy_riders
 1) (integer) 2
 2) "1692632647899-0"
@@ -575,7 +575,7 @@ XPENDING <key> <groupname> [[IDLE <min-idle-time>] [<start-id> <end-id> <count> [<consumer-name>]]]
By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but won't use this feature in the following example.

-{{< clients-example stream_toturial xpending_plus_minus >}}
+{{< clients-example stream_tutorial xpending_plus_minus >}}
 > XPENDING race:italy italy_riders - + 10
 1) 1) "1692632647899-0"
    2) "Bob"
@@ -592,7 +592,7 @@ We have two messages from Bob, and they are idle for 60000+ milliseconds, about

Note that nobody prevents us from checking what the first message content was by just using `XRANGE`.
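The summary form of `XPENDING` shown in the hunk above reports the total pending count, the smallest and greatest pending IDs, and a per-consumer tally. A toy aggregation over `(entry_id, consumer)` pairs, with invented names, purely to illustrate the shape of that reply:

```python
# Compute an XPENDING summary-form style reply from a toy pending list.
def pending_summary(pel):
    """pel: list of (entry_id, consumer) pairs from a toy pending list."""
    if not pel:
        return (0, None, None, [])
    # Sort IDs numerically by (ms, seq), matching real stream ID order.
    key = lambda eid: tuple(int(part) for part in eid.split("-"))
    ids = sorted((eid for eid, _ in pel), key=key)
    per_consumer = {}
    for _, consumer in pel:
        per_consumer[consumer] = per_consumer.get(consumer, 0) + 1
    # XPENDING reports the per-consumer counts as strings.
    counts = sorted((name, str(n)) for name, n in per_consumer.items())
    return (len(pel), ids[0], ids[-1], counts)
```

Feeding it Bob's two pending entries from the transcript above reproduces the `2`, min-ID, max-ID, `("Bob", "2")` shape of the summary reply.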
-{{< clients-example stream_toturial xrange_pending >}}
+{{< clients-example stream_tutorial xrange_pending >}}
 > XRANGE race:italy 1692632647899-0 1692632647899-0
 1) 1) "1692632647899-0"
    2) 1) "rider"
@@ -618,7 +618,7 @@ However, as a side effect, claiming a message will reset its idle time and will
 This is the result of the command execution:

-{{< clients-example stream_toturial xclaim >}}
+{{< clients-example stream_tutorial xclaim >}}
 > XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
 1) 1) "1692632647899-0"
    2) 1) "rider"
@@ -646,7 +646,7 @@ XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT count] [JUSTID]
 So, in the example above, I could have used automatic claiming to claim a single message like this:

-{{< clients-example stream_toturial xautoclaim >}}
+{{< clients-example stream_tutorial xautoclaim >}}
 > XAUTOCLAIM race:italy italy_riders Alice 60000 0-0 COUNT 1
 1) "0-0"
 2) 1) 1) "1692632662819-0"
@@ -657,7 +657,7 @@ Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. The stream ID is a cursor, and I can use it in my next call to continue in claiming idle pending messages:

-{{< clients-example stream_toturial xautoclaim_cursor >}}
+{{< clients-example stream_tutorial xautoclaim_cursor >}}
 > XAUTOCLAIM race:italy italy_riders Lora 60000 (1692632662819-0 COUNT 1
 1) "1692632662819-0"
 2) 1) 1) "1692632647899-0"
@@ -682,7 +682,7 @@ However we may want to do more than that, and the `XINFO` command is an observab
 This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM <key>** reports information about the stream itself.
-{{< clients-example stream_toturial xinfo >}} +{{< clients-example stream_tutorial xinfo >}} > XINFO STREAM race:italy 1) "length" 2) (integer) 5 @@ -706,7 +706,7 @@ This command uses subcommands in order to show different information about the s The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. Another piece of information available is the number of consumer groups associated with this stream. We can dig further asking for more information about the consumer groups. -{{< clients-example stream_toturial xinfo_groups >}} +{{< clients-example stream_tutorial xinfo_groups >}} > XINFO GROUPS race:italy 1) 1) "name" 2) "italy_riders" @@ -722,7 +722,7 @@ As you can see in this and in the previous output, the `XINFO` command outputs a The output of the example above, where the **GROUPS** subcommand is used, should be clear observing the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group. -{{< clients-example stream_toturial xinfo_consumers >}} +{{< clients-example stream_tutorial xinfo_consumers >}} > XINFO CONSUMERS race:italy italy_riders 1) 1) "name" 2) "Alice" @@ -805,7 +805,7 @@ However trimming with **MAXLEN** can be expensive: streams are represented by ma XADD race:italy MAXLEN ~ 1000 * ... entry fields here ... ``` -The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this; for example the Python client defaults to approximate and has to explicitly be set to a true length. 
+The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this, For example, the Python client defaults to approximate and has to be explicitly set to a true length. There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself: From cd2e477991f98ddf47a65947298fb609407015d7 Mon Sep 17 00:00:00 2001 From: Savannah Date: Tue, 22 Aug 2023 10:39:35 -0400 Subject: [PATCH 9/9] Update docs/data-types/streams.md Co-authored-by: David Dougherty --- docs/data-types/streams.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md index 9ca9041527..378c99cd76 100644 --- a/docs/data-types/streams.md +++ b/docs/data-types/streams.md @@ -805,7 +805,7 @@ However trimming with **MAXLEN** can be expensive: streams are represented by ma XADD race:italy MAXLEN ~ 1000 * ... entry fields here ... ``` -The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this, For example, the Python client defaults to approximate and has to be explicitly set to a true length. +The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. 
It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this. For example, the Python client defaults to approximate and has to be explicitly set to a true length. There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself:
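The whole-node behaviour of `MAXLEN ~` described in the hunks above can be sketched as follows. This is not how the server implements trimming, and the node size is invented for illustration (real streams trim at radix-tree macro-node boundaries); it only shows why an approximate trim may keep slightly more than `maxlen` entries:

```python
# Sketch of exact vs. approximate (MAXLEN ~) trimming: the approximate
# form only drops whole nodes of entries, so the stream may briefly
# hold a few more than maxlen items, but never fewer.
NODE_SIZE = 10  # invented for illustration; not the real node capacity

def trim(entries, maxlen, approximate=False):
    """Return the entries kept after a MAXLEN trim; maxlen > 0 assumed."""
    if not approximate:
        return entries[-maxlen:]  # exact: keep precisely maxlen items
    kept = list(entries)
    # With ~, drop the oldest node only while >= maxlen entries remain.
    while len(kept) - NODE_SIZE >= maxlen:
        kept = kept[NODE_SIZE:]
    return kept
```

On 25 entries with `maxlen=8`, the exact form keeps exactly 8 while the approximate form stops at 15, once dropping another whole node would go below the limit.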