
Commit 9bd2c87

Merge pull request #3043 from omercelikceng/fix-typo
Fix typo in docs
2 parents 2aa8d49 + 7c396a6 commit 9bd2c87

10 files changed: +12, -12 lines changed


docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ You can override this default by using the `partitionSelectorExpression` or `par
 Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side.
 Kafka allocates partitions across the instances.
 
-NOTE: The partitionCount for a kafka topic may change during runtime (e.g. due to an adminstration task).
+NOTE: The partitionCount for a kafka topic may change during runtime (e.g. due to an administration task).
 The calculated partitions will be different after that (e.g. new partitions will be used then).
 Since 4.0.3 of Spring Cloud Stream runtime changes of partition count will be supported.
 See also parameter 'spring.kafka.producer.properties.metadata.max.age.ms' to configure update interval.
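
A minimal sketch of tuning that refresh interval (the five-minute value here is an arbitrary assumption):

```
# refresh producer topic metadata every five minutes so newly added partitions are picked up
spring.kafka.producer.properties.metadata.max.age.ms=300000
```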

docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [[retry-and-dlq-processing]]
 = Retry and Dead Letter Processing
 
-By default, when you configure retry (e.g. `maxAttemts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
+By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
 
 There are situations where it is preferable to move this functionality to the listener container, such as:
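
A minimal sketch of such a binding configuration (binding name and retry count are assumptions):

```
# retry each failed record up to three times inside the binder,
# then publish it to the dead letter topic
spring.cloud.stream.bindings.process-in-0.consumer.maxAttempts=3
spring.cloud.stream.kafka.bindings.process-in-0.consumer.enableDlq=true
```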

docs/modules/ROOT/pages/kafka/kafka-reactive-binder/consuming.adoc

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ spring.cloud.stream.kafka.bindings.lowercase-in-0.consumer.converterBeanName=ful
 ```
 
 `lowercase-in-0` is the input binding name for our `lowercase` function.
-For the outbound (`lowecase-out-0`), we still use the regular `MessagingMessageConverter`.
+For the outbound (`lowercase-out-0`), we still use the regular `MessagingMessageConverter`.
 
 In the `toMessage` implementation above, we receive the raw `ConsumerRecord` (`ReceiverRecord` since we are in a reactive binder context) and then wrap it inside a `Message`.
 Then that message payload which is the `ReceiverRecord` is provided to the user method.
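
For context, a minimal sketch of what such a converter might look like; the class name is hypothetical, and the bean registered under `converterBeanName` would be of this type:

```java
import java.lang.reflect.Type;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

// Hypothetical converter that exposes the entire record as the message payload.
public class FullRecordMessageConverter extends MessagingMessageConverter {

	@Override
	public Message<?> toMessage(ConsumerRecord<?, ?> record, Acknowledgment acknowledgment,
			Consumer<?, ?> consumer, Type payloadType) {
		// In the reactive binder the record is a ReceiverRecord; wrap it as-is
		// so the user method receives the full record rather than just its value.
		return MessageBuilder.withPayload(record).build();
	}
}
```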

docs/modules/ROOT/pages/kafka/kafka-reactive-binder/examples.adoc

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ In these cases, the acknowledgment header is not present.
 
 IMPORTANT: 4.0.2 also provided `reactiveAutoCommit`, but the implementation was incorrect, it behaved similarly to `reactiveAtMostOnce`.
 
-The following is an example of how to use `reaciveAutoCommit`.
+The following is an example of how to use `reactiveAutoCommit`.
 
 [source, java]
 ----
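
The hunk's code sample is truncated here; for orientation, a sketch of enabling the feature as a consumer binding property, assuming a binding named `lowercase-in-0` (verify the exact property path against your binder version):

```
spring.cloud.stream.kafka.bindings.lowercase-in-0.consumer.reactiveAutoCommit=true
```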

docs/modules/ROOT/pages/kafka/kafka-reactive-binder/pattern.adoc

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@
 = Destination is Pattern
 
 Starting with version 4.0.3, the `destination-is-pattern` Kafka binding consumer property is now supported.
-The receiver options are conigured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
+The receiver options are configured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
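
A minimal sketch (binding name and topic regex are assumptions):

```
# consume from every topic whose name matches orders-.*
spring.cloud.stream.bindings.process-in-0.destination=orders-.*
spring.cloud.stream.kafka.bindings.process-in-0.consumer.destination-is-pattern=true
```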

docs/modules/ROOT/pages/kafka/kafka-streams-binder/accessing-metrics.adoc

Lines changed: 1 addition & 1 deletion
@@ -8,5 +8,5 @@ For Spring Boot version 2.2.x, the metrics support is provided through a custom
 For Spring Boot version 2.3.x, the Kafka Streams metrics support is provided natively through Micrometer.
 
 When accessing metrics through the Boot actuator endpoint, make sure to add `metrics` to the property `management.endpoints.web.exposure.include`.
-Then you can access `/acutator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/<metric-name>`).
+Then you can access `/actuator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/<metric-name>`).
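
For instance, the relevant Boot configuration is a one-liner:

```
# expose the metrics actuator endpoint over HTTP
management.endpoints.web.exposure.include=metrics
```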

docs/modules/ROOT/pages/kafka/kafka-streams-binder/configuration-options.adoc

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ Default: See the discussion above on outbound partition support.
 producedAs::
 Custom name for the sink component to which the processor is producing to.
 +
-Deafult: `none` (generated by Kafka Streams)
+Default: `none` (generated by Kafka Streams)
 
 [[kafka-streams-consumer-properties]]
 == Kafka Streams Consumer Properties
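
A minimal sketch of setting it (binding and sink names are assumptions):

```
# name the sink node for the output binding instead of letting Kafka Streams generate one
spring.cloud.stream.kafka.streams.bindings.process-out-0.producer.producedAs=my-output-sink
```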

docs/modules/ROOT/pages/kafka/kafka-streams-binder/event-type-based-routing-in-applications.adoc

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ For instance, if we want to change the header key on this binding to `my_event`
 
 `spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey=my_event`.
 
-When using the event routing feature in Kafkfa Streams binder, it uses the byte array `Serde` to deserialze all incoming records.
+When using the event routing feature in Kafka Streams binder, it uses the byte array `Serde` to deserialize all incoming records.
 If the record headers match the event type, then only it uses the actual `Serde` to do a proper deserialization using either the configured or the inferred `Serde`.
 This introduces issues if you set a deserialization exception handler on the binding as the expected deserialization only happens down the stack causing unexpected errors.
 In order to address this issue, you can set the following property on the binding to force the binder to use the configured or inferred `Serde` instead of byte array `Serde`.

docs/modules/ROOT/pages/kafka/kafka-streams-binder/programming-model.adoc

Lines changed: 3 additions & 3 deletions
@@ -353,15 +353,15 @@ spring.cloud.function.definition=foo|bar;foo;bar
 
 The composed function's default binding names in this example becomes `foobar-in-0` and `foobar-out-0`.
 
-[[limitations-of-functional-composition-in-kafka-streams-bincer]]
-==== Limitations of functional composition in Kafka Streams bincer
+[[limitations-of-functional-composition-in-kafka-streams-binder]]
+==== Limitations of functional composition in Kafka Streams binder
 
 When you have `java.util.function.Function` bean, that can be composed with another function or multiple functions.
 The same function bean can be composed with a `java.util.function.Consumer` as well. In this case, consumer is the last component composed.
 A function can be composed with multiple functions, then end with a `java.util.function.Consumer` bean as well.
 
 When composing the beans of type `java.util.function.BiFunction`, the `BiFunction` must be the first function in the definition.
-The composed entities must be either of type `java.util.function.Function` or `java.util.funciton.Consumer`.
+The composed entities must be either of type `java.util.function.Function` or `java.util.function.Consumer`.
 In other words, you cannot take a `BiFunction` bean and then compose with another `BiFunction`.
 
 You cannot compose with types of `BiConsumer` or definitions where `Consumer` is the first component.
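
To ground those composition rules, a minimal sketch of two composable `Function` beans (stream types and logic are assumptions) that could be wired with `spring.cloud.function.definition=foo|bar`, yielding the `foobar-in-0`/`foobar-out-0` bindings mentioned in the hunk:

```java
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ComposedProcessors {

	// First component of the composition: drop records with null values.
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> foo() {
		return input -> input.filter((key, value) -> value != null);
	}

	// Second component, applied after foo when the definition is foo|bar.
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> bar() {
		return input -> input.mapValues(value -> value.toUpperCase());
	}
}
```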

docs/modules/ROOT/pages/kafka/kafka-streams-binder/streamsbuilderfactorybean-customizer.adoc

Lines changed: 1 addition & 1 deletion
@@ -104,7 +104,7 @@ If you have multiple processors, you want to attach the global state store to th
 == Using StreamsBuilderFactoryBeanConfigurer to register a production exception handler
 
 In the error handling section, we indicated that the binder does not provide a first class way to deal with production exceptions.
-Though that is the case, you can still use the `StreamsBuilderFacotryBean` customizer to register production exception handlers. See below.
+Though that is the case, you can still use the `StreamsBuilderFactoryBean` customizer to register production exception handlers. See below.
 
 ```
 @Bean
