Description

In #302, we removed per-event tracing due to the performance overhead it incurred. Instead of per-event tracing, we should trace Kafka reads and writes at the broker level. This avoids the performance penalty in high-throughput services.

Implementation

Kgo provides two hooks that can be used for this, OnBrokerWrite and OnBrokerRead:
// HookBrokerWrite is called after a write to a broker.
//
// Kerberos SASL does not cause write hooks, since it directly writes to the
// connection.
type HookBrokerWrite interface {
	// OnBrokerWrite is passed the broker metadata, the key for the request
	// that was written, the number of bytes that were written (may not be
	// the whole request if there was an error), how long the request
	// waited before being written (including throttling waiting), how long
	// it took to write the request, and any error.
	//
	// The bytes written does not count any tls overhead.
	OnBrokerWrite(meta BrokerMetadata, key int16, bytesWritten int, writeWait, timeToWrite time.Duration, err error)
}

// HookBrokerRead is called after a read from a broker.
//
// Kerberos SASL does not cause read hooks, since it directly reads from the
// connection.
type HookBrokerRead interface {
	// OnBrokerRead is passed the broker metadata, the key for the response
	// that was read, the number of bytes read (may not be the whole read
	// if there was an error), how long the client waited before reading
	// the response, how long it took to read the response, and any error.
	//
	// The bytes read does not count any tls overhead.
	OnBrokerRead(meta BrokerMetadata, key int16, bytesRead int, readWait, timeToRead time.Duration, err error)
}
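For reference, a minimal registration sketch, assuming franz-go's kgo.WithHooks option; the tracingHooks type, its package name, and the seed broker address are placeholders, and its method bodies would hold the filtering and span logic sketched further below:

package tracing

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

// tracingHooks is a placeholder carrier for the tracing logic; its two
// methods satisfy kgo.HookBrokerWrite and kgo.HookBrokerRead.
type tracingHooks struct{}

func (tracingHooks) OnBrokerWrite(meta kgo.BrokerMetadata, key int16, bytesWritten int, writeWait, timeToWrite time.Duration, err error) {
	// filter by key, then emit a backdated span (sketched below)
}

func (tracingHooks) OnBrokerRead(meta kgo.BrokerMetadata, key int16, bytesRead int, readWait, timeToRead time.Duration, err error) {
	// filter by key, then emit a backdated span (sketched below)
}

// newClient wires the hooks into the client; kgo.WithHooks accepts any value
// implementing one or more of the hook interfaces.
func newClient() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.WithHooks(tracingHooks{}),
	)
}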
Unless we filter by the operation key, all reads and writes will be traced. Writes must only trace these keys (see the filtering sketch after this list):
kmsg.Produce
kmsg.OffsetCommit
Reads must only trace these keys:
kmsg.Fetch
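A minimal filtering sketch, assuming the kmsg request-key constants (kmsg.Produce, kmsg.OffsetCommit, kmsg.Fetch) convert to the int16 key the hooks receive; the helper names are illustrative:

package tracing

import "github.com/twmb/franz-go/pkg/kmsg"

// shouldTraceWrite reports whether a broker write is worth a span:
// only Produce and OffsetCommit requests are traced.
func shouldTraceWrite(key int16) bool {
	switch key {
	case int16(kmsg.Produce), int16(kmsg.OffsetCommit):
		return true
	default:
		return false
	}
}

// shouldTraceRead reports whether a broker read is worth a span:
// only Fetch responses are traced.
func shouldTraceRead(key int16) bool {
	return key == int16(kmsg.Fetch)
}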
This may be a bit unorthodox, but we can simulate tracing by using timeToWrite and timeToRead respectively: start the span with a custom timestamp of time.Now().Add(-timeToWrite) (or time.Now().Add(-timeToRead)), then end the span right away after we've enriched it with the necessary data and Kafka messaging fields (see the kotel tracer for inspiration).
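A rough sketch of the backdated span using the OpenTelemetry Go API (trace.WithTimestamp sets the start time); the tracer name, span name, and attribute keys below are illustrative choices, not necessarily what kotel uses:

package tracing

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/trace"
)

// traceBrokerWrite emits a span whose start time is backdated by timeToWrite,
// so the span duration approximates the write itself; the span is ended
// immediately after being enriched with messaging attributes.
func traceBrokerWrite(host string, bytesWritten int, timeToWrite time.Duration, err error) {
	tracer := otel.Tracer("kafka-broker-io")

	// The hooks carry no context, so the span is started from a fresh one,
	// backdated by the time the write took.
	_, span := tracer.Start(context.Background(), "kafka.write",
		trace.WithSpanKind(trace.SpanKindProducer),
		trace.WithTimestamp(time.Now().Add(-timeToWrite)),
	)

	span.SetAttributes(
		attribute.String("messaging.system", "kafka"),
		attribute.String("network.peer.address", host),
		attribute.Int("messaging.message.body.size", bytesWritten),
	)
	if err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
	}
	span.End()
}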