Remove Protobuf Transitive Dependency #38
base: main
Conversation
Force-pushed from e0d8aee to 0b58bdb
```python
# We need to inspect the spans and group + structure them as:
#
# Resource
#   Instrumentation Library
```
Instrumentation Library -> Scope (the OTLP proto renamed `InstrumentationLibrary` to `InstrumentationScope`)
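For context, the structure being built is roughly the following. This is a minimal sketch; the `resource` and `scope` span attributes are illustrative assumptions, not this PR's actual API:

```python
from collections import defaultdict

def group_spans(spans):
    # Group as Resource -> Scope -> [spans]; real keys must be hashable.
    grouped = defaultdict(lambda: defaultdict(list))
    for span in spans:
        grouped[span.resource][span.scope].append(span)
    return grouped
```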
```python
def write_to(self, out: "ProtoSerializer") -> None:
    ...

class ProtoSerializer:
```
One optimization you could make is to pre-calculate the size of the binary array and pre-allocate it.
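A minimal sketch of that idea, using a list of byte chunks as a stand-in for the serialized proto fields; the two-pass measure-then-write shape is the point, and `serialize_chunks` is illustrative rather than part of this PR:

```python
def serialize_chunks(chunks: "list[bytes]") -> bytearray:
    # First pass: measure, so the buffer is allocated exactly once.
    total = sum(len(c) for c in chunks)
    buf = bytearray(total)
    view = memoryview(buf)  # zero-copy writes into slices of buf
    offset = 0
    # Second pass: write each chunk into its pre-computed slot.
    for c in chunks:
        view[offset:offset + len(c)] = c
        offset += len(c)
    return buf
```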
I tried changing my `serialize` implementation to write to a right-sized buffer (using `memoryview` for fast, zero-copy overwriting of slices). It was about 10-20% slower than my current implementation (you can see my changes here: https://github.com/sfc-gh-jopel/snowflake-telemetry-python/pull/1/files). The script below illustrates the difference:
```python
import timeit

def test1():
    # Pre-allocated, right-sized buffer written through a memoryview.
    b = bytearray(len(b"hello world") * 10000)
    m = memoryview(b)
    i = 0
    l = len(b"hello world")
    for _ in range(10000):
        m[i:i+l] = b"hello world"
        i += l
    return b

def test2():
    # Growing bytearray via extend(); chunks are written reversed and the
    # buffer is reversed at the end, matching the current implementation.
    b = bytearray()
    for _ in range(10000):
        b.extend(b"hello world"[::-1])
    return b[::-1]

def test3():
    # Same as test2, but growing via += instead of extend().
    b = bytearray()
    for _ in range(10000):
        b += b"hello world"[::-1]
    return b[::-1]

if __name__ == "__main__":
    print(timeit.timeit("test1()", setup="from __main__ import test1", number=1000))
    print(timeit.timeit("test2()", setup="from __main__ import test2", number=1000))
    print(timeit.timeit("test3()", setup="from __main__ import test3", number=1000))
```
Output:
```
1.3387652660021558
1.4347571920079645
1.276998276996892
```
I had the `extend` approach reverse the bytes before writing since that is what my current implementation does. The pre-allocated approach was only slightly faster than the `extend` approach, and slower than the `+=` approach. (I should switch `extend` to `+=` for a slight performance increase.) Calculating the size of the buffer would be more complex for our proto messages than what is done in this script, and would likely make it the slowest of the 3.
This result is puzzling to me because the `memoryview` approach should be more memory friendly. My guess is that the messages I am using in my benchmark (and probably most telemetry data) fit in cache, so the resize operation happens infrequently and is fast enough that it still outperforms the bounds-checking overhead of the right-sized approach.
I am not sure what the best thing to do here is. I think `memoryview` will outperform for huge logs and traces, but I don't think that's the typical use case.
Force-pushed from 077f82b to fa8d31e
Force-pushed from 02dd321 to 929b9d1
Splitting PRs
Since this PR is very large, each component will be split into smaller PRs to be reviewed and merged individually.
Changes:
Overview of files:
- `src/snowflake/telemetry/_internal/opentelemetry/proto/` is code generated by `scripts/proto_codegen.sh` to serialize the custom messages in the opentelemetry-proto spec
- `compile/` is the custom protoc compiler plugin that generates the files above, used by the `proto_codegen.sh` script
- `snowflake/telemetry/_internal/opentelemetry/exporter/otlp/proto/common/` is copied from opentelemetry-exporter-otlp-proto-common and slightly modified to use the newly defined message types
- `snowflake/telemetry/_internal/serialize/` is the custom serialization logic for protobuf primitive types (see the sketch after this list)
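For reference, the central primitive such a serializer needs is base-128 varint encoding. A minimal sketch of the technique, not the PR's actual implementation:

```python
def encode_varint(value: int) -> bytes:
    # Protobuf varints: little-endian base-128, 7 payload bits per byte,
    # high bit set on every byte except the last. Assumes value >= 0.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

assert encode_varint(300) == b"\xac\x02"  # example from the protobuf docs
```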
TODO:
- Choose between `opentelemetry.exporter.otlp.proto.common` and `snowflake.telemetry._internal.exporter.otlp.proto.common` depending on whether opentelemetry-exporter-otlp-proto-common==1.26.0 is available in the runtime, as sketched below. This will be done in a separate PR since it makes the changes / tests harder to review.
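A rough sketch of that planned switch; the try/except shape is an assumption about how it will land, and the version check against 1.26.0 is elided here:

```python
try:
    # Use the upstream package when it is installed (pinned to 1.26.0).
    import opentelemetry.exporter.otlp.proto.common as otlp_common
except ImportError:
    # Otherwise fall back to the copy vendored in this repository.
    import snowflake.telemetry._internal.exporter.otlp.proto.common as otlp_common
```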
Testing:
Benchmarking: