+#
+# interactive rebase in progress; onto c4d17e9f1
+# Last command done (1 command done):
+# pick 67f43ea6a initial commit
+# Next commands to do (1623 remaining commands):
+# pick f13c716af Merge in latest from datadog/dd-trace-py (#1)
+# pick ed6dd7f25 Removing support for python 2.7 (#2)
+# You are currently rebasing branch 'rem-pkg-res' on 'c4d17e9f1'.
+#
+# Changes to be committed:
+# modified: .gitignore
+# new file: CODEOWNERS
+# modified: LICENSE
+# modified: README.md
+#
+# Untracked files:
+# git
+# my_test_venv/
+# opentelemetry-python/
+# pip
+# python
+#
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b79d25492a..b685611d0a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,3 +1,4 @@
+<<<<<<< HEAD
# Contributing to opentelemetry-python-contrib
The Python special interest group (SIG) meets regularly. See the OpenTelemetry
@@ -7,40 +8,86 @@ information on this and other language SIGs.
See the [public meeting notes](https://docs.google.com/document/d/1CIMGoIOZ-c3-igzbd6_Pnxx1SjAkjwqoYSUWxPY8XIs/edit)
for a summary description of past meetings. To request edit access, join the
meeting or get in touch on [Slack](https://cloud-native.slack.com/archives/C01PD4HUVBL).
+=======
+# Contributing to opentelemetry-python
+
+The Python special interest group (SIG) meets weekly on Thursdays at 9AM PST. Check the [OpenTelemetry community calendar](https://calendar.google.com/calendar/embed?src=google.com_b79e3e90j7bbsa2n2p5an5lf60%40group.calendar.google.com) for specific dates and Zoom meeting links.
+
+See the [public meeting notes](https://docs.google.com/document/d/1CIMGoIOZ-c3-igzbd6_Pnxx1SjAkjwqoYSUWxPY8XIs/edit)
+for a summary description of past meetings.
+>>>>>>> upstream/main
See to the [community membership document](https://github.com/open-telemetry/community/blob/main/community-membership.md)
on how to become a [**Member**](https://github.com/open-telemetry/community/blob/main/community-membership.md#member),
[**Approver**](https://github.com/open-telemetry/community/blob/main/community-membership.md#approver)
and [**Maintainer**](https://github.com/open-telemetry/community/blob/main/community-membership.md#maintainer).
+<<<<<<< HEAD
+=======
+# Find the right repo
+
+This is the main repo for OpenTelemetry Python. However, there are other repos related to this project.
+Please take a look at this list first; your contribution may belong in one of these repos:
+
+1. [OpenTelemetry Contrib](https://github.com/open-telemetry/opentelemetry-python-contrib): Instrumentations for third-party
+ libraries and frameworks.
+
+>>>>>>> upstream/main
## Find a Buddy and get Started Quickly!
If you are looking for someone to help you find a starting point and be a resource for your first contribution, join our
Slack and find a buddy!
+<<<<<<< HEAD
1. Join [Slack](https://slack.cncf.io/) and join our [chat room](https://cloud-native.slack.com/archives/C01PD4HUVBL).
2. Post in the room with an introduction to yourself, what area you are interested in (check issues marked "Help Wanted"),
and say you are looking for a buddy. We will match you with someone who has experience in that area.
Your OpenTelemetry buddy is your resource to talk to directly on all aspects of contributing to OpenTelemetry: providing
context, reviewing PRs, and helping those get merged. Buddies will not be available 24/7, but are committed to responding during their normal contribution hours.
+=======
+1. Join [Slack](https://slack.cncf.io/) and join our [channel](https://cloud-native.slack.com/archives/C01PD4HUVBL).
+2. Post in the room with an introduction to yourself, what area you are interested in (check issues marked "Help Wanted"),
+and say you are looking for a buddy. We will match you with someone who has experience in that area.
+
+The Slack channel will be used for introductions and as an entry point for external people to be triaged and redirected. For
+discussions, please open up an issue or a GitHub [Discussion](https://github.com/open-telemetry/opentelemetry-python/discussions).
+
+Your OpenTelemetry buddy is your resource to talk to directly on all aspects of contributing to OpenTelemetry: providing
+context, reviewing PRs, and helping those get merged. Buddies will not be available 24/7, but are committed to responding
+during their normal contribution hours.
+>>>>>>> upstream/main
## Development
This project uses [tox](https://tox.readthedocs.io) to automate
some aspects of development, including testing against multiple Python versions.
+<<<<<<< HEAD
To install `tox`, run:
+=======
+To install `tox`, run[^1]:
+>>>>>>> upstream/main
```console
$ pip install tox==3.27.1
```
+<<<<<<< HEAD
+=======
+[^1]: Right now we are experiencing issues with `tox==4.x.y`, so we recommend using this pinned version.
+
+>>>>>>> upstream/main
You can run `tox` with the following arguments:
- `tox` to run all existing tox commands, including unit tests for all packages
under multiple Python versions
- `tox -e docs` to regenerate the API docs
+<<<<<<< HEAD
- `tox -e py37-test-instrumentation-aiopg` to run, e.g., the aiopg instrumentation unit tests under a specific
+=======
+- `tox -e opentelemetry-api` and `tox -e opentelemetry-sdk` to run the API and SDK unit tests
+- `tox -e py37-opentelemetry-api` to run, e.g., the API unit tests under a specific
+>>>>>>> upstream/main
Python version
- `tox -e spellcheck` to run a spellcheck on all the code
- `tox -e lint` to run lint checks on all code
@@ -51,6 +98,7 @@ An easier way to do so is:
1. Run `.tox/lint/bin/black .`
2. Run `.tox/lint/bin/isort .`
+<<<<<<< HEAD
See
[`tox.ini`](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tox.ini)
for more detail on available tox commands.
@@ -63,6 +111,57 @@ for more detail on available tox commands.
Performance progression of benchmarks for packages distributed by OpenTelemetry Python can be viewed as a [graph of throughput vs commit history](https://opentelemetry-python-contrib.readthedocs.io/en/latest/performance/benchmarks.html). From the linked page, you can download a JSON file with the performance results.
+=======
+We try to keep the number of _public symbols_ in our code minimal. A public symbol is any Python identifier that does not start with an underscore.
+Every public symbol is something that has to be kept in order to maintain backwards compatibility, so we try to have as few as possible.
+
+To check if your PR is adding public symbols, run `tox -e public-symbols-check`. This will always fail if public symbols are being added or removed. The idea
+behind this is that every PR that adds or removes public symbols fails in CI, forcing reviewers to check the symbols to make sure they are strictly necessary.
+If, after review, the symbols are considered necessary, the PR will be labeled with `Skip Public API check` so that this check is not
+run.
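+
+For illustration, a hypothetical module (the names here are made up, not from this codebase):
+
+``` python
+class Tracer:  # public symbol: no leading underscore, part of the API surface
+    pass
+
+
+class _TracerInternals:  # private symbol: free to change without breaking users
+    pass
+```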
+
+Also, we try to keep our console output as clean as possible. Most of the time this means catching expected log messages in the test cases:
+
+``` python
+from logging import WARNING
+
+...
+
+ def test_case(self):
+ with self.assertLogs(level=WARNING):
+ some_function_that_will_log_a_warning_message()
+```
+
+Other options are to disable logging propagation or to disable a logger altogether.
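+
+For example, a minimal sketch (assuming `some.module` is the logger that emits the expected message):
+
+``` python
+from logging import getLogger
+
+logger = getLogger("some.module")
+logger.propagate = False  # keep records from reaching ancestor handlers
+# or, to silence the logger entirely:
+logger.disabled = True
+```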
+
+A similar approach can be followed to catch warnings:
+
+``` python
+ def test_case(self):
+ with self.assertWarns(DeprecationWarning):
+ some_function_that_will_raise_a_deprecation_warning()
+```
+
+See
+[`tox.ini`](https://github.com/open-telemetry/opentelemetry-python/blob/main/tox.ini)
+for more detail on available tox commands.
+
+#### Contrib repo
+
+Some of the `tox` targets install packages from the [OpenTelemetry Python Contrib Repository](https://github.com/open-telemetry/opentelemetry-python-contrib.git) via
+pip. When `tox` is run locally, the installed packages default to the `main` branch of that repository. To install packages tagged
+with a specific git commit hash, set an environment variable before running tox, as in the following example:
+
+```
+CONTRIB_REPO_SHA=dde62cebffe519c35875af6d06fae053b3be65ec tox
+```
+
+The continuous integration overrides that environment variable as per the configuration
+[here](https://github.com/open-telemetry/opentelemetry-python/blob/main/.github/workflows/test.yml#L13).
+
+### Benchmarks
+
+>>>>>>> upstream/main
Running the `tox` tests also runs the performance tests if any are available. Benchmarking tests are done with `pytest-benchmark` and they output a table with results to the console.
To write benchmarks, simply use the [pytest benchmark fixture](https://pytest-benchmark.readthedocs.io/en/latest/usage.html#usage) like the following:
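A minimal sketch of such a benchmark (the tracer setup and span name are illustrative):

``` python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)


def test_simple_start_span(benchmark):
    def create_span():
        span = tracer.start_span("benchmarkedSpan")
        span.end()

    # pytest-benchmark calls the function repeatedly and reports timing statistics
    benchmark(create_span)
```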
@@ -82,32 +181,54 @@ def test_simple_start_span(benchmark):
Make sure the test file is under the `tests/performance/benchmarks/` folder of
the package it is benchmarking and further has a path that corresponds to the
file in the package it is testing. Make sure that the file name begins with
+<<<<<<< HEAD
`test_benchmark_`. (e.g. `propagator/opentelemetry-propagator-aws-xray/tests/performance/benchmarks/trace/propagation/test_benchmark_aws_xray_propagator.py`)
+=======
+`test_benchmark_`. (e.g. `opentelemetry-sdk/tests/performance/benchmarks/trace/propagation/test_benchmark_b3_format.py`)
+>>>>>>> upstream/main
## Pull Requests
### How to Send Pull Requests
+<<<<<<< HEAD
Everyone is welcome to contribute code to `opentelemetry-python-contrib` via GitHub
+=======
+Everyone is welcome to contribute code to `opentelemetry-python` via GitHub
+>>>>>>> upstream/main
pull requests (PRs).
To create a new PR, fork the project in GitHub and clone the upstream repo:
+<<<<<<< HEAD
```sh
$ git clone https://github.com/open-telemetry/opentelemetry-python-contrib.git
+=======
+```console
+$ git clone https://github.com/open-telemetry/opentelemetry-python.git
+>>>>>>> upstream/main
```
Add your fork as an origin:
+<<<<<<< HEAD
```sh
$ git remote add fork https://github.com/YOUR_GITHUB_USERNAME/opentelemetry-python-contrib.git
+=======
+```console
+$ git remote add fork https://github.com/YOUR_GITHUB_USERNAME/opentelemetry-python.git
+>>>>>>> upstream/main
```
Run tests:
```sh
# make sure you have all supported versions of Python installed
+<<<<<<< HEAD
$ pip install tox==3.27.1 # only first time.
+=======
+$ pip install tox # only first time.
+>>>>>>> upstream/main
$ tox # execute in the root of the repository
```
@@ -120,7 +241,29 @@ $ git commit
$ git push fork feature
```
+<<<<<<< HEAD
Open a pull request against the main `opentelemetry-python-contrib` repo.
+=======
+Open a pull request against the main `opentelemetry-python` repo.
+
+Pull requests are also tested for their compatibility with packages distributed
+by OpenTelemetry in the [OpenTelemetry Python Contrib Repository](https://github.com/open-telemetry/opentelemetry-python-contrib.git).
+
+If a pull request (PR) introduces a change that would break the compatibility of
+these packages with the Core packages in this repo, a separate PR should be
+opened in the Contrib repo with changes to make the packages compatible.
+
+Follow these steps:
+1. Open Core repo PR (Contrib Tests will fail)
+2. Open Contrib repo PR and modify its `CORE_REPO_SHA` in `.github/workflows/test.yml`
+to equal the commit SHA of the Core repo PR to pass tests
+3. Modify the Core repo PR `CONTRIB_REPO_SHA` in `.github/workflows/test.yml` to
+equal the commit SHA of the Contrib repo PR to pass Contrib repo tests (a sanity
+check for the Maintainers & Approvers)
+4. Merge the Contrib repo PR
+5. Restore the Core repo PR `CONTRIB_REPO_SHA` to point to `main`
+6. Merge the Core repo PR
+>>>>>>> upstream/main
### How to Receive Comments
@@ -128,6 +271,7 @@ Open a pull request against the main `opentelemetry-python-contrib` repo.
as `work-in-progress`, or mark it as [`draft`](https://github.blog/2019-02-14-introducing-draft-pull-requests/).
* Make sure CLA is signed and CI is clear.
+<<<<<<< HEAD
### How to Get PRs Reviewed
The maintainers and approvers of this repo are not experts in every instrumentation there is here.
@@ -139,6 +283,8 @@ files are opened.
If you are not getting reviews, please contact the respective owners directly.
+=======
+>>>>>>> upstream/main
### How to Get PRs Merged
A PR is considered to be **ready to merge** when:
@@ -146,13 +292,23 @@ A PR is considered to be **ready to merge** when:
/ [Maintainers](https://github.com/open-telemetry/community/blob/main/community-membership.md#maintainer)
(at different companies).
* Major feedback is resolved.
+<<<<<<< HEAD
+=======
+* All tests are passing, including Contrib Repo tests, which may require
+updating the GitHub workflow to reference a PR in the Contrib repo
+>>>>>>> upstream/main
* It has been open for review for at least one working day. This gives people
reasonable time to review.
* Trivial change (typo, cosmetic, doc, etc.) doesn't have to wait for one day.
* Urgent fix can take exception as long as it has been actively communicated.
+<<<<<<< HEAD
* A changelog entry is added to the corresponding changelog for the code base if there is any impact on behavior (e.g. doc entries are not required, but small bug entries are).
Any Approver / Maintainer can merge the PR once it is **ready to merge**.
+=======
+
+One of the maintainers will merge the PR once it is **ready to merge**.
+>>>>>>> upstream/main
## Design Choices
@@ -174,6 +330,7 @@ rather than conform to specific API names or argument patterns in the spec.
For a deeper discussion, see: https://github.com/open-telemetry/opentelemetry-specification/issues/165
+<<<<<<< HEAD
## Running Tests Locally
1. Go to your Contrib repo directory. `git clone git@github.com:open-telemetry/opentelemetry-python-contrib.git && cd opentelemetry-python-contrib`.
@@ -188,6 +345,17 @@ CORE_REPO_SHA=c49ad57bfe35cfc69bfa863d74058ca9bec55fc3 tox
The continuous integration overrides that environment variable as per the configuration [here](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/.github/workflows/test.yml#L9).
+=======
+### Environment Variables
+
+If you are adding a component that introduces new OpenTelemetry environment variables, put them all in a module,
+as it is done in `opentelemetry.environment_variables` or in `opentelemetry.sdk.environment_variables`.
+
+Keep in mind that any new environment variable must be declared in all caps and must start with `OTEL_PYTHON_`.
+
+Register this module with the `opentelemetry_environment_variables` entry point to make your environment variables
+automatically load as options for the `opentelemetry-instrument` command.
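+
+A minimal sketch of such a module (the variable name is hypothetical, not an existing option):
+
+``` python
+# my_component/environment_variables.py (illustrative module path)
+OTEL_PYTHON_MY_COMPONENT_TIMEOUT = "OTEL_PYTHON_MY_COMPONENT_TIMEOUT"
+"""
+.. envvar:: OTEL_PYTHON_MY_COMPONENT_TIMEOUT
+
+Timeout in seconds used by this hypothetical component.
+"""
+```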
+>>>>>>> upstream/main
## Style Guide
@@ -196,6 +364,7 @@ The continuation integration overrides that environment variable with as per the
as specified with the [napoleon
extension](http://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy)
in [Sphinx](http://www.sphinx-doc.org/en/master/index.html).
+<<<<<<< HEAD
## Guideline for instrumentations
@@ -227,3 +396,5 @@ Below is a checklist of things to be mindful of when implementing a new instrume
OpenTelemetry is an open source community, and as such, greatly encourages contributions from anyone interested in the project. With that being said, there is a certain level of expectation from contributors even after a pull request is merged, specifically pertaining to instrumentations. The OpenTelemetry Python community expects contributors to maintain a level of support and interest in the instrumentations they contribute. This is to ensure that the instrumentation does not become stale and still functions the way the original contributor intended. Some instrumentations also pertain to libraries that the current members of the community are not so familiar with, so it is necessary to rely on the expertise of the original contributing parties.
+=======
+>>>>>>> upstream/main
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 0000000000..325ba4b865
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,38 @@
+# Use Ubuntu 20.04 LTS as the base image
+FROM ubuntu:20.04
+
+# Avoid warnings by switching to noninteractive
+ENV DEBIAN_FRONTEND=noninteractive
+
+# This will make apt-get install without question
+ARG DEBIAN_FRONTEND=noninteractive
+
+# Install Python, pip, Git, and other utilities
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends software-properties-common \
+ && add-apt-repository ppa:deadsnakes/ppa \
+ && apt-get update \
+ && apt-get install -y --no-install-recommends python3.8 python3.8-distutils \
+ && apt-get install -y --no-install-recommends python3-pip python3.8-venv \
+ # Added Git installation here
+ && apt-get install -y --no-install-recommends git \
+ && python3.8 -m pip install --upgrade pip \
+ && apt-get clean \
+ && rm -rf /var/lib/apt/lists/*
+
+# Set the working directory in the container to /app
+WORKDIR /app
+
+# Copy the current directory contents into the container at /app
+COPY . /app
+
+# Install any needed packages specified in requirements.txt
+RUN python3.8 -m pip install -r dev-requirements.txt
+# If you have a separate requirements.txt, uncomment the line below
+# RUN python3.8 -m pip install -r requirements.txt
+
+# Make port 80 available to the world outside this container
+EXPOSE 80
+
+# Define the command to run the app (e.g., using pytest)
+CMD ["python3.8", "pytest"]
diff --git a/README.md b/README.md
index ce1f8f3df4..2bbf37f092 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,4 @@
+<<<<<<< HEAD
---
@@ -93,12 +94,112 @@ See [CONTRIBUTING.md](CONTRIBUTING.md)
We meet weekly on Thursday, and the time of the meeting alternates between 9AM PT and 4PM PT. The meeting is subject to change depending on contributors' availability. Check the [OpenTelemetry community calendar](https://calendar.google.com/calendar/embed?src=google.com_b79e3e90j7bbsa2n2p5an5lf60%40group.calendar.google.com) for specific dates and for the Zoom link.
Meeting notes are available as a public [Google doc](https://docs.google.com/document/d/1CIMGoIOZ-c3-igzbd6_Pnxx1SjAkjwqoYSUWxPY8XIs/edit). For edit access, get in touch on [GitHub Discussions](https://github.com/open-telemetry/opentelemetry-python/discussions).
+=======
+# OpenTelemetry Python
+[![Slack](https://img.shields.io/badge/slack-@cncf/otel/python-brightgreen.svg?logo=slack)](https://cloud-native.slack.com/archives/C01PD4HUVBL)
+[![Build Status](https://github.com/open-telemetry/opentelemetry-python/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/open-telemetry/opentelemetry-python/actions)
+[![Minimum Python Version](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)
+[![Release](https://img.shields.io/github/v/release/open-telemetry/opentelemetry-python?include_prereleases&style=)](https://github.com/open-telemetry/opentelemetry-python/releases/)
+[![Read the Docs](https://readthedocs.org/projects/opentelemetry-python/badge/?version=latest)](https://opentelemetry-python.readthedocs.io/en/latest/)
+
+## Project Status
+
+See the [OpenTelemetry Instrumentation for Python](https://opentelemetry.io/docs/instrumentation/python/#status-and-releases).
+
+| Signal | Status | Project |
+| ------- | ------------ | ------- |
+| Traces | Stable | N/A |
+| Metrics | Stable | N/A |
+| Logs | Experimental | N/A |
+
+Project versioning information and stability guarantees can be found [here](./rationale.md#versioning-and-releasing).
+
+## Getting started
+
+You can find the getting started guide for OpenTelemetry Python [here](https://opentelemetry.io/docs/instrumentation/python/getting-started/).
+
+If you are looking for **examples** on how to use the OpenTelemetry API to
+instrument your code manually, or how to set up the OpenTelemetry
+Python SDK, see https://opentelemetry.io/docs/instrumentation/python/manual/.
+
+## Python Version Support
+
+This project ensures compatibility with the currently supported versions of Python. As new Python versions are released, support for them is added, and
+as old Python versions reach their end of life, support for them is removed.
+
+We add support for new Python versions no later than 3 months after they become stable.
+
+We remove support for old Python versions 6 months after they reach their [end of life](https://devguide.python.org/devcycle/#end-of-life-branches).
+
+
+## Documentation
+
+The online documentation is available at https://opentelemetry-python.readthedocs.io/.
+To access the latest version of the documentation, see
+https://opentelemetry-python.readthedocs.io/en/latest/.
+
+## Install
+
+This repository includes multiple installable packages. The `opentelemetry-api`
+package includes abstract classes and no-op implementations that comprise the OpenTelemetry API following the
+[OpenTelemetry specification](https://github.com/open-telemetry/opentelemetry-specification).
+The `opentelemetry-sdk` package is the reference implementation of the API.
+
+Libraries that produce telemetry data should only depend on `opentelemetry-api`,
+and defer the choice of the SDK to the application developer. Applications may
+depend on `opentelemetry-sdk` or another package that implements the API.
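+
+As an illustrative sketch, a library that depends only on the API acquires a tracer through the global provider and never imports the SDK (the library name below is hypothetical):
+
+```python
+# library code: depends only on opentelemetry-api
+from opentelemetry import trace
+
+tracer = trace.get_tracer("my.library")
+
+
+def do_work():
+    with tracer.start_as_current_span("do-work"):
+        ...  # the application decides which SDK, if any, processes this span
+```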
+
+The API and SDK packages are available on the Python Package Index (PyPI). You can install them via `pip` with the following commands:
+
+```sh
+pip install opentelemetry-api
+pip install opentelemetry-sdk
+```
+
+The
+[`exporter/`](https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter)
+directory includes OpenTelemetry exporter packages. You can install the packages separately with the following command:
+
+```sh
+pip install opentelemetry-exporter-{exporter}
+```
+
+The
+[`propagator/`](https://github.com/open-telemetry/opentelemetry-python/tree/main/propagator)
+directory includes OpenTelemetry propagator packages. You can install the packages separately with the following command:
+
+```sh
+pip install opentelemetry-propagator-{propagator}
+```
+
+To install the development versions of these packages instead, clone or fork
+this repository and perform an [editable
+install](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs):
+
+```sh
+pip install -e ./opentelemetry-api
+pip install -e ./opentelemetry-sdk
+pip install -e ./instrumentation/opentelemetry-instrumentation-{instrumentation}
+```
+
+For additional exporter and instrumentation packages, see the
+[`opentelemetry-python-contrib`](https://github.com/open-telemetry/opentelemetry-python-contrib) repository.
+
+## Contributing
+
+For information about contributing to OpenTelemetry Python, see [CONTRIBUTING.md](CONTRIBUTING.md).
+
+We meet weekly on Thursdays at 9AM PST. The meeting is subject to change depending on contributors' availability. Check the [OpenTelemetry community calendar](https://calendar.google.com/calendar/embed?src=google.com_b79e3e90j7bbsa2n2p5an5lf60%40group.calendar.google.com) for specific dates and Zoom meeting links.
+
+Meeting notes are available as a public [Google doc](https://docs.google.com/document/d/1CIMGoIOZ-c3-igzbd6_Pnxx1SjAkjwqoYSUWxPY8XIs/edit).
+>>>>>>> upstream/main
Approvers ([@open-telemetry/python-approvers](https://github.com/orgs/open-telemetry/teams/python-approvers)):
- [Aaron Abbott](https://github.com/aabmass), Google
- [Jeremy Voss](https://github.com/jeremydvoss), Microsoft
- [Sanket Mehta](https://github.com/sanketmehta28), Cisco
+<<<<<<< HEAD
Emeritus Approvers:
@@ -108,16 +209,35 @@ Emeritus Approvers:
- [Ashutosh Goel](https://github.com/ashu658), Cisco
*Find more about the approver role in [community repository](https://github.com/open-telemetry/community/blob/main/community-membership.md#approver).*
+=======
+- [Shalev Roda](https://github.com/shalevr), Cisco
+
+Emeritus Approvers
+
+- [Ashutosh Goel](https://github.com/ashu658), Cisco
+- [Carlos Alberto Cortez](https://github.com/carlosalberto), Lightstep
+- [Christian Neumüller](https://github.com/Oberon00), Dynatrace
+- [Héctor Hernández](https://github.com/hectorhdzg), Microsoft
+- [Mauricio Vásquez](https://github.com/mauriciovasquezbernal), Kinvolk
+- [Nathaniel Ruiz Nowell](https://github.com/NathanielRN), AWS
+- [Tahir H. Butt](https://github.com/majorgreys), DataDog
+
+*For more information about the approver role, see the [community repository](https://github.com/open-telemetry/community/blob/main/community-membership.md#approver).*
+>>>>>>> upstream/main
Maintainers ([@open-telemetry/python-maintainers](https://github.com/orgs/open-telemetry/teams/python-maintainers)):
- [Diego Hurtado](https://github.com/ocelotl), Lightstep
- [Leighton Chen](https://github.com/lzchen), Microsoft
+<<<<<<< HEAD
- [Shalev Roda](https://github.com/shalevr), Cisco
+=======
+>>>>>>> upstream/main
Emeritus Maintainers:
- [Alex Boten](https://github.com/codeboten), Lightstep
+<<<<<<< HEAD
- [Owais Lone](https://github.com/owais), Splunk
- [Srikanth Chekuri](https://github.com/srikanthccv), signoz.io
@@ -137,3 +257,18 @@ Emeritus Maintainers:
+=======
+- [Chris Kleinknecht](https://github.com/c24t), Google
+- [Owais Lone](https://github.com/owais), Splunk
+- [Reiley Yang](https://github.com/reyang), Microsoft
+- [Srikanth Chekuri](https://github.com/srikanthccv), signoz.io
+- [Yusuke Tsutsumi](https://github.com/toumorokoshi), Google
+
+*For more information about the maintainer role, see the [community repository](https://github.com/open-telemetry/community/blob/main/community-membership.md#maintainer).*
+
+### Thanks to all the people who already contributed!
+
+>>>>>>> upstream/main
diff --git a/RELEASING.md b/RELEASING.md
index a30838130f..82c02eb37e 100644
--- a/RELEASING.md
+++ b/RELEASING.md
@@ -2,7 +2,11 @@
## Preparing a new major or minor release
+<<<<<<< HEAD
* Run the [Prepare release branch workflow](https://github.com/open-telemetry/opentelemetry-python-contrib/actions/workflows/prepare-release-branch.yml).
+=======
+* Run the [Prepare release branch workflow](https://github.com/open-telemetry/opentelemetry-python/actions/workflows/prepare-release-branch.yml).
+>>>>>>> upstream/main
* Press the "Run workflow" button, and leave the default branch `main` selected.
* If making a pre-release of stable components (e.g. release candidate),
enter the pre-release version number, e.g. `1.9.0rc2`.
@@ -13,21 +17,33 @@
## Preparing a new patch release
* Backport pull request(s) to the release branch.
+<<<<<<< HEAD
* Run the [Backport workflow](https://github.com/open-telemetry/opentelemetry-python-contrib/actions/workflows/backport.yml).
+=======
+ * Run the [Backport workflow](https://github.com/open-telemetry/opentelemetry-python/actions/workflows/backport.yml).
+>>>>>>> upstream/main
* Press the "Run workflow" button, then select the release branch from the dropdown list,
e.g. `release/v1.9.x`, then enter the pull request number that you want to backport,
then click the "Run workflow" button below that.
* Review and merge the backport pull request that it generates.
* Merge a pull request to the release branch updating the `CHANGELOG.md`.
* The heading for the unreleased entries should be `## Unreleased`.
+<<<<<<< HEAD
* Run the [Prepare patch release workflow](https://github.com/open-telemetry/opentelemetry-python-contrib/actions/workflows/prepare-patch-release.yml).
+=======
+* Run the [Prepare patch release workflow](https://github.com/open-telemetry/opentelemetry-python/actions/workflows/prepare-patch-release.yml).
+>>>>>>> upstream/main
* Press the "Run workflow" button, then select the release branch from the dropdown list,
e.g. `release/v1.9.x`, and click the "Run workflow" button below that.
* Review and merge the pull request that it creates for updating the version.
## Making the release
+<<<<<<< HEAD
* Run the [Release workflow](https://github.com/open-telemetry/opentelemetry-python-contrib/actions/workflows/release.yml).
+=======
+* Run the [Release workflow](https://github.com/open-telemetry/opentelemetry-python/actions/workflows/release.yml).
+>>>>>>> upstream/main
* Press the "Run workflow" button, then select the release branch from the dropdown list,
e.g. `release/v1.9.x`, and click the "Run workflow" button below that.
* This workflow will publish the artifacts and publish a GitHub release with release notes based on the change log.
@@ -69,9 +85,15 @@
## After the release
* Check PyPI
+<<<<<<< HEAD
* This should be handled automatically on release by the [publish action](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/.github/workflows/release.yml).
* Check the [action logs](https://github.com/open-telemetry/opentelemetry-python-contrib/actions/workflows/release.yml) to make sure packages have been uploaded to PyPI
* Check the release history (e.g. https://pypi.org/project/opentelemetry-instrumentation/#history) on PyPI
+=======
+ * This should be handled automatically on release by the [publish action](https://github.com/open-telemetry/opentelemetry-python/blob/main/.github/workflows/publish.yml).
+ * Check the [action logs](https://github.com/open-telemetry/opentelemetry-python/actions?query=workflow%3APublish) to make sure packages have been uploaded to PyPI
+ * Check the release history (e.g. https://pypi.org/project/opentelemetry-api/#history) on PyPI
+>>>>>>> upstream/main
* If for some reason the action failed, see [Publish failed](#publish-failed) below
* Move stable tag
* Run the following (TODO automate):
@@ -96,4 +118,8 @@ If for some reason the action failed, do it manually:
- Build distributions with `./scripts/build.sh`
- Delete distributions we don't want to push (e.g. `testutil`)
- Push to PyPI as `twine upload --skip-existing --verbose dist/*`
-- Double check PyPI!
\ No newline at end of file
+<<<<<<< HEAD
+- Double check PyPI!
+=======
+- Double check PyPI!
+>>>>>>> upstream/main
diff --git a/dev-requirements.txt b/dev-requirements.txt
index fffb4c445d..442fcb16a7 100644
--- a/dev-requirements.txt
+++ b/dev-requirements.txt
@@ -10,10 +10,24 @@ sphinx-autodoc-typehints==1.25.2
pytest==7.1.3
pytest-cov==4.1.0
readme-renderer==42.0
+<<<<<<< HEAD
bleach==4.1.0 # transient dependency for readme-renderer
protobuf~=3.13
markupsafe>=2.0.1
codespell==2.1.0
requests==2.31.0
ruamel.yaml==0.17.21
+=======
+# temporary fix. we should update the jinja, flask deps
+# See https://github.com/pallets/markupsafe/issues/282
+# breaking change introduced in markupsafe causes jinja, flask to break
+markupsafe==2.0.1
+bleach==4.1.0 # pinned because a later version introduced a breaking change
+codespell==2.1.0
+requests==2.31.0
+ruamel.yaml==0.17.21
+asgiref==3.7.2
+psutil==5.9.6
+GitPython==3.1.40
+>>>>>>> upstream/main
flaky==3.7.0
diff --git a/docs-requirements.txt b/docs-requirements.txt
index 965ea850c2..91360c3e6a 100644
--- a/docs-requirements.txt
+++ b/docs-requirements.txt
@@ -1,6 +1,7 @@
sphinx==7.1.2
sphinx-rtd-theme==2.0.0rc4
sphinx-autodoc-typehints==1.25.2
+<<<<<<< HEAD
# Need to install the api/sdk in the venv for autodoc. Modifying sys.path
# doesn't work for pkg_resources.
@@ -50,3 +51,32 @@ httpx>=0.18.0
# indirect dependency pins
markupsafe==2.0.1
itsdangerous==2.0.1
+=======
+# used to generate docs for the website
+sphinx-jekyll-builder==0.3.0
+
+# Need to install the api/sdk in the venv for autodoc. Modifying sys.path
+# doesn't work for pkg_resources.
+./opentelemetry-api
+./opentelemetry-semantic-conventions
+./opentelemetry-sdk
+./shim/opentelemetry-opencensus-shim
+./shim/opentelemetry-opentracing-shim
+
+# Required by instrumentation and exporter packages
+grpcio~=1.27
+Deprecated~=1.2
+django~=4.2
+flask~=1.0
+opentracing~=2.2.0
+thrift~=0.10
+wrapt>=1.0.0,<2.0.0
+# temporary fix. we should update the jinja, flask deps
+# See https://github.com/pallets/markupsafe/issues/282
+# breaking change introduced in markupsafe causes jinja, flask to break
+markupsafe==2.0.1
+
+# Jaeger generated protobufs do not currently support protobuf 4.x. This can be removed once
+# they're regenerated.
+protobuf~=3.19
+>>>>>>> upstream/main
diff --git a/docs/api/_logs.rst b/docs/api/_logs.rst
new file mode 100644
index 0000000000..85ae72dc0d
--- /dev/null
+++ b/docs/api/_logs.rst
@@ -0,0 +1,14 @@
+opentelemetry._logs package
+=============================
+
+Submodules
+----------
+
+.. toctree::
+
+ _logs.severity
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry._logs
diff --git a/docs/api/_logs.severity.rst b/docs/api/_logs.severity.rst
new file mode 100644
index 0000000000..4e31e70cf8
--- /dev/null
+++ b/docs/api/_logs.severity.rst
@@ -0,0 +1,4 @@
+opentelemetry._logs.severity
+============================
+
+.. automodule:: opentelemetry._logs.severity
\ No newline at end of file
diff --git a/docs/api/baggage.propagation.rst b/docs/api/baggage.propagation.rst
new file mode 100644
index 0000000000..7c8eba7940
--- /dev/null
+++ b/docs/api/baggage.propagation.rst
@@ -0,0 +1,7 @@
+opentelemetry.baggage.propagation package
+====================================================
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.baggage.propagation
diff --git a/docs/api/baggage.rst b/docs/api/baggage.rst
new file mode 100644
index 0000000000..34712e78bd
--- /dev/null
+++ b/docs/api/baggage.rst
@@ -0,0 +1,14 @@
+opentelemetry.baggage package
+========================================
+
+Subpackages
+-----------
+
+.. toctree::
+
+ baggage.propagation
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.baggage
diff --git a/docs/api/context.context.rst b/docs/api/context.context.rst
new file mode 100644
index 0000000000..331557d2dd
--- /dev/null
+++ b/docs/api/context.context.rst
@@ -0,0 +1,7 @@
+opentelemetry.context.context module
+==========================================
+
+.. automodule:: opentelemetry.context.context
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/api/context.rst b/docs/api/context.rst
new file mode 100644
index 0000000000..7aef5ffe7d
--- /dev/null
+++ b/docs/api/context.rst
@@ -0,0 +1,14 @@
+opentelemetry.context package
+=============================
+
+Submodules
+----------
+
+.. toctree::
+
+ context.context
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.context
diff --git a/docs/api/environment_variables.rst b/docs/api/environment_variables.rst
new file mode 100644
index 0000000000..284675cf08
--- /dev/null
+++ b/docs/api/environment_variables.rst
@@ -0,0 +1,7 @@
+opentelemetry.environment_variables package
+===========================================
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.environment_variables
diff --git a/docs/api/index.rst b/docs/api/index.rst
new file mode 100644
index 0000000000..c1dffd6e75
--- /dev/null
+++ b/docs/api/index.rst
@@ -0,0 +1,16 @@
+OpenTelemetry Python API
+========================
+
+.. TODO: what is the API
+
+.. toctree::
+ :maxdepth: 1
+
+ _logs
+ baggage
+ context
+ propagate
+ propagators
+ trace
+ metrics
+ environment_variables
diff --git a/docs/api/metrics.rst b/docs/api/metrics.rst
new file mode 100644
index 0000000000..93a8cbe720
--- /dev/null
+++ b/docs/api/metrics.rst
@@ -0,0 +1,10 @@
+opentelemetry.metrics package
+=============================
+
+.. toctree::
+
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.metrics
diff --git a/docs/api/propagate.rst b/docs/api/propagate.rst
new file mode 100644
index 0000000000..a86beeaddc
--- /dev/null
+++ b/docs/api/propagate.rst
@@ -0,0 +1,7 @@
+opentelemetry.propagate package
+========================================
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.propagate
diff --git a/docs/api/propagators.composite.rst b/docs/api/propagators.composite.rst
new file mode 100644
index 0000000000..930ca0b88d
--- /dev/null
+++ b/docs/api/propagators.composite.rst
@@ -0,0 +1,7 @@
+opentelemetry.propagators.composite
+====================================================
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.propagators.composite
diff --git a/docs/api/propagators.rst b/docs/api/propagators.rst
new file mode 100644
index 0000000000..08825315be
--- /dev/null
+++ b/docs/api/propagators.rst
@@ -0,0 +1,10 @@
+opentelemetry.propagators package
+========================================
+
+Subpackages
+-----------
+
+.. toctree::
+
+ propagators.textmap
+ propagators.composite
diff --git a/docs/api/propagators.textmap.rst b/docs/api/propagators.textmap.rst
new file mode 100644
index 0000000000..a5db537b80
--- /dev/null
+++ b/docs/api/propagators.textmap.rst
@@ -0,0 +1,7 @@
+opentelemetry.propagators.textmap
+====================================================
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.propagators.textmap
diff --git a/docs/api/trace.rst b/docs/api/trace.rst
new file mode 100644
index 0000000000..65d9b4d8c8
--- /dev/null
+++ b/docs/api/trace.rst
@@ -0,0 +1,15 @@
+opentelemetry.trace package
+===========================
+
+Submodules
+----------
+
+.. toctree::
+
+ trace.status
+ trace.span
+
+Module contents
+---------------
+
+.. automodule:: opentelemetry.trace
\ No newline at end of file
diff --git a/docs/api/trace.span.rst b/docs/api/trace.span.rst
new file mode 100644
index 0000000000..94b36930df
--- /dev/null
+++ b/docs/api/trace.span.rst
@@ -0,0 +1,7 @@
+opentelemetry.trace.span
+========================
+
+.. automodule:: opentelemetry.trace.span
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/api/trace.status.rst b/docs/api/trace.status.rst
new file mode 100644
index 0000000000..0205446c80
--- /dev/null
+++ b/docs/api/trace.status.rst
@@ -0,0 +1,7 @@
+opentelemetry.trace.status
+==========================
+
+.. automodule:: opentelemetry.trace.status
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/conf.py b/docs/conf.py
index 4b2bda04a8..b317d5d8f9 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -12,7 +12,10 @@
import os
import sys
+<<<<<<< HEAD
from configparser import ConfigParser
+=======
+>>>>>>> upstream/main
from os import listdir
from os.path import isdir, join
@@ -24,7 +27,14 @@
settings.configure()
+<<<<<<< HEAD
source_dirs = []
+=======
+
+source_dirs = [
+ os.path.abspath("../opentelemetry-instrumentation/src/"),
+]
+>>>>>>> upstream/main
exp = "../exporter"
exp_dirs = [
@@ -33,6 +43,7 @@
if isdir(join(exp, f))
]
+<<<<<<< HEAD
instr = "../instrumentation"
instr_dirs = [
os.path.abspath("/".join(["../instrumentation", f, "src"]))
@@ -65,6 +76,20 @@
# -- Project information -----------------------------------------------------
project = "OpenTelemetry Python Contrib"
+=======
+shim = "../shim"
+shim_dirs = [
+ os.path.abspath("/".join(["../shim", f, "src"]))
+ for f in listdir(shim)
+ if isdir(join(shim, f))
+]
+
+sys.path[:0] = source_dirs + exp_dirs + shim_dirs
+
+# -- Project information -----------------------------------------------------
+
+project = "OpenTelemetry Python"
+>>>>>>> upstream/main
copyright = "OpenTelemetry Authors" # pylint: disable=redefined-builtin
author = "OpenTelemetry Authors"
@@ -104,10 +129,14 @@
"aiohttp": ("https://aiohttp.readthedocs.io/en/stable/", None),
"wrapt": ("https://wrapt.readthedocs.io/en/latest/", None),
"pymongo": ("https://pymongo.readthedocs.io/en/stable/", None),
+<<<<<<< HEAD
"opentelemetry": (
"https://opentelemetry-python.readthedocs.io/en/latest/",
None,
),
+=======
+ "grpc": ("https://grpc.github.io/grpc/python/", None),
+>>>>>>> upstream/main
}
# http://www.sphinx-doc.org/en/master/config.html#confval-nitpicky
@@ -116,6 +145,7 @@
# Sphinx does not recognize generic type TypeVars
# Container supposedly were fixed, but does not work
# https://github.com/sphinx-doc/sphinx/pull/3744
+<<<<<<< HEAD
nitpick_ignore = []
cfg = ConfigParser()
@@ -140,6 +170,29 @@ def getlistcfg(strval):
for item in items:
nitpick_ignore.append((category.replace("-", ":"), item))
+=======
+nitpick_ignore = [
+ ("py:class", "ValueT"),
+ ("py:class", "CarrierT"),
+ ("py:obj", "opentelemetry.propagators.textmap.CarrierT"),
+ ("py:obj", "Union"),
+ (
+ "py:class",
+ "opentelemetry.sdk.metrics._internal.instrument._Synchronous",
+ ),
+ (
+ "py:class",
+ "opentelemetry.sdk.metrics._internal.instrument._Asynchronous",
+ ),
+ # Even if wrapt is added to intersphinx_mapping, sphinx keeps failing
+ # with "class reference target not found: ObjectProxy".
+ ("py:class", "ObjectProxy"),
+ (
+ "py:class",
+ "opentelemetry.trace._LinkBase",
+ ),
+]
+>>>>>>> upstream/main
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
@@ -147,13 +200,31 @@ def getlistcfg(strval):
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
+<<<<<<< HEAD
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
+=======
+exclude_patterns = [
+ "_build",
+ "Thumbs.db",
+ ".DS_Store",
+ "examples/fork-process-model/flask-gunicorn",
+ "examples/fork-process-model/flask-uwsgi",
+ "examples/error_handler/error_handler_0",
+ "examples/error_handler/error_handler_1",
+]
+
+_exclude_members = ["_abc_impl"]
+>>>>>>> upstream/main
autodoc_default_options = {
"members": True,
"undoc-members": True,
"show-inheritance": True,
"member-order": "bysource",
+<<<<<<< HEAD
+=======
+ "exclude-members": ",".join(_exclude_members),
+>>>>>>> upstream/main
}
# -- Options for HTML output -------------------------------------------------
@@ -173,16 +244,30 @@ def getlistcfg(strval):
if branch is None or branch == "latest":
branch = "main"
+<<<<<<< HEAD
REPO = "open-telemetry/opentelemetry-python-contrib/"
+=======
+REPO = "open-telemetry/opentelemetry-python/"
+>>>>>>> upstream/main
scm_raw_web = "https://raw.githubusercontent.com/" + REPO + branch
scm_web = "https://github.com/" + REPO + "blob/" + branch
# Store variables in the epilogue so they are globally available.
+<<<<<<< HEAD
rst_epilog = f"""
.. |SCM_WEB| replace:: {scm_web}
.. |SCM_RAW_WEB| replace:: {scm_raw_web}
.. |SCM_BRANCH| replace:: {branch}
"""
+=======
+rst_epilog = """
+.. |SCM_WEB| replace:: {s}
+.. |SCM_RAW_WEB| replace:: {sr}
+.. |SCM_BRANCH| replace:: {b}
+""".format(
+ s=scm_web, sr=scm_raw_web, b=branch
+)
+>>>>>>> upstream/main
# used to have links to repo files
extlinks = {
diff --git a/docs/examples/auto-instrumentation/README.rst b/docs/examples/auto-instrumentation/README.rst
new file mode 100644
index 0000000000..b9f3692a37
--- /dev/null
+++ b/docs/examples/auto-instrumentation/README.rst
@@ -0,0 +1,7 @@
+Auto-instrumentation
+====================
+
+To learn about automatic instrumentation and how to run the example in this
+directory, see `Automatic Instrumentation`_.
+
+.. _Automatic Instrumentation: https://opentelemetry.io/docs/instrumentation/python/automatic/example
diff --git a/docs/examples/auto-instrumentation/client.py b/docs/examples/auto-instrumentation/client.py
new file mode 100644
index 0000000000..4f70e2b933
--- /dev/null
+++ b/docs/examples/auto-instrumentation/client.py
@@ -0,0 +1,48 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from sys import argv
+
+from requests import get
+
+from opentelemetry import trace
+from opentelemetry.propagate import inject
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer_provider().get_tracer(__name__)
+
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+
+assert len(argv) == 2
+
+with tracer.start_as_current_span("client"):
+
+ with tracer.start_as_current_span("client-server"):
+ headers = {}
+ inject(headers)
+ requested = get(
+ "http://localhost:8082/server_request",
+ params={"param": argv[1]},
+ headers=headers,
+ )
+
+ assert requested.status_code == 200
diff --git a/docs/examples/auto-instrumentation/server_automatic.py b/docs/examples/auto-instrumentation/server_automatic.py
new file mode 100644
index 0000000000..9c247a049a
--- /dev/null
+++ b/docs/examples/auto-instrumentation/server_automatic.py
@@ -0,0 +1,27 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from flask import Flask, request
+
+app = Flask(__name__)
+
+
+@app.route("/server_request")
+def server_request():
+ print(request.args.get("param"))
+ return "served"
+
+
+if __name__ == "__main__":
+ app.run(port=8082)
diff --git a/docs/examples/auto-instrumentation/server_manual.py b/docs/examples/auto-instrumentation/server_manual.py
new file mode 100644
index 0000000000..38abc02fb4
--- /dev/null
+++ b/docs/examples/auto-instrumentation/server_manual.py
@@ -0,0 +1,53 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from flask import Flask, request
+
+from opentelemetry.instrumentation.wsgi import collect_request_attributes
+from opentelemetry.propagate import extract
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+from opentelemetry.trace import (
+ SpanKind,
+ get_tracer_provider,
+ set_tracer_provider,
+)
+
+app = Flask(__name__)
+
+set_tracer_provider(TracerProvider())
+tracer = get_tracer_provider().get_tracer(__name__)
+
+get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+
+@app.route("/server_request")
+def server_request():
+ with tracer.start_as_current_span(
+ "server_request",
+ context=extract(request.headers),
+ kind=SpanKind.SERVER,
+ attributes=collect_request_attributes(request.environ),
+ ):
+ print(request.args.get("param"))
+ return "served"
+
+
+if __name__ == "__main__":
+ app.run(port=8082)
diff --git a/docs/examples/auto-instrumentation/server_programmatic.py b/docs/examples/auto-instrumentation/server_programmatic.py
new file mode 100644
index 0000000000..759613e50d
--- /dev/null
+++ b/docs/examples/auto-instrumentation/server_programmatic.py
@@ -0,0 +1,45 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from flask import Flask, request
+
+from opentelemetry.instrumentation.flask import FlaskInstrumentor
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+from opentelemetry.trace import get_tracer_provider, set_tracer_provider
+
+set_tracer_provider(TracerProvider())
+get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+instrumentor = FlaskInstrumentor()
+
+app = Flask(__name__)
+
+instrumentor.instrument_app(app)
+# instrumentor.instrument_app(app, excluded_urls="/server_request")
+
+
+@app.route("/server_request")
+def server_request():
+ print(request.args.get("param"))
+ return "served"
+
+
+if __name__ == "__main__":
+ app.run(port=8082)
diff --git a/docs/examples/basic_context/README.rst b/docs/examples/basic_context/README.rst
new file mode 100644
index 0000000000..1499a4bf8e
--- /dev/null
+++ b/docs/examples/basic_context/README.rst
@@ -0,0 +1,36 @@
+Basic Context
+=============
+
+These examples show how context is propagated through Spans in OpenTelemetry. There are three different
+examples:
+
+* implicit_context: Shows how starting a span implicitly creates context.
+* child_context: Shows how context is propagated through child spans.
+* async_context: Shows how context can be shared in another coroutine.
+
+The source files of these examples are available :scm_web:`here `.
+
+Installation
+------------
+
+.. code-block:: sh
+
+ pip install opentelemetry-api
+ pip install opentelemetry-sdk
+
+Run the Example
+---------------
+
+.. code-block:: sh
+
+ python <example_name>.py
+
+The output will be shown in the console.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../api/trace`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/basic_context/async_context.py b/docs/examples/basic_context/async_context.py
new file mode 100644
index 0000000000..d80ccb31e0
--- /dev/null
+++ b/docs/examples/basic_context/async_context.py
@@ -0,0 +1,38 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+from opentelemetry import baggage, trace
+from opentelemetry.sdk.trace import TracerProvider
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer(__name__)
+
+
+async def async_span(span):
+ with trace.use_span(span):
+ ctx = baggage.set_baggage("foo", "bar")
+ return ctx
+
+
+async def main():
+ span = tracer.start_span(name="span")
+ ctx = await async_span(span)
+ print(baggage.get_all(context=ctx))
+
+
+asyncio.run(main())
diff --git a/docs/examples/basic_context/child_context.py b/docs/examples/basic_context/child_context.py
new file mode 100644
index 0000000000..d2a6d50136
--- /dev/null
+++ b/docs/examples/basic_context/child_context.py
@@ -0,0 +1,29 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import baggage, trace
+
+tracer = trace.get_tracer(__name__)
+
+global_ctx = baggage.set_baggage("context", "global")
+with tracer.start_as_current_span(name="root span") as root_span:
+ parent_ctx = baggage.set_baggage("context", "parent")
+ with tracer.start_as_current_span(
+ name="child span", context=parent_ctx
+ ) as child_span:
+ child_ctx = baggage.set_baggage("context", "child")
+
+print(baggage.get_baggage("context", global_ctx))
+print(baggage.get_baggage("context", parent_ctx))
+print(baggage.get_baggage("context", child_ctx))
diff --git a/docs/examples/basic_context/implicit_context.py b/docs/examples/basic_context/implicit_context.py
new file mode 100644
index 0000000000..0d89448058
--- /dev/null
+++ b/docs/examples/basic_context/implicit_context.py
@@ -0,0 +1,25 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import baggage, trace
+from opentelemetry.sdk.trace import TracerProvider
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer(__name__)
+
+with tracer.start_span(name="root span") as root_span:
+ ctx = baggage.set_baggage("foo", "bar")
+
+print(f"Global context baggage: {baggage.get_all()}")
+print(f"Span context baggage: {baggage.get_all(context=ctx)}")
diff --git a/docs/examples/basic_tracer/README.rst b/docs/examples/basic_tracer/README.rst
new file mode 100644
index 0000000000..572b4dc870
--- /dev/null
+++ b/docs/examples/basic_tracer/README.rst
@@ -0,0 +1,34 @@
+Basic Trace
+===========
+
+These examples show how to use OpenTelemetry to create and export Spans. There are two different examples:
+
+* basic_trace: Shows how to configure a SpanProcessor and Exporter, and how to create a tracer and span.
+* resources: Shows how to add resource information to a Provider.
+
+The source files of these examples are available :scm_web:`here `.
+
+Installation
+------------
+
+.. code-block:: sh
+
+ pip install opentelemetry-api
+ pip install opentelemetry-sdk
+
+Run the Example
+---------------
+
+.. code-block:: sh
+
+ python <example_name>.py
+
+The output will be shown in the console.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../api/trace`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/basic_tracer/basic_trace.py b/docs/examples/basic_tracer/basic_trace.py
new file mode 100644
index 0000000000..bb1e341a61
--- /dev/null
+++ b/docs/examples/basic_tracer/basic_trace.py
@@ -0,0 +1,28 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+tracer = trace.get_tracer(__name__)
+with tracer.start_as_current_span("foo"):
+ print("Hello world!")
diff --git a/docs/examples/basic_tracer/resources.py b/docs/examples/basic_tracer/resources.py
new file mode 100644
index 0000000000..87853a8f66
--- /dev/null
+++ b/docs/examples/basic_tracer/resources.py
@@ -0,0 +1,33 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+# Use Resource.create() instead of constructor directly
+resource = Resource.create({"service.name": "basic_service"})
+
+trace.set_tracer_provider(TracerProvider(resource=resource))
+
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+tracer = trace.get_tracer(__name__)
+with tracer.start_as_current_span("foo"):
+ print("Hello world!")
diff --git a/docs/examples/django/README.rst b/docs/examples/django/README.rst
new file mode 100644
index 0000000000..1dd8999c03
--- /dev/null
+++ b/docs/examples/django/README.rst
@@ -0,0 +1,140 @@
+Django Instrumentation
+======================
+
+This shows how to use ``opentelemetry-instrumentation-django`` to automatically instrument a
+Django app.
+
+For your convenience, a Django app is already provided in this directory.
+
+Preparation
+-----------
+
+This example will be executed in a separate virtual environment:
+
+.. code-block::
+
+ $ mkdir django_auto_instrumentation
+ $ virtualenv django_auto_instrumentation
+ $ source django_auto_instrumentation/bin/activate
+
+
+Installation
+------------
+
+.. code-block::
+
+ $ pip install opentelemetry-sdk
+ $ pip install opentelemetry-instrumentation-django
+ $ pip install requests
+
+
+Execution
+---------
+
+Execution of the Django app
+...........................
+
+This example uses Django features intended for development environments.
+The ``runserver`` option should not be used in production.
+
+Set this environment variable first:
+
+#. ``export DJANGO_SETTINGS_MODULE=instrumentation_example.settings``
+
+To instrument your Django app with OpenTelemetry, use
+``opentelemetry.instrumentation.django.DjangoInstrumentor``.
+
+Clone the ``opentelemetry-python`` repository and go to ``opentelemetry-python/docs/examples/django``.
+
+Once there, open the ``manage.py`` file. The call to ``DjangoInstrumentor().instrument()``
+in ``main`` is all that is needed to instrument the app.
+
+Run the Django app with ``python manage.py runserver --noreload``.
+The ``--noreload`` flag prevents Django from running ``main`` twice.
+
+Execution of the client
+.......................
+
+Open up a new console and activate the previous virtual environment there too:
+
+``source django_auto_instrumentation/bin/activate``
+
+Go to ``opentelemetry-python/docs/examples/django``; once there,
+run the client with:
+
+``python client.py hello``
+
+Go to the previous console, where the Django app is running. You should see
+output similar to this:
+
+.. code-block::
+
+ {
+ "name": "home_page_view",
+ "context": {
+ "trace_id": "0xed88755c56d95d05a506f5f70e7849b9",
+ "span_id": "0x0a94c7a60e0650d5",
+ "trace_state": "{}"
+ },
+ "kind": "SpanKind.SERVER",
+ "parent_id": "0x3096ef92e621c22d",
+ "start_time": "2020-04-26T01:49:57.205833Z",
+ "end_time": "2020-04-26T01:49:57.206214Z",
+ "status": {
+ "status_code": "OK"
+ },
+ "attributes": {
+ "http.request.method": "GET",
+ "server.address": "localhost",
+ "url.scheme": "http",
+ "server.port": 8000,
+ "url.full": "http://localhost:8000/?param=hello",
+ "server.socket.address": "127.0.0.1",
+ "network.protocol.version": "1.1",
+ "http.response.status_code": 200
+ },
+ "events": [],
+ "links": []
+ }
+
+This output shows the spans automatically generated by the OpenTelemetry Django
+instrumentation package.
+
+Disabling Django Instrumentation
+--------------------------------
+
+Django's instrumentation can be disabled by setting the following environment variable:
+
+``export OTEL_PYTHON_DJANGO_INSTRUMENT=False``
+
+Auto Instrumentation
+--------------------
+
+This same example can be run using auto instrumentation. Comment out the call
+to ``DjangoInstrumentor().instrument()`` in ``main``, then run the Django app
+with ``opentelemetry-instrument python manage.py runserver --noreload``.
+Repeat the steps with the client; the result should be the same.
+
+Usage with Auto Instrumentation and uWSGI
+-----------------------------------------
+
+uWSGI and Django can be used together with auto instrumentation. To do so,
+first install uWSGI in the previous virtual environment:
+
+``pip install uwsgi``
+
+Once that is done, run the server with ``uwsgi`` from the directory that
+contains ``instrumentation_example``:
+
+``opentelemetry-instrument uwsgi --http :8000 --module instrumentation_example.wsgi``
+
+This should start one uWSGI worker in your console. Open up a browser and point
+it to ``localhost:8000``. This request should cause a span to be exported and
+displayed in the server console.
+
+References
+----------
+
+* `Django <https://www.djangoproject.com/>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
+* `OpenTelemetry Django extension <https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django>`_
diff --git a/docs/examples/django/client.py b/docs/examples/django/client.py
new file mode 100644
index 0000000000..859fe4a9da
--- /dev/null
+++ b/docs/examples/django/client.py
@@ -0,0 +1,46 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from sys import argv
+
+from requests import get
+
+from opentelemetry import trace
+from opentelemetry.propagate import inject
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer_provider().get_tracer(__name__)
+
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+
+with tracer.start_as_current_span("client"):
+
+ with tracer.start_as_current_span("client-server"):
+ headers = {}
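+ # inject() writes the current trace context (by default as W3C
+ # traceparent/baggage headers) into the carrier dict, so the server
+ # can continue the same trace.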
+ inject(headers)
+ requested = get(
+ "http://localhost:8000",
+ params={"param": argv[1]},
+ headers=headers,
+ )
+
+ assert requested.status_code == 200
diff --git a/docs/examples/django/instrumentation_example/__init__.py b/docs/examples/django/instrumentation_example/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/examples/django/instrumentation_example/asgi.py b/docs/examples/django/instrumentation_example/asgi.py
new file mode 100644
index 0000000000..dd8fb568f4
--- /dev/null
+++ b/docs/examples/django/instrumentation_example/asgi.py
@@ -0,0 +1,31 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+ASGI config for instrumentation_example project.
+
+It exposes the ASGI callable as a module-level variable named ``application``.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/3.0/howto/deployment/asgi/
+"""
+
+import os
+
+from django.core.asgi import get_asgi_application
+
+os.environ.setdefault(
+ "DJANGO_SETTINGS_MODULE", "instrumentation_example.settings"
+)
+
+application = get_asgi_application()
diff --git a/docs/examples/django/instrumentation_example/settings.py b/docs/examples/django/instrumentation_example/settings.py
new file mode 100644
index 0000000000..b5b8897b91
--- /dev/null
+++ b/docs/examples/django/instrumentation_example/settings.py
@@ -0,0 +1,133 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+Django settings for instrumentation_example project.
+
+Generated by "django-admin startproject" using Django 3.0.4.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/3.0/topics/settings/
+
+For the full list of settings and their values, see
+https://docs.djangoproject.com/en/3.0/ref/settings/
+"""
+
+import os
+
+# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
+BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+
+
+# Quick-start development settings - unsuitable for production
+# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
+
+# SECURITY WARNING: keep the secret key used in production secret!
+SECRET_KEY = "it%*!=l2(fcawu=!m-06nj(iq2j#%$fu6)myi*b9i5ojk+6+"
+
+# SECURITY WARNING: don't run with debug turned on in production!
+DEBUG = True
+
+ALLOWED_HOSTS = []
+
+
+# Application definition
+
+INSTALLED_APPS = [
+ "django.contrib.admin",
+ "django.contrib.auth",
+ "django.contrib.contenttypes",
+ "django.contrib.sessions",
+ "django.contrib.messages",
+ "django.contrib.staticfiles",
+]
+
+MIDDLEWARE = [
+ "django.middleware.security.SecurityMiddleware",
+ "django.contrib.sessions.middleware.SessionMiddleware",
+ "django.middleware.common.CommonMiddleware",
+ "django.middleware.csrf.CsrfViewMiddleware",
+ "django.contrib.auth.middleware.AuthenticationMiddleware",
+ "django.contrib.messages.middleware.MessageMiddleware",
+ "django.middleware.clickjacking.XFrameOptionsMiddleware",
+]
+
+ROOT_URLCONF = "instrumentation_example.urls"
+
+TEMPLATES = [
+ {
+ "BACKEND": "django.template.backends.django.DjangoTemplates",
+ "DIRS": [],
+ "APP_DIRS": True,
+ "OPTIONS": {
+ "context_processors": [
+ "django.template.context_processors.debug",
+ "django.template.context_processors.request",
+ "django.contrib.auth.context_processors.auth",
+ "django.contrib.messages.context_processors.messages",
+ ],
+ },
+ },
+]
+
+WSGI_APPLICATION = "instrumentation_example.wsgi.application"
+
+
+# Database
+# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
+
+DATABASES = {
+ "default": {
+ "ENGINE": "django.db.backends.sqlite3",
+ "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
+ }
+}
+
+
+# Password validation
+# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
+
+AUTH_PASSWORD_VALIDATORS = [
+ {
+ "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
+ },
+ {
+ "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
+ },
+ {
+ "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
+ },
+ {
+ "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
+ },
+]
+
+
+# Internationalization
+# https://docs.djangoproject.com/en/3.0/topics/i18n/
+
+LANGUAGE_CODE = "en-us"
+
+TIME_ZONE = "UTC"
+
+USE_I18N = True
+
+USE_L10N = True
+
+USE_TZ = True
+
+
+# Static files (CSS, JavaScript, Images)
+# https://docs.djangoproject.com/en/3.0/howto/static-files/
+
+STATIC_URL = "/static/"
diff --git a/docs/examples/django/instrumentation_example/urls.py b/docs/examples/django/instrumentation_example/urls.py
new file mode 100644
index 0000000000..292467155f
--- /dev/null
+++ b/docs/examples/django/instrumentation_example/urls.py
@@ -0,0 +1,35 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""instrumentation_example URL Configuration
+
+The `urlpatterns` list routes URLs to views. For more information please see:
+ https://docs.djangoproject.com/en/3.0/topics/http/urls/
+Examples:
+Function views
+ 1. Add an import: from my_app import views
+ 2. Add a URL to urlpatterns: path("", views.home, name="home")
+Class-based views
+ 1. Add an import: from other_app.views import Home
+ 2. Add a URL to urlpatterns: path("", Home.as_view(), name="home")
+Including another URLconf
+ 1. Import the include() function: from django.urls import include, path
+ 2. Add a URL to urlpatterns: path("blog/", include("blog.urls"))
+"""
+from django.contrib import admin
+from django.urls import include, path
+
+urlpatterns = [
+ path("admin/", admin.site.urls),
+ path("", include("pages.urls")),
+]
diff --git a/docs/examples/django/instrumentation_example/wsgi.py b/docs/examples/django/instrumentation_example/wsgi.py
new file mode 100644
index 0000000000..70ea9e0db5
--- /dev/null
+++ b/docs/examples/django/instrumentation_example/wsgi.py
@@ -0,0 +1,31 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+WSGI config for instrumentation_example project.
+
+It exposes the WSGI callable as a module-level variable named ``application``.
+
+For more information on this file, see
+https://docs.djangoproject.com/en/3.0/howto/deployment/wsgi/
+"""
+
+import os
+
+from django.core.wsgi import get_wsgi_application
+
+os.environ.setdefault(
+ "DJANGO_SETTINGS_MODULE", "instrumentation_example.settings"
+)
+
+application = get_wsgi_application()
diff --git a/docs/examples/django/manage.py b/docs/examples/django/manage.py
new file mode 100755
index 0000000000..bc2d44886b
--- /dev/null
+++ b/docs/examples/django/manage.py
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Django"s command-line utility for administrative tasks."""
+import os
+import sys
+
+from opentelemetry.instrumentation.django import DjangoInstrumentor
+
+
+def main():
+ os.environ.setdefault(
+ "DJANGO_SETTINGS_MODULE", "instrumentation_example.settings"
+ )
+
+ # This call is what makes the Django application be instrumented
+ DjangoInstrumentor().instrument()
+
+ try:
+ from django.core.management import execute_from_command_line
+ except ImportError as exc:
+ raise ImportError(
+ "Couldn't import Django. Are you sure it's installed and "
+ "available on your PYTHONPATH environment variable? Did you "
+ "forget to activate a virtual environment?"
+ ) from exc
+ execute_from_command_line(sys.argv)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/docs/examples/django/pages/__init__.py b/docs/examples/django/pages/__init__.py
new file mode 100644
index 0000000000..5855e41f3a
--- /dev/null
+++ b/docs/examples/django/pages/__init__.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+default_app_config = "pages.apps.PagesConfig"
diff --git a/docs/examples/django/pages/apps.py b/docs/examples/django/pages/apps.py
new file mode 100644
index 0000000000..0f12b7b66c
--- /dev/null
+++ b/docs/examples/django/pages/apps.py
@@ -0,0 +1,18 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from django.apps import AppConfig
+
+
+class PagesConfig(AppConfig):
+ name = "pages"
diff --git a/docs/examples/django/pages/migrations/__init__.py b/docs/examples/django/pages/migrations/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/examples/django/pages/urls.py b/docs/examples/django/pages/urls.py
new file mode 100644
index 0000000000..99c95765a4
--- /dev/null
+++ b/docs/examples/django/pages/urls.py
@@ -0,0 +1,18 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from django.urls import path
+
+from .views import home_page_view
+
+urlpatterns = [path("", home_page_view, name="home")]
diff --git a/docs/examples/django/pages/views.py b/docs/examples/django/pages/views.py
new file mode 100644
index 0000000000..e805f43186
--- /dev/null
+++ b/docs/examples/django/pages/views.py
@@ -0,0 +1,31 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from django.http import HttpResponse
+
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+
+def home_page_view(request):
+ return HttpResponse("Hello, world")
diff --git a/docs/examples/error_handler/README.rst b/docs/examples/error_handler/README.rst
new file mode 100644
index 0000000000..b879e53e9b
--- /dev/null
+++ b/docs/examples/error_handler/README.rst
@@ -0,0 +1,153 @@
+Global Error Handler
+====================
+
+Overview
+--------
+
+This example shows how to use the global error handler.
+
+
+Preparation
+-----------
+
+This example will be executed in a separate virtual environment:
+
+.. code:: sh
+
+ $ mkdir global_error_handler
+ $ virtualenv global_error_handler
+ $ source global_error_handler/bin/activate
+
+Installation
+------------
+
+First we install ``opentelemetry-sdk``, the only dependency. Afterwards, two
+error handlers are installed: ``error_handler_0`` handles
+``ZeroDivisionError`` exceptions, and ``error_handler_1`` handles
+``IndexError`` and ``KeyError`` exceptions.
+
+.. code:: sh
+
+ $ pip install opentelemetry-sdk
+ $ git clone https://github.com/open-telemetry/opentelemetry-python.git
+ $ pip install -e opentelemetry-python/docs/examples/error_handler/error_handler_0
+ $ pip install -e opentelemetry-python/docs/examples/error_handler/error_handler_1
+
+Execution
+---------
+
+An example is provided in
+``opentelemetry-python/docs/examples/error_handler/example.py``.
+
+Run it directly; you should get output similar to this:
+
+.. code:: pytb
+
+ ErrorHandler0 handling a ZeroDivisionError
+ Traceback (most recent call last):
+ File "test.py", line 5, in
+ 1 / 0
+ ZeroDivisionError: division by zero
+
+ ErrorHandler1 handling an IndexError
+ Traceback (most recent call last):
+ File "test.py", line 11, in
+ [1][2]
+ IndexError: list index out of range
+
+ ErrorHandler1 handling a KeyError
+ Traceback (most recent call last):
+ File "test.py", line 17, in
+ {1: 2}[2]
+ KeyError: 2
+
+ Error handled by default error handler:
+ Traceback (most recent call last):
+ File "test.py", line 23, in
+ assert False
+ AssertionError
+
+ No error raised
+
+The ``opentelemetry.sdk.error_handler`` module includes documentation that
+explains how this works in detail. We recommend reading it as well; what
+follows is only a short summary.
+
+In ``example.py`` we use ``GlobalErrorHandler`` as a context manager in several
+places, for example:
+
+
+.. code:: python
+
+ with GlobalErrorHandler():
+ {1: 2}[2]
+
+Running that code raises a ``KeyError`` exception.
+``GlobalErrorHandler`` "captures" that exception and passes it down to the
+registered error handlers. If one of them handles ``KeyError`` exceptions,
+it will handle it, as can be seen in the result of the execution of
+``example.py``:
+
+.. code::
+
+ ErrorHandler1 handling a KeyError
+ Traceback (most recent call last):
+ File "test.py", line 17, in
+ {1: 2}[2]
+ KeyError: 2
+
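+Conceptually, the context manager's exit hook walks the registered handler
+classes and matches them against the raised exception. Below is a simplified
+sketch of that dispatch idea only, not the SDK's actual code; the real logic
+lives in ``opentelemetry.sdk.error_handler`` and discovers handler classes via
+the ``opentelemetry_error_handler`` entry point (here replaced by the stand-in
+``registered_handler_classes`` list):
+
+.. code:: python
+
+    import logging
+
+    # Stand-in for the classes the SDK loads from the entry point.
+    registered_handler_classes: list = []
+    logger = logging.getLogger(__name__)
+
+
+    class SketchGlobalErrorHandler:
+        def __enter__(self):
+            return self
+
+        def __exit__(self, exc_type, exc_value, traceback):
+            if exc_value is None:
+                return None  # nothing was raised inside the with block
+            handled = False
+            for handler_class in registered_handler_classes:
+                # A handler class inherits from the exception types it
+                # handles, so a subclass check selects the matching handlers.
+                if issubclass(handler_class, exc_type):
+                    handler_class()._handle(exc_value)
+                    handled = True
+            if not handled:
+                # Fallback mirroring the default handler: log the exception.
+                logger.exception("Error handled by default error handler: ")
+            return True  # suppress the exception after it has been handled
+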
+There is no registered error handler that can handle ``AssertionError``
+exceptions, so these errors are handled by the default error handler,
+which just logs the exception to standard logging, as seen here:
+
+.. code::
+
+ Error handled by default error handler:
+ Traceback (most recent call last):
+ File "test.py", line 23, in
+ assert False
+ AssertionError
+
+When no exception is raised, the code inside the scope of
+``GlobalErrorHandler`` is executed normally:
+
+.. code::
+
+ No error raised
+
+Users can create Python packages that provide their own custom error handlers
+and install them in their virtual environments before running code that
+instantiates ``GlobalErrorHandler`` context managers. ``error_handler_0`` and
+``error_handler_1`` can be used as templates when creating such custom error
+handlers.
+
+For an error handler to be registered, it must be a class that inherits from
+``opentelemetry.sdk.error_handler.ErrorHandler`` and from at
+least one ``Exception``-type class. For example, this is an error handler that
+handles ``ZeroDivisionError`` exceptions:
+
+.. code:: python
+
+ from opentelemetry.sdk.error_handler import ErrorHandler
+ from logging import getLogger
+
+ logger = getLogger(__name__)
+
+
+ class ErrorHandler0(ErrorHandler, ZeroDivisionError):
+
+ def _handle(self, error: Exception, *args, **kwargs):
+
+ logger.exception("ErrorHandler0 handling a ZeroDivisionError")
+
+To register this error handler, declare an ``opentelemetry_error_handler``
+entry point in the packaging metadata of the error handler package (shown here
+in the ``pyproject.toml`` form used by the example packages):
+
+.. code::
+
+ [project.entry-points.opentelemetry_error_handler]
+ error_handler_0 = "error_handler_0:ErrorHandler0"
+
+This entry point should point to the error handler class, ``ErrorHandler0`` in
+this case.
diff --git a/docs/examples/error_handler/error_handler_0/README.rst b/docs/examples/error_handler/error_handler_0/README.rst
new file mode 100644
index 0000000000..0c86902e4c
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_0/README.rst
@@ -0,0 +1,4 @@
+Error Handler 0
+===============
+
+This is just an error handler for this example.
diff --git a/docs/examples/error_handler/error_handler_0/pyproject.toml b/docs/examples/error_handler/error_handler_0/pyproject.toml
new file mode 100644
index 0000000000..b148d0b13a
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_0/pyproject.toml
@@ -0,0 +1,43 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "error-handler-0"
+dynamic = ["version"]
+description = "This is just an error handler example package"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "opentelemetry-sdk ~= 1.3",
+]
+
+[project.entry-points.opentelemetry_error_handler]
+error_handler_0 = "error_handler_0:ErrorHandler0"
+
+[tool.hatch.version]
+path = "src/error_handler_0/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/docs/examples/error_handler/error_handler_0/src/error_handler_0/__init__.py b/docs/examples/error_handler/error_handler_0/src/error_handler_0/__init__.py
new file mode 100644
index 0000000000..8b42b7c70e
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_0/src/error_handler_0/__init__.py
@@ -0,0 +1,25 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+
+from opentelemetry.sdk.error_handler import ErrorHandler
+
+logger = getLogger(__name__)
+
+
+class ErrorHandler0(ErrorHandler, ZeroDivisionError):
+ def _handle(self, error: Exception, *args, **kwargs):
+
+ logger.exception("ErrorHandler0 handling a ZeroDivisionError")
diff --git a/docs/examples/error_handler/error_handler_0/src/error_handler_0/version.py b/docs/examples/error_handler/error_handler_0/src/error_handler_0/version.py
new file mode 100644
index 0000000000..c829b95757
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_0/src/error_handler_0/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.23.dev0"
diff --git a/docs/examples/error_handler/error_handler_1/README.rst b/docs/examples/error_handler/error_handler_1/README.rst
new file mode 100644
index 0000000000..029b95f5c0
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_1/README.rst
@@ -0,0 +1,4 @@
+Error Handler 1
+===============
+
+This is just an error handler for this example.
diff --git a/docs/examples/error_handler/error_handler_1/pyproject.toml b/docs/examples/error_handler/error_handler_1/pyproject.toml
new file mode 100644
index 0000000000..506b8e24ae
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_1/pyproject.toml
@@ -0,0 +1,43 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "error-handler-1"
+dynamic = ["version"]
+description = "This is just an error handler example package"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "opentelemetry-sdk ~= 1.3",
+]
+
+[project.entry-points.opentelemetry_error_handler]
+error_handler_1 = "error_handler_1:ErrorHandler1"
+
+[tool.hatch.version]
+path = "src/error_handler_1/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/docs/examples/error_handler/error_handler_1/src/error_handler_1/__init__.py b/docs/examples/error_handler/error_handler_1/src/error_handler_1/__init__.py
new file mode 100644
index 0000000000..cc63465617
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_1/src/error_handler_1/__init__.py
@@ -0,0 +1,30 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+
+from opentelemetry.sdk.error_handler import ErrorHandler
+
+logger = getLogger(__name__)
+
+
+# pylint: disable=too-many-ancestors
+class ErrorHandler1(ErrorHandler, IndexError, KeyError):
+ def _handle(self, error: Exception, *args, **kwargs):
+
+ if isinstance(error, IndexError):
+ logger.exception("ErrorHandler1 handling an IndexError")
+
+ elif isinstance(error, KeyError):
+ logger.exception("ErrorHandler1 handling a KeyError")
diff --git a/docs/examples/error_handler/error_handler_1/src/error_handler_1/version.py b/docs/examples/error_handler/error_handler_1/src/error_handler_1/version.py
new file mode 100644
index 0000000000..c829b95757
--- /dev/null
+++ b/docs/examples/error_handler/error_handler_1/src/error_handler_1/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.23.dev0"
diff --git a/docs/examples/error_handler/example.py b/docs/examples/error_handler/example.py
new file mode 100644
index 0000000000..372c39c16f
--- /dev/null
+++ b/docs/examples/error_handler/example.py
@@ -0,0 +1,29 @@
+from opentelemetry.sdk.error_handler import GlobalErrorHandler
+
+# ZeroDivisionError to be handled by ErrorHandler0
+with GlobalErrorHandler():
+ 1 / 0
+
+print()
+
+# IndexError to be handled by ErrorHandler1
+with GlobalErrorHandler():
+ [1][2]
+
+print()
+
+# KeyError to be handled by ErrorHandler1
+with GlobalErrorHandler():
+ {1: 2}[2]
+
+print()
+
+# AssertionError to be handled by DefaultErrorHandler
+with GlobalErrorHandler():
+ assert False
+
+print()
+
+# No error raised
+with GlobalErrorHandler():
+ print("No error raised")
diff --git a/docs/examples/fork-process-model/README.rst b/docs/examples/fork-process-model/README.rst
new file mode 100644
index 0000000000..2f33bcf500
--- /dev/null
+++ b/docs/examples/fork-process-model/README.rst
@@ -0,0 +1,66 @@
+Working With Fork Process Models
+================================
+
+The `BatchSpanProcessor` is not fork-safe and doesn't work well with application servers
+(Gunicorn, uWSGI) that are based on the pre-fork web server model. The `BatchSpanProcessor`
+spawns a background thread to export spans to the telemetry backend. During the fork, the child
+process inherits the lock held by the parent process, and a deadlock occurs. We can use fork hooks to
+work around this limitation of the span processor.
+
+Please see http://bugs.python.org/issue6721 for details on the problems with Python locks in a
+(multi)threaded context combined with fork.
+
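+For servers that fork without offering hooks like the ones below, the standard
+library's ``os.register_at_fork`` can serve the same purpose. The following is
+only a minimal sketch, not one of this repository's examples; it assumes the
+parent process configures no tracer provider of its own, and it uses a
+``ConsoleSpanExporter`` just to stay self-contained:
+
+.. code-block:: python
+
+    import os
+
+    from opentelemetry import trace
+    from opentelemetry.sdk.resources import Resource
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import (
+        BatchSpanProcessor,
+        ConsoleSpanExporter,
+    )
+
+
+    def init_tracing():
+        # Build the provider (and the exporter's worker thread) in the child,
+        # so no lock or thread state is inherited across the fork.
+        resource = Resource.create(attributes={"service.name": "api-service"})
+        trace.set_tracer_provider(TracerProvider(resource=resource))
+        trace.get_tracer_provider().add_span_processor(
+            BatchSpanProcessor(ConsoleSpanExporter())
+        )
+
+
+    # Registered in the parent before forking; runs once in every child.
+    os.register_at_fork(after_in_child=init_tracing)
+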
+Gunicorn post_fork hook
+-----------------------
+
+.. code-block:: python
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+ from opentelemetry.sdk.resources import Resource
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+
+ def post_fork(server, worker):
+ server.log.info("Worker spawned (pid: %s)", worker.pid)
+
+ resource = Resource.create(attributes={
+ "service.name": "api-service"
+ })
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ span_processor = BatchSpanProcessor(
+ OTLPSpanExporter(endpoint="http://localhost:4317")
+ )
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+
+uWSGI postfork decorator
+------------------------
+
+.. code-block:: python
+
+ from uwsgidecorators import postfork
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+ from opentelemetry.sdk.resources import Resource
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+
+ @postfork
+ def init_tracing():
+ resource = Resource.create(attributes={
+ "service.name": "api-service"
+ })
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ span_processor = BatchSpanProcessor(
+ OTLPSpanExporter(endpoint="http://localhost:4317")
+ )
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+
+The source code for the Flask example applications is available :scm_web:`here <docs/examples/fork-process-model/>`.
diff --git a/docs/examples/fork-process-model/flask-gunicorn/README.rst b/docs/examples/fork-process-model/flask-gunicorn/README.rst
new file mode 100644
index 0000000000..6ca9790dcd
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-gunicorn/README.rst
@@ -0,0 +1,11 @@
+Installation
+------------
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+Run application
+---------------
+.. code-block:: sh
+
+ gunicorn app -c gunicorn.conf.py
diff --git a/docs/examples/fork-process-model/flask-gunicorn/app.py b/docs/examples/fork-process-model/flask-gunicorn/app.py
new file mode 100644
index 0000000000..008e1f04d5
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-gunicorn/app.py
@@ -0,0 +1,59 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import flask
+from flask import request
+
+from opentelemetry import trace
+from opentelemetry.instrumentation.flask import FlaskInstrumentor
+
+application = flask.Flask(__name__)
+
+FlaskInstrumentor().instrument_app(application)
+
+tracer = trace.get_tracer(__name__)
+
+
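+# Deliberately inefficient recursive Fibonacci, used to produce a noticeably
+# slower span than the iterative fib_fast below.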
+def fib_slow(n):
+ if n <= 1:
+ return n
+ return fib_slow(n - 1) + fib_slow(n - 2)
+
+
+def fib_fast(n):
+ nth_fib = [0] * (n + 2)
+ nth_fib[1] = 1
+ for i in range(2, n + 1):
+ nth_fib[i] = nth_fib[i - 1] + nth_fib[i - 2]
+ return nth_fib[n]
+
+
+@application.route("/fibonacci")
+def fibonacci():
+ n = int(request.args.get("n", 1))
+ with tracer.start_as_current_span("root"):
+ with tracer.start_as_current_span("fib_slow") as slow_span:
+ ans = fib_slow(n)
+ slow_span.set_attribute("n", n)
+ slow_span.set_attribute("nth_fibonacci", ans)
+ with tracer.start_as_current_span("fib_fast") as fast_span:
+ ans = fib_fast(n)
+ fast_span.set_attribute("n", n)
+ fast_span.set_attribute("nth_fibonacci", ans)
+
+ return f"F({n}) is: ({ans})"
+
+
+if __name__ == "__main__":
+ application.run()
diff --git a/docs/examples/fork-process-model/flask-gunicorn/gunicorn.conf.py b/docs/examples/fork-process-model/flask-gunicorn/gunicorn.conf.py
new file mode 100644
index 0000000000..34b4591596
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-gunicorn/gunicorn.conf.py
@@ -0,0 +1,79 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import metrics, trace
+from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
+ OTLPMetricExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+bind = "127.0.0.1:8000"
+
+# Sample Worker processes
+workers = 4
+worker_class = "sync"
+worker_connections = 1000
+timeout = 30
+keepalive = 2
+
+# Sample logging
+errorlog = "-"
+loglevel = "info"
+accesslog = "-"
+access_log_format = (
+ '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
+)
+
+
+def post_fork(server, worker):
+ server.log.info("Worker spawned (pid: %s)", worker.pid)
+
+ resource = Resource.create(
+ attributes={
+ "service.name": "api-service",
+ # If workers are not distinguished within attributes, traces and
+ # metrics exported from each worker will be indistinguishable. While
+ # not necessarily an issue for traces, it is confusing for almost
+ # all metric types. A built-in way to identify a worker is by PID
+ # but this may lead to high label cardinality. An alternative
+ # workaround and additional discussion are available here:
+ # https://github.com/benoitc/gunicorn/issues/1352
+ "worker": worker.pid,
+ }
+ )
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ # This example uses an insecure connection for simplicity. Please see the
+ # OTLP Exporter documentation for other options.
+ span_processor = BatchSpanProcessor(
+ OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
+ )
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ reader = PeriodicExportingMetricReader(
+ OTLPMetricExporter(endpoint="http://localhost:4317")
+ )
+ metrics.set_meter_provider(
+ MeterProvider(
+ resource=resource,
+ metric_readers=[reader],
+ )
+ )
diff --git a/docs/examples/fork-process-model/flask-gunicorn/requirements.txt b/docs/examples/fork-process-model/flask-gunicorn/requirements.txt
new file mode 100644
index 0000000000..8f7a7bbf31
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-gunicorn/requirements.txt
@@ -0,0 +1,20 @@
+click==7.1.2
+Flask==2.3.2
+googleapis-common-protos==1.52.0
+grpcio==1.56.0
+gunicorn==20.0.4
+itsdangerous==1.1.0
+Jinja2==2.11.3
+MarkupSafe==1.1.1
+opentelemetry-api==1.20.0
+opentelemetry-exporter-otlp==1.20.0
+opentelemetry-instrumentation==0.41b0
+opentelemetry-instrumentation-flask==0.41b0
+opentelemetry-instrumentation-wsgi==0.41b0
+opentelemetry-sdk==1.20.0
+protobuf==3.18.3
+six==1.15.0
+thrift==0.13.0
+uWSGI==2.0.22
+Werkzeug==2.2.3
+wrapt==1.12.1
diff --git a/docs/examples/fork-process-model/flask-uwsgi/README.rst b/docs/examples/fork-process-model/flask-uwsgi/README.rst
new file mode 100644
index 0000000000..d9310e03f4
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-uwsgi/README.rst
@@ -0,0 +1,12 @@
+Installation
+------------
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+Run application
+---------------
+
+.. code-block:: sh
+
+ uwsgi --http :8000 --wsgi-file app.py --callable application --master --enable-threads
diff --git a/docs/examples/fork-process-model/flask-uwsgi/app.py b/docs/examples/fork-process-model/flask-uwsgi/app.py
new file mode 100644
index 0000000000..1191bcc30e
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-uwsgi/app.py
@@ -0,0 +1,79 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import flask
+from flask import request
+from uwsgidecorators import postfork
+
+from opentelemetry import trace
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.instrumentation.flask import FlaskInstrumentor
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+application = flask.Flask(__name__)
+
+FlaskInstrumentor().instrument_app(application)
+
+tracer = trace.get_tracer(__name__)
+
+
+@postfork
+def init_tracing():
+ resource = Resource.create(attributes={"service.name": "api-service"})
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ # This example uses an insecure connection for simplicity. Please see the
+ # OTLP Exporter documentation for other options.
+ span_processor = BatchSpanProcessor(
+ OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
+ )
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+
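+# Deliberately inefficient recursive Fibonacci, used to produce a noticeably
+# slower span than the iterative fib_fast below.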
+def fib_slow(n):
+ if n <= 1:
+ return n
+ return fib_slow(n - 1) + fib_slow(n - 2)
+
+
+def fib_fast(n):
+ nth_fib = [0] * (n + 2)
+ nth_fib[1] = 1
+ for i in range(2, n + 1):
+ nth_fib[i] = nth_fib[i - 1] + nth_fib[i - 2]
+ return nth_fib[n]
+
+
+@application.route("/fibonacci")
+def fibonacci():
+ n = int(request.args.get("n", 1))
+ with tracer.start_as_current_span("root"):
+ with tracer.start_as_current_span("fib_slow") as slow_span:
+ ans = fib_slow(n)
+ slow_span.set_attribute("n", n)
+ slow_span.set_attribute("nth_fibonacci", ans)
+ with tracer.start_as_current_span("fib_fast") as fast_span:
+ ans = fib_fast(n)
+ fast_span.set_attribute("n", n)
+ fast_span.set_attribute("nth_fibonacci", ans)
+
+ return f"F({n}) is: ({ans})"
+
+
+if __name__ == "__main__":
+ application.run()
diff --git a/docs/examples/fork-process-model/flask-uwsgi/requirements.txt b/docs/examples/fork-process-model/flask-uwsgi/requirements.txt
new file mode 100644
index 0000000000..8f7a7bbf31
--- /dev/null
+++ b/docs/examples/fork-process-model/flask-uwsgi/requirements.txt
@@ -0,0 +1,20 @@
+click==7.1.2
+Flask==2.3.2
+googleapis-common-protos==1.52.0
+grpcio==1.56.0
+gunicorn==20.0.4
+itsdangerous==1.1.0
+Jinja2==2.11.3
+MarkupSafe==1.1.1
+opentelemetry-api==1.20.0
+opentelemetry-exporter-otlp==1.20.0
+opentelemetry-instrumentation==0.41b0
+opentelemetry-instrumentation-flask==0.41b0
+opentelemetry-instrumentation-wsgi==0.41b0
+opentelemetry-sdk==1.20.0
+protobuf==3.18.3
+six==1.15.0
+thrift==0.13.0
+uWSGI==2.0.22
+Werkzeug==2.2.3
+wrapt==1.12.1
diff --git a/docs/examples/index.rst b/docs/examples/index.rst
new file mode 100644
index 0000000000..92fc679b70
--- /dev/null
+++ b/docs/examples/index.rst
@@ -0,0 +1,10 @@
+:orphan:
+
+Examples
+========
+
+.. toctree::
+ :maxdepth: 1
+ :glob:
+
+ **
diff --git a/docs/examples/logs/README.rst b/docs/examples/logs/README.rst
new file mode 100644
index 0000000000..3821466e32
--- /dev/null
+++ b/docs/examples/logs/README.rst
@@ -0,0 +1,81 @@
+OpenTelemetry Logs SDK
+======================
+
+.. warning::
+ OpenTelemetry Python logs are in an experimental state. The APIs within
+ :mod:`opentelemetry.sdk._logs` are subject to change in minor/patch releases and make no
+ backward compatibility guarantees at this time.
+
+Start the Collector locally to see data being exported. Write the following file:
+
+.. code-block:: yaml
+
+ # otel-collector-config.yaml
+ receivers:
+ otlp:
+ protocols:
+ grpc:
+
+ processors:
+ batch:
+
+ exporters:
+ logging:
+
+ service:
+ pipelines:
+ logs:
+ receivers: [otlp]
+ exporters: [logging]
+
+Then start the Docker container:
+
+.. code-block:: sh
+
+ docker run \
+ -p 4317:4317 \
+ -v $(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
+ otel/opentelemetry-collector-contrib:latest
+
+.. code-block:: sh
+
+ $ python example.py
+
+The resulting logs will appear in the output from the collector and look similar to this:
+
+.. code-block:: sh
+
+ Resource SchemaURL:
+ Resource labels:
+ -> telemetry.sdk.language: STRING(python)
+ -> telemetry.sdk.name: STRING(opentelemetry)
+ -> telemetry.sdk.version: STRING(1.8.0)
+ -> service.name: STRING(shoppingcart)
+ -> service.instance.id: STRING(instance-12)
+ InstrumentationLibraryLogs #0
+ InstrumentationLibraryMetrics SchemaURL:
+ InstrumentationLibrary __main__ 0.1
+ LogRecord #0
+ Timestamp: 2022-01-13 20:37:03.998733056 +0000 UTC
+ Severity: WARNING
+ ShortName:
+ Body: Jail zesty vixen who grabbed pay from quack.
+ Trace ID:
+ Span ID:
+ Flags: 0
+ LogRecord #1
+ Timestamp: 2022-01-13 20:37:04.082757888 +0000 UTC
+ Severity: ERROR
+ ShortName:
+ Body: The five boxing wizards jump quickly.
+ Trace ID:
+ Span ID:
+ Flags: 0
+ LogRecord #2
+ Timestamp: 2022-01-13 20:37:04.082979072 +0000 UTC
+ Severity: ERROR
+ ShortName:
+ Body: Hyderabad, we have a major problem.
+ Trace ID: 63491217958f126f727622e41d4460f3
+ Span ID: d90c57d6e1ca4f6c
+ Flags: 1
diff --git a/docs/examples/logs/example.py b/docs/examples/logs/example.py
new file mode 100644
index 0000000000..2505aacea7
--- /dev/null
+++ b/docs/examples/logs/example.py
@@ -0,0 +1,58 @@
+import logging
+
+from opentelemetry import trace
+from opentelemetry._logs import set_logger_provider
+from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
+ OTLPLogExporter,
+)
+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
+from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+logger_provider = LoggerProvider(
+ resource=Resource.create(
+ {
+ "service.name": "shoppingcart",
+ "service.instance.id": "instance-12",
+ }
+ ),
+)
+set_logger_provider(logger_provider)
+
+exporter = OTLPLogExporter(insecure=True)
+logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
+handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
+
+# Attach OTLP handler to root logger
+logging.getLogger().addHandler(handler)
+
+# Log directly
+logging.info("Jackdaws love my big sphinx of quartz.")
+
+# Create different namespaced loggers
+logger1 = logging.getLogger("myapp.area1")
+logger2 = logging.getLogger("myapp.area2")
+
+logger1.debug("Quick zephyrs blow, vexing daft Jim.")
+logger1.info("How quickly daft jumping zebras vex.")
+logger2.warning("Jail zesty vixen who grabbed pay from quack.")
+logger2.error("The five boxing wizards jump quickly.")
+
+
+# Trace context correlation
+tracer = trace.get_tracer(__name__)
+with tracer.start_as_current_span("foo"):
+ # Do something
+ logger2.error("Hyderabad, we have a major problem.")
+
+logger_provider.shutdown()
diff --git a/docs/examples/logs/otel-collector-config.yaml b/docs/examples/logs/otel-collector-config.yaml
new file mode 100644
index 0000000000..6c87a2e847
--- /dev/null
+++ b/docs/examples/logs/otel-collector-config.yaml
@@ -0,0 +1,17 @@
+receivers:
+ otlp:
+ protocols:
+ grpc:
+
+exporters:
+ logging:
+ loglevel: debug
+
+processors:
+ batch:
+
+service:
+ pipelines:
+ logs:
+ receivers: [otlp]
+ exporters: [logging]
\ No newline at end of file
diff --git a/docs/examples/metrics/instruments/README.rst b/docs/examples/metrics/instruments/README.rst
new file mode 100644
index 0000000000..50e80a945e
--- /dev/null
+++ b/docs/examples/metrics/instruments/README.rst
@@ -0,0 +1,43 @@
+OpenTelemetry Metrics SDK
+=========================
+
+Start the Collector locally to see data being exported. Write the following file:
+
+.. code-block:: yaml
+
+ # otel-collector-config.yaml
+ receivers:
+ otlp:
+ protocols:
+ grpc:
+
+ exporters:
+ logging:
+
+ processors:
+ batch:
+
+ service:
+ pipelines:
+ metrics:
+ receivers: [otlp]
+ exporters: [logging]
+
+Then start the Docker container:
+
+.. code-block:: sh
+
+ docker run \
+ -p 4317:4317 \
+ -v $(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
+ otel/opentelemetry-collector-contrib:latest
+
+.. code-block:: sh
+
+ $ python example.py
+
+The resulting metrics will appear in the output from the collector and look similar to this:
+
+.. code-block:: sh
+
+TODO
diff --git a/docs/examples/metrics/instruments/example.py b/docs/examples/metrics/instruments/example.py
new file mode 100644
index 0000000000..fde20308a2
--- /dev/null
+++ b/docs/examples/metrics/instruments/example.py
@@ -0,0 +1,62 @@
+from typing import Iterable
+
+from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
+ OTLPMetricExporter,
+)
+from opentelemetry.metrics import (
+ CallbackOptions,
+ Observation,
+ get_meter_provider,
+ set_meter_provider,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+exporter = OTLPMetricExporter(insecure=True)
+reader = PeriodicExportingMetricReader(exporter)
+provider = MeterProvider(metric_readers=[reader])
+set_meter_provider(provider)
+
+
+def observable_counter_func(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(1, {})
+
+
+def observable_up_down_counter_func(
+ options: CallbackOptions,
+) -> Iterable[Observation]:
+ yield Observation(-10, {})
+
+
+def observable_gauge_func(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(9, {})
+
+
+meter = get_meter_provider().get_meter("getting-started", "0.1.2")
+
+# Counter
+counter = meter.create_counter("counter")
+counter.add(1)
+
+# Async Counter
+observable_counter = meter.create_observable_counter(
+ "observable_counter",
+ [observable_counter_func],
+)
+
+# UpDownCounter
+updown_counter = meter.create_up_down_counter("updown_counter")
+updown_counter.add(1)
+updown_counter.add(-5)
+
+# Async UpDownCounter
+observable_updown_counter = meter.create_observable_up_down_counter(
+ "observable_updown_counter", [observable_up_down_counter_func]
+)
+
+# Histogram
+histogram = meter.create_histogram("histogram")
+histogram.record(99.9)
+
+# Async Gauge
+gauge = meter.create_observable_gauge("gauge", [observable_gauge_func])
diff --git a/docs/examples/metrics/instruments/otel-collector-config.yaml b/docs/examples/metrics/instruments/otel-collector-config.yaml
new file mode 100644
index 0000000000..3ae12695e6
--- /dev/null
+++ b/docs/examples/metrics/instruments/otel-collector-config.yaml
@@ -0,0 +1,16 @@
+receivers:
+ otlp:
+ protocols:
+ grpc:
+
+exporters:
+ logging:
+
+processors:
+ batch:
+
+service:
+ pipelines:
+ metrics:
+ receivers: [otlp]
+ exporters: [logging]
diff --git a/docs/examples/metrics/reader/README.rst b/docs/examples/metrics/reader/README.rst
new file mode 100644
index 0000000000..1751e4bd81
--- /dev/null
+++ b/docs/examples/metrics/reader/README.rst
@@ -0,0 +1,34 @@
+MetricReader configuration scenarios
+====================================
+
+These examples show how to customize the metrics that are output by the SDK using configuration on metric readers. There are two examples:
+
+* preferred_aggregation.py: Shows how to configure the preferred aggregation for metric instrument types.
+* preferred_temporality.py: Shows how to configure the preferred temporality for metric instrument types.
+
+The source files of these examples are available :scm_web:`here <docs/examples/metrics/reader/>`.
+
+
+Installation
+------------
+
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+Run the Example
+---------------
+
+.. code-block:: sh
+
+ python <example_name>.py
+
+The output will be shown in the console.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../../api/metrics`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/metrics/reader/preferred_aggregation.py b/docs/examples/metrics/reader/preferred_aggregation.py
new file mode 100644
index 0000000000..a332840d3f
--- /dev/null
+++ b/docs/examples/metrics/reader/preferred_aggregation.py
@@ -0,0 +1,52 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import LastValueAggregation
+
+aggregation_last_value = {Counter: LastValueAggregation()}
+
+# Use console exporter for the example
+exporter = ConsoleMetricExporter(
+ preferred_aggregation=aggregation_last_value,
+)
+
+# The PeriodicExportingMetricReader takes the preferred aggregation
+# from the passed-in exporter
+reader = PeriodicExportingMetricReader(
+ exporter,
+ export_interval_millis=5_000,
+)
+
+provider = MeterProvider(metric_readers=[reader])
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("preferred-aggregation", "0.1.2")
+
+counter = meter.create_counter("my-counter")
+
+# A counter would normally have an aggregation type of SumAggregation,
+# where its value is determined by a cumulative sum.
+# In this example, the counter is configured with LastValueAggregation
+# instead, which only holds the most recent value.
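+# For instance, each export below should show only the most recently added
+# value (the current x), not the running total of all adds.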
+for x in range(10):
+ counter.add(x)
+ time.sleep(2.0)
diff --git a/docs/examples/metrics/reader/preferred_temporality.py b/docs/examples/metrics/reader/preferred_temporality.py
new file mode 100644
index 0000000000..910c3fc953
--- /dev/null
+++ b/docs/examples/metrics/reader/preferred_temporality.py
@@ -0,0 +1,68 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+
+temporality_cumulative = {Counter: AggregationTemporality.CUMULATIVE}
+temporality_delta = {Counter: AggregationTemporality.DELTA}
+
+# Use console exporters for the example
+
+# The metrics that are exported using this exporter will represent a cumulative value
+exporter = ConsoleMetricExporter(
+ preferred_temporality=temporality_cumulative,
+)
+
+# The metrics that are exported using this exporter will represent a delta value
+exporter2 = ConsoleMetricExporter(
+ preferred_temporality=temporality_delta,
+)
+
+# The PeriodicExportingMetricReader takes the preferred temporality
+# from the passed-in exporter
+reader = PeriodicExportingMetricReader(
+ exporter,
+ export_interval_millis=5_000,
+)
+
+# The PeriodicExportingMetricReader takes the preferred temporality
+# from the passed-in exporter
+reader2 = PeriodicExportingMetricReader(
+ exporter2,
+ export_interval_millis=5_000,
+)
+
+provider = MeterProvider(metric_readers=[reader, reader2])
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("preferred-temporality", "0.1.2")
+
+counter = meter.create_counter("my-counter")
+
+# Two metrics are expected to be printed to the console per export interval.
+# The metric originating from the metric exporter with a preferred temporality
+# of cumulative will keep a running sum of all values added.
+# The metric originating from the metric exporter with a preferred temporality
+# of delta will have the sum value reset each export interval.
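+# Roughly: the cumulative exporter should print 5, 5, 25, ... across export
+# intervals, while the delta exporter should print 5, 0, 20, ...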
+counter.add(5)
+time.sleep(10)
+counter.add(20)
diff --git a/docs/examples/metrics/reader/requirements.txt b/docs/examples/metrics/reader/requirements.txt
new file mode 100644
index 0000000000..2ccffaf392
--- /dev/null
+++ b/docs/examples/metrics/reader/requirements.txt
@@ -0,0 +1,6 @@
+Deprecated==1.2.13
+opentelemetry-api==1.15.0
+opentelemetry-sdk==1.15.0
+opentelemetry-semantic-conventions==0.36b0
+typing_extensions==4.3.0
+wrapt==1.14.1
diff --git a/docs/examples/metrics/views/README.rst b/docs/examples/metrics/views/README.rst
new file mode 100644
index 0000000000..cc9afd97d0
--- /dev/null
+++ b/docs/examples/metrics/views/README.rst
@@ -0,0 +1,36 @@
+View common scenarios
+=====================
+
+These examples show how to customize the metrics that are output by the SDK using Views. There are multiple examples:
+
+* change_aggregation.py: Shows how to change the default aggregation for an instrument.
+* change_name.py: Shows how to change the name of a metric.
+* limit_num_of_attrs.py: Shows how to limit the number of attributes that are output for a metric.
+* drop_metrics_from_instrument.py: Shows how to drop measurements from an instrument.
+
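+All of them follow the same pattern: construct a ``View`` that matches an
+instrument and pass it to the ``MeterProvider``. A minimal sketch (names
+taken from change_name.py below):
+
+.. code-block:: python
+
+    from opentelemetry.sdk.metrics import MeterProvider
+    from opentelemetry.sdk.metrics.export import (
+        ConsoleMetricExporter,
+        PeriodicExportingMetricReader,
+    )
+    from opentelemetry.sdk.metrics.view import View
+
+    # Rename the stream produced by the `my.counter` instrument
+    rename_view = View(instrument_name="my.counter", name="my.counter.total")
+    reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
+    provider = MeterProvider(metric_readers=[reader], views=[rename_view])
+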
+The source files of these examples are available :scm_web:`here <docs/examples/metrics/views/>`.
+
+
+Installation
+------------
+
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+Run the Example
+---------------
+
+.. code-block:: sh
+
+ python <example_name>.py
+
+The output will be shown in the console.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../../api/metrics`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/metrics/views/change_aggregation.py b/docs/examples/metrics/views/change_aggregation.py
new file mode 100644
index 0000000000..5dad07e64b
--- /dev/null
+++ b/docs/examples/metrics/views/change_aggregation.py
@@ -0,0 +1,53 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import SumAggregation, View
+
+# Create a view matching the histogram instrument name `http.client.request.latency`
+# and configure the `SumAggregation` for the result metrics stream
+hist_to_sum_view = View(
+ instrument_name="http.client.request.latency", aggregation=SumAggregation()
+)
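+# With this view in place, the latencies recorded below are exported as a
+# single running sum rather than as histogram buckets.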
+
+# Use console exporter for the example
+exporter = ConsoleMetricExporter()
+
+# Create a metric reader with stdout exporter
+reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1_000)
+provider = MeterProvider(
+ metric_readers=[
+ reader,
+ ],
+ views=[
+ hist_to_sum_view,
+ ],
+)
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("view-change-aggregation", "0.1.2")
+
+histogram = meter.create_histogram("http.client.request.latency")
+
+while True:
+ histogram.record(99.9)
+ time.sleep(random.random())
diff --git a/docs/examples/metrics/views/change_name.py b/docs/examples/metrics/views/change_name.py
new file mode 100644
index 0000000000..c70f7852a2
--- /dev/null
+++ b/docs/examples/metrics/views/change_name.py
@@ -0,0 +1,55 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import View
+
+# Create a view matching the counter instrument `my.counter`
+# and configure the new name `my.counter.total` for the result metrics stream
+change_metric_name_view = View(
+ instrument_type=Counter,
+ instrument_name="my.counter",
+ name="my.counter.total",
+)
+
+# Use console exporter for the example
+exporter = ConsoleMetricExporter()
+
+# Create a metric reader with stdout exporter
+reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1_000)
+provider = MeterProvider(
+ metric_readers=[
+ reader,
+ ],
+ views=[
+ change_metric_name_view,
+ ],
+)
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("view-name-change", "0.1.2")
+
+my_counter = meter.create_counter("my.counter")
+
+while True:
+ my_counter.add(random.randint(1, 10))
+ time.sleep(random.random())
diff --git a/docs/examples/metrics/views/disable_default_aggregation.py b/docs/examples/metrics/views/disable_default_aggregation.py
new file mode 100644
index 0000000000..387bfc465d
--- /dev/null
+++ b/docs/examples/metrics/views/disable_default_aggregation.py
@@ -0,0 +1,57 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import (
+ DropAggregation,
+ SumAggregation,
+ View,
+)
+
+# Create a view that matches every instrument and drops its measurements,
+# disabling the default aggregation.
+disable_default_aggregation = View(
+ instrument_name="*", aggregation=DropAggregation()
+)
+
+exporter = ConsoleMetricExporter()
+
+reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1_000)
+provider = MeterProvider(
+ metric_readers=[
+ reader,
+ ],
+ views=[
+ disable_default_aggregation,
+ View(instrument_name="mycounter", aggregation=SumAggregation()),
+ ],
+)
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter(
+ "view-disable-default-aggregation", "0.1.2"
+)
+# The SumAggregation view above re-enables aggregation specifically for this counter.
+my_counter = meter.create_counter("mycounter")
+
+while True:
+ my_counter.add(random.randint(1, 10))
+ time.sleep(random.random())
diff --git a/docs/examples/metrics/views/drop_metrics_from_instrument.py b/docs/examples/metrics/views/drop_metrics_from_instrument.py
new file mode 100644
index 0000000000..c8ca1008e5
--- /dev/null
+++ b/docs/examples/metrics/views/drop_metrics_from_instrument.py
@@ -0,0 +1,53 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+
+from opentelemetry.metrics import get_meter_provider, set_meter_provider
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import DropAggregation, View
+
+# Create a view matching the counter instrument `my.counter`
+# and configure the view to drop the aggregation.
+drop_aggregation_view = View(
+ instrument_type=Counter,
+ instrument_name="my.counter",
+ aggregation=DropAggregation(),
+)
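+# With this view in place, measurements from `my.counter` are discarded and
+# no metric stream is exported for the instrument.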
+
+exporter = ConsoleMetricExporter()
+
+reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1_000)
+provider = MeterProvider(
+ metric_readers=[
+ reader,
+ ],
+ views=[
+ drop_aggregation_view,
+ ],
+)
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("view-drop-aggregation", "0.1.2")
+
+my_counter = meter.create_counter("my.counter")
+
+while True:
+ my_counter.add(random.randint(1, 10))
+ time.sleep(random.random())
diff --git a/docs/examples/metrics/views/limit_num_of_attrs.py b/docs/examples/metrics/views/limit_num_of_attrs.py
new file mode 100644
index 0000000000..d9f0e9484c
--- /dev/null
+++ b/docs/examples/metrics/views/limit_num_of_attrs.py
@@ -0,0 +1,71 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+from typing import Iterable
+
+from opentelemetry.metrics import (
+ CallbackOptions,
+ Observation,
+ get_meter_provider,
+ set_meter_provider,
+)
+from opentelemetry.sdk.metrics import MeterProvider, ObservableGauge
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import View
+
+# Create a view matching the observable gauge instrument `observable_gauge`
+# and configure the attributes in the result metric stream
+# to contain only the attributes with keys with `k_3` and `k_5`
+view_with_attributes_limit = View(
+ instrument_type=ObservableGauge,
+ instrument_name="observable_gauge",
+ attribute_keys={"k_3", "k_5"},
+)
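+# Any other attribute keys yielded by the callback below are dropped from
+# the exported stream, which keeps cardinality bounded.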
+
+exporter = ConsoleMetricExporter()
+
+reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1_000)
+provider = MeterProvider(
+ metric_readers=[
+ reader,
+ ],
+ views=[
+ view_with_attributes_limit,
+ ],
+)
+set_meter_provider(provider)
+
+meter = get_meter_provider().get_meter("reduce-cardinality-with-view", "0.1.2")
+
+
+def observable_gauge_func(options: CallbackOptions) -> Iterable[Observation]:
+ attrs = {}
+ for i in range(random.randint(1, 100)):
+ attrs[f"k_{i}"] = f"v_{i}"
+ yield Observation(1, attrs)
+
+
+# Async gauge
+observable_gauge = meter.create_observable_gauge(
+ "observable_gauge",
+ [observable_gauge_func],
+)
+
+while True:
+ time.sleep(1)
diff --git a/docs/examples/metrics/views/requirements.txt b/docs/examples/metrics/views/requirements.txt
new file mode 100644
index 0000000000..be61271135
--- /dev/null
+++ b/docs/examples/metrics/views/requirements.txt
@@ -0,0 +1,6 @@
+Deprecated==1.2.13
+opentelemetry-api==1.12.0
+opentelemetry-sdk==1.12.0
+opentelemetry-semantic-conventions==0.33b0
+typing_extensions==4.3.0
+wrapt==1.14.1
diff --git a/docs/examples/opencensus-exporter-tracer/README.rst b/docs/examples/opencensus-exporter-tracer/README.rst
new file mode 100644
index 0000000000..3047987c2c
--- /dev/null
+++ b/docs/examples/opencensus-exporter-tracer/README.rst
@@ -0,0 +1,51 @@
+OpenCensus Exporter
+===================
+
+This example shows how to use the OpenCensus Exporter to export traces to the
+OpenTelemetry collector.
+
+The source files of this example are available :scm_web:`here <docs/examples/opencensus-exporter-tracer/>`.
+
+Installation
+------------
+
+.. code-block:: sh
+
+ pip install opentelemetry-api
+ pip install opentelemetry-sdk
+ pip install opentelemetry-exporter-opencensus
+
+Run the Example
+---------------
+
+Before running the example, it's necessary to run the OpenTelemetry collector
+and Jaeger. The :scm_web:`docker <docs/examples/opencensus-exporter-tracer/docker/>`
+folder contains a ``docker-compose`` template with the configuration of those
+services.
+
+.. code-block:: sh
+
+ pip install docker-compose
+ cd docker
+ docker-compose up
+
+
+Now, the example can be executed:
+
+.. code-block:: sh
+
+ python collector.py
+
+
+The traces are available in the Jaeger UI at http://localhost:16686/.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- `OpenTelemetry Collector`_
+- :doc:`../../api/trace`
+- :doc:`../../exporter/opencensus/opencensus`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+.. _OpenTelemetry Collector: https://github.com/open-telemetry/opentelemetry-collector
diff --git a/docs/examples/opencensus-exporter-tracer/collector.py b/docs/examples/opencensus-exporter-tracer/collector.py
new file mode 100644
index 0000000000..cd33c89617
--- /dev/null
+++ b/docs/examples/opencensus-exporter-tracer/collector.py
@@ -0,0 +1,32 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.exporter.opencensus.trace_exporter import (
+ OpenCensusSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+exporter = OpenCensusSpanExporter(endpoint="localhost:55678")
+
+trace.set_tracer_provider(TracerProvider())
+tracer = trace.get_tracer(__name__)
+span_processor = BatchSpanProcessor(exporter)
+
+trace.get_tracer_provider().add_span_processor(span_processor)
+with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ print("Hello world from OpenTelemetry Python!")
diff --git a/docs/examples/opencensus-exporter-tracer/docker/collector-config.yaml b/docs/examples/opencensus-exporter-tracer/docker/collector-config.yaml
new file mode 100644
index 0000000000..bcf59c5802
--- /dev/null
+++ b/docs/examples/opencensus-exporter-tracer/docker/collector-config.yaml
@@ -0,0 +1,19 @@
+receivers:
+ opencensus:
+ endpoint: "0.0.0.0:55678"
+
+exporters:
+ jaeger_grpc:
+ endpoint: jaeger-all-in-one:14250
+ logging: {}
+
+processors:
+ batch:
+ queued_retry:
+
+service:
+ pipelines:
+ traces:
+ receivers: [opencensus]
+ exporters: [jaeger_grpc, logging]
+ processors: [batch, queued_retry]
diff --git a/docs/examples/opencensus-exporter-tracer/docker/docker-compose.yaml b/docs/examples/opencensus-exporter-tracer/docker/docker-compose.yaml
new file mode 100644
index 0000000000..71d7ccd5a1
--- /dev/null
+++ b/docs/examples/opencensus-exporter-tracer/docker/docker-compose.yaml
@@ -0,0 +1,20 @@
+version: "2"
+services:
+
+ # Collector
+ collector:
+ image: omnition/opentelemetry-collector-contrib:latest
+ command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
+ volumes:
+ - ./collector-config.yaml:/conf/collector-config.yaml
+ ports:
+ - "55678:55678"
+
+ jaeger-all-in-one:
+ image: jaegertracing/all-in-one:latest
+ ports:
+ - "16686:16686"
+ - "6831:6831/udp"
+ - "6832:6832/udp"
+ - "14268"
+ - "14250"
diff --git a/docs/examples/opencensus-shim/.gitignore b/docs/examples/opencensus-shim/.gitignore
new file mode 100644
index 0000000000..300f4e1546
--- /dev/null
+++ b/docs/examples/opencensus-shim/.gitignore
@@ -0,0 +1 @@
+example.db
diff --git a/docs/examples/opencensus-shim/README.rst b/docs/examples/opencensus-shim/README.rst
new file mode 100644
index 0000000000..9c24440172
--- /dev/null
+++ b/docs/examples/opencensus-shim/README.rst
@@ -0,0 +1,93 @@
+OpenCensus Shim
+================
+
+This example shows how to use the :doc:`opentelemetry-opencensus-shim
+package <../../shim/opencensus_shim/opencensus_shim>`
+to interact with libraries instrumented with
+`opencensus-python <https://github.com/census-instrumentation/opencensus-python>`_.
+
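+At its core, the shim is a single call made once the OpenTelemetry tracer
+provider is configured; a minimal sketch (mirroring ``app.py`` below):
+
+.. code-block:: python
+
+    from opentelemetry import trace
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.shim.opencensus import install_shim
+
+    trace.set_tracer_provider(TracerProvider())
+    # From here on, OpenCensus spans are bridged into OpenTelemetry
+    install_shim()
+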
+
+The source files required to run this example are available :scm_web:`here <docs/examples/opencensus-shim/>`.
+
+Installation
+------------
+
+Jaeger
+******
+
+Start Jaeger
+
+.. code-block:: sh
+
+ docker run --rm \
+ -p 6831:6831/udp \
+ -p 6832:6832/udp \
+ -p 16686:16686 \
+ jaegertracing/all-in-one:1.13 \
+ --log-level=debug
+
+Python Dependencies
+*******************
+
+Install the Python dependencies in :scm_raw_web:`requirements.txt <docs/examples/opencensus-shim/requirements.txt>`
+
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+
+Alternatively, you can install the Python dependencies separately:
+
+.. code-block:: sh
+
+ pip install \
+ opentelemetry-api \
+ opentelemetry-sdk \
+ opentelemetry-exporter-jaeger \
+ opentelemetry-opencensus-shim \
+ opentelemetry-instrumentation-sqlite3 \
+ opencensus \
+ opencensus-ext-flask \
+ Flask
+
+
+Run the Application
+-------------------
+
+Start the application in a terminal.
+
+.. code-block:: sh
+
+ flask --app app run -h 0.0.0.0
+
+Point your browser to the address printed out (probably http://127.0.0.1:5000). Alternatively, just use curl to trigger a request:
+
+.. code-block:: sh
+
+ curl http://127.0.0.1:5000
+
+Jaeger UI
+*********
+
+Open the Jaeger UI in your browser at `<http://localhost:16686>`_ and view traces for the
+"opencensus-shim-example-flask" service. Click on a span named "span" in the scatter plot. You
+will see a span tree with the following structure:
+
+* ``span``
+ * ``query movies from db``
+ * ``SELECT``
+ * ``build response html``
+
+The root span comes from OpenCensus Flask instrumentation. The children ``query movies from
+db`` and ``build response html`` come from the manual instrumentation using OpenTelemetry's
+:meth:`opentelemetry.trace.Tracer.start_as_current_span`. Finally, the ``SELECT`` span is
+created by OpenTelemetry's SQLite3 instrumentation. Everything is exported to Jaeger using the
+OpenTelemetry exporter.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../shim/opencensus_shim/opencensus_shim`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/opencensus-shim/app.py b/docs/examples/opencensus-shim/app.py
new file mode 100644
index 0000000000..5c8b7f744b
--- /dev/null
+++ b/docs/examples/opencensus-shim/app.py
@@ -0,0 +1,102 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sqlite3
+
+from flask import Flask
+from opencensus.ext.flask.flask_middleware import FlaskMiddleware
+
+from opentelemetry import trace
+from opentelemetry.exporter.jaeger.thrift import JaegerExporter
+from opentelemetry.instrumentation.sqlite3 import SQLite3Instrumentor
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.shim.opencensus import install_shim
+
+DB = "example.db"
+
+# Set up OpenTelemetry
+tracer_provider = TracerProvider(
+ resource=Resource(
+ {
+ "service.name": "opencensus-shim-example-flask",
+ }
+ )
+)
+trace.set_tracer_provider(tracer_provider)
+
+# Configure OTel to export traces to Jaeger
+tracer_provider.add_span_processor(
+ BatchSpanProcessor(
+ JaegerExporter(
+ agent_host_name="localhost",
+ agent_port=6831,
+ )
+ )
+)
+tracer = tracer_provider.get_tracer(__name__)
+
+# Install the shim to start bridging spans from OpenCensus to OpenTelemetry
+install_shim()
+
+# Instrument sqlite3 library
+SQLite3Instrumentor().instrument()
+
+# Setup Flask with OpenCensus instrumentation
+app = Flask(__name__)
+FlaskMiddleware(app)
+
+
+# Setup the application database
+def setup_db():
+ with sqlite3.connect(DB) as con:
+ cur = con.cursor()
+ cur.execute(
+ """
+ CREATE TABLE IF NOT EXISTS movie(
+ title,
+ year,
+ PRIMARY KEY(title, year)
+ )
+ """
+ )
+ cur.execute(
+ """
+ INSERT OR IGNORE INTO movie(title, year) VALUES
+ ('Mission Telemetry', 2000),
+ ('Observing the World', 2010),
+ ('The Tracer', 1999),
+ ('The Instrument', 2020)
+ """
+ )
+
+
+setup_db()
+
+
+@app.route("/")
+def hello_world():
+ lines = []
+ with tracer.start_as_current_span("query movies from db"), sqlite3.connect(
+ DB
+ ) as con:
+ cur = con.cursor()
+ for title, year in cur.execute("SELECT title, year from movie"):
+ lines.append(f"{title} is from the year {year}")
+
+ with tracer.start_as_current_span("build response html"):
+ html = f"<ul>{''.join(f'<li>{line}</li>' for line in lines)}</ul>"
+
+ return html
diff --git a/docs/examples/opencensus-shim/requirements.txt b/docs/examples/opencensus-shim/requirements.txt
new file mode 100644
index 0000000000..da9f0f3f96
--- /dev/null
+++ b/docs/examples/opencensus-shim/requirements.txt
@@ -0,0 +1,8 @@
+opentelemetry-api
+opentelemetry-sdk
+opentelemetry-exporter-jaeger
+opentelemetry-opencensus-shim
+opentelemetry-instrumentation-sqlite3
+opencensus
+opencensus-ext-flask
+Flask
diff --git a/docs/examples/opentracing/README.rst b/docs/examples/opentracing/README.rst
new file mode 100644
index 0000000000..0bf5f8dca3
--- /dev/null
+++ b/docs/examples/opentracing/README.rst
@@ -0,0 +1,105 @@
+OpenTracing Shim
+================
+
+This example shows how to use the :doc:`opentelemetry-opentracing-shim
+package <../../shim/opentracing_shim/opentracing_shim>`
+to interact with libraries instrumented with
+`opentracing-python <https://github.com/opentracing/opentracing-python>`_.
+
+The included ``rediscache`` library creates spans via the OpenTracing Redis
+integration,
+`redis_opentracing <https://github.com/opentracing-contrib/python-redis>`_.
+Spans are exported via the Jaeger exporter, which is attached to the
+OpenTelemetry tracer.
+
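+The bridge itself is a single call; a minimal sketch (mirroring ``main.py``
+below):
+
+.. code-block:: python
+
+    from opentelemetry import trace
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.shim import opentracing_shim
+
+    trace.set_tracer_provider(TracerProvider())
+    # An OpenTracing-compatible tracer backed by OpenTelemetry
+    opentracing_tracer = opentracing_shim.create_tracer(trace.get_tracer_provider())
+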
+
+The source files required to run this example are available :scm_web:`here <docs/examples/opentracing/>`.
+
+Installation
+------------
+
+Jaeger
+******
+
+Start Jaeger
+
+.. code-block:: sh
+
+ docker run --rm \
+ -p 6831:6831/udp \
+ -p 6832:6832/udp \
+ -p 16686:16686 \
+ jaegertracing/all-in-one:1.13 \
+ --log-level=debug
+
+Redis
+*****
+
+Install Redis following the `instructions <https://redis.io/topics/quickstart>`_.
+
+Make sure that the Redis server is running by executing this:
+
+.. code-block:: sh
+
+ redis-server
+
+
+Python Dependencies
+*******************
+
+Install the Python dependencies in :scm_raw_web:`requirements.txt <docs/examples/opentracing/requirements.txt>`
+
+.. code-block:: sh
+
+ pip install -r requirements.txt
+
+
+Alternatively, you can install the Python dependencies separately:
+
+.. code-block:: sh
+
+ pip install \
+ opentelemetry-api \
+ opentelemetry-sdk \
+ opentelemetry-exporter-jaeger \
+ opentelemetry-opentracing-shim \
+ redis \
+ redis_opentracing
+
+
+Run the Application
+-------------------
+
+The example script calculates a few Fibonacci numbers and stores the results in
+Redis. The script, the ``rediscache`` library, and the OpenTracing Redis
+integration all contribute spans to the trace.
+
+To run the script:
+
+.. code-block:: sh
+
+ python main.py
+
+
+After running, you can view the generated trace in the Jaeger UI.
+
+Jaeger UI
+*********
+
+Open the Jaeger UI in your browser at
+`<http://localhost:16686>`_ and view traces for the
+"OpenTracing Shim Example" service.
+
+Each ``main.py`` run should generate a trace, and each trace should include
+multiple spans that represent calls to Redis.
+
+Note that tags and logs (OpenTracing) and attributes and events (OpenTelemetry)
+from both tracing systems appear in the exported trace.
+
+Useful links
+------------
+
+- OpenTelemetry_
+- :doc:`../../shim/opentracing_shim/opentracing_shim`
+
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
diff --git a/docs/examples/opentracing/__init__.py b/docs/examples/opentracing/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/examples/opentracing/main.py b/docs/examples/opentracing/main.py
new file mode 100755
index 0000000000..3975c4a45d
--- /dev/null
+++ b/docs/examples/opentracing/main.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python
+
+from rediscache import RedisCache
+
+from opentelemetry import trace
+from opentelemetry.exporter.jaeger.thrift import JaegerExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.shim import opentracing_shim
+
+# Configure the tracer using the default implementation
+trace.set_tracer_provider(TracerProvider())
+tracer_provider = trace.get_tracer_provider()
+
+# Configure the tracer to export traces to Jaeger
+jaeger_exporter = JaegerExporter(
+ agent_host_name="localhost",
+ agent_port=6831,
+)
+span_processor = BatchSpanProcessor(jaeger_exporter)
+tracer_provider.add_span_processor(span_processor)
+
+# Create an OpenTracing shim. This implements the OpenTracing tracer API, but
+# forwards calls to the underlying OpenTelemetry tracer.
+opentracing_tracer = opentracing_shim.create_tracer(tracer_provider)
+
+# Our example caching library expects an OpenTracing-compliant tracer.
+redis_cache = RedisCache(opentracing_tracer)
+
+# Application code uses an OpenTelemetry Tracer as usual.
+tracer = trace.get_tracer(__name__)
+
+
+@redis_cache
+def fib(number):
+ """Get the Nth Fibonacci number, cache intermediate results in Redis."""
+ if number < 0:
+ raise ValueError
+ if number in (0, 1):
+ return number
+ return fib(number - 1) + fib(number - 2)
+
+
+with tracer.start_as_current_span("Fibonacci") as span:
+ span.set_attribute("is_example", "yes :)")
+ fib(4)
diff --git a/docs/examples/opentracing/rediscache.py b/docs/examples/opentracing/rediscache.py
new file mode 100644
index 0000000000..9d2a51aab8
--- /dev/null
+++ b/docs/examples/opentracing/rediscache.py
@@ -0,0 +1,61 @@
+"""
+This is an example of a library written to work with opentracing-python. It
+provides a simple caching decorator backed by Redis, and uses the OpenTracing
+Redis integration to automatically generate spans for each call to Redis.
+"""
+
+import pickle
+from functools import wraps
+
+# FIXME The pylint disables are needed here because the code of this
+# example is executed against the tox.ini of the main
+# opentelemetry-python project. Find a way to separate the two.
+import redis # pylint: disable=import-error
+import redis_opentracing # pylint: disable=import-error
+
+
+class RedisCache:
+ """Redis-backed caching decorator, using OpenTracing!
+
+ Args:
+ tracer: an opentracing.tracer.Tracer
+ """
+
+ def __init__(self, tracer):
+ redis_opentracing.init_tracing(tracer)
+ self.tracer = tracer
+ self.client = redis.StrictRedis()
+
+ def __call__(self, func):
+ @wraps(func)
+ def inner(*args, **kwargs):
+ with self.tracer.start_active_span("Caching decorator") as scope1:
+
+ # Pickle the call args to get a canonical key. Don't do this in
+ # prod!
+ key = pickle.dumps((func.__qualname__, args, kwargs))
+
+ pval = self.client.get(key)
+ if pval is not None:
+ val = pickle.loads(pval)
+ scope1.span.log_kv(
+ {"msg": "Found cached value", "val": val}
+ )
+ return val
+
+ scope1.span.log_kv({"msg": "Cache miss, calling function"})
+ with self.tracer.start_active_span(
+ f'Call "{func.__name__}"'
+ ) as scope2:
+ scope2.span.set_tag("func", func.__name__)
+ scope2.span.set_tag("args", str(args))
+ scope2.span.set_tag("kwargs", str(kwargs))
+
+ val = func(*args, **kwargs)
+ scope2.span.set_tag("val", str(val))
+
+ # Let keys expire after 10 seconds
+ self.client.setex(key, 10, pickle.dumps(val))
+ return val
+
+ return inner
diff --git a/docs/examples/opentracing/requirements.txt b/docs/examples/opentracing/requirements.txt
new file mode 100644
index 0000000000..fa4b520936
--- /dev/null
+++ b/docs/examples/opentracing/requirements.txt
@@ -0,0 +1,6 @@
+opentelemetry-api
+opentelemetry-sdk
+opentelemetry-exporter-jaeger
+opentelemetry-opentracing-shim
+redis
+redis_opentracing
diff --git a/docs/exporter/index.rst b/docs/exporter/index.rst
new file mode 100644
index 0000000000..9316ba0e6d
--- /dev/null
+++ b/docs/exporter/index.rst
@@ -0,0 +1,10 @@
+:orphan:
+
+Exporters
+=========
+
+.. toctree::
+ :maxdepth: 1
+ :glob:
+
+ **
diff --git a/docs/exporter/opencensus/opencensus.rst b/docs/exporter/opencensus/opencensus.rst
new file mode 100644
index 0000000000..6bdcd6a873
--- /dev/null
+++ b/docs/exporter/opencensus/opencensus.rst
@@ -0,0 +1,7 @@
+OpenCensus Exporter
+===================
+
+.. automodule:: opentelemetry.exporter.opencensus
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/exporter/otlp/otlp.rst b/docs/exporter/otlp/otlp.rst
new file mode 100644
index 0000000000..471f2935fb
--- /dev/null
+++ b/docs/exporter/otlp/otlp.rst
@@ -0,0 +1,12 @@
+OpenTelemetry OTLP Exporters
+============================
+
+.. automodule:: opentelemetry.exporter.otlp
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. automodule:: opentelemetry.exporter.otlp.proto.grpc
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/exporter/zipkin/zipkin.rst b/docs/exporter/zipkin/zipkin.rst
new file mode 100644
index 0000000000..a33b7f5de1
--- /dev/null
+++ b/docs/exporter/zipkin/zipkin.rst
@@ -0,0 +1,17 @@
+OpenTelemetry Zipkin Exporters
+==============================
+
+.. automodule:: opentelemetry.exporter.zipkin
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. automodule:: opentelemetry.exporter.zipkin.json
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+.. automodule:: opentelemetry.exporter.zipkin.proto.http
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/getting_started/flask_example.py b/docs/getting_started/flask_example.py
new file mode 100644
index 0000000000..64ed606c7f
--- /dev/null
+++ b/docs/getting_started/flask_example.py
@@ -0,0 +1,47 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# flask_example.py
+import flask
+import requests
+
+from opentelemetry import trace
+from opentelemetry.instrumentation.flask import FlaskInstrumentor
+from opentelemetry.instrumentation.requests import RequestsInstrumentor
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+trace.set_tracer_provider(TracerProvider())
+trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ConsoleSpanExporter())
+)
+
+app = flask.Flask(__name__)
+FlaskInstrumentor().instrument_app(app)
+RequestsInstrumentor().instrument()
+
+tracer = trace.get_tracer(__name__)
+
+
+@app.route("/")
+def hello():
+ with tracer.start_as_current_span("example-request"):
+ requests.get("http://www.example.com")
+ return "hello"
+
+
+app.run(port=5000)
diff --git a/docs/getting_started/metrics_example.py b/docs/getting_started/metrics_example.py
new file mode 100644
index 0000000000..83c9a1b8c4
--- /dev/null
+++ b/docs/getting_started/metrics_example.py
@@ -0,0 +1,78 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# metrics.py
+# This is still work in progress as the metrics SDK is being implemented
+
+from typing import Iterable
+
+from opentelemetry.metrics import (
+ CallbackOptions,
+ Observation,
+ get_meter_provider,
+ set_meter_provider,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+
+exporter = ConsoleMetricExporter()
+reader = PeriodicExportingMetricReader(exporter)
+provider = MeterProvider(metric_readers=[reader])
+set_meter_provider(provider)
+
+
+def observable_counter_func(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(1, {})
+
+
+def observable_up_down_counter_func(
+ options: CallbackOptions,
+) -> Iterable[Observation]:
+ yield Observation(-10, {})
+
+
+def observable_gauge_func(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(9, {})
+
+
+meter = get_meter_provider().get_meter("getting-started", "0.1.2")
+
+# Counter
+counter = meter.create_counter("counter")
+counter.add(1)
+
+# Async Counter
+observable_counter = meter.create_observable_counter(
+ "observable_counter", [observable_counter_func]
+)
+
+# UpDownCounter
+updown_counter = meter.create_up_down_counter("updown_counter")
+updown_counter.add(1)
+updown_counter.add(-5)
+
+# Async UpDownCounter
+observable_updown_counter = meter.create_observable_up_down_counter(
+ "observable_updown_counter", [observable_up_down_counter_func]
+)
+
+# Histogram
+histogram = meter.create_histogram("histogram")
+histogram.record(99.9)
+
+# Async Gauge
+gauge = meter.create_observable_gauge("gauge", [observable_gauge_func])
diff --git a/docs/getting_started/otlpcollector_example.py b/docs/getting_started/otlpcollector_example.py
new file mode 100644
index 0000000000..11b3b12d4b
--- /dev/null
+++ b/docs/getting_started/otlpcollector_example.py
@@ -0,0 +1,39 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# otcollector.py
+
+from opentelemetry import trace
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+span_exporter = OTLPSpanExporter(
+ # optional
+ # endpoint="myCollectorURL:4317",
+ # credentials=ChannelCredentials(credentials),
+ # headers=(("metadata", "metadata")),
+)
+tracer_provider = TracerProvider()
+trace.set_tracer_provider(tracer_provider)
+span_processor = BatchSpanProcessor(span_exporter)
+tracer_provider.add_span_processor(span_processor)
+
+# Configure the tracer to use the collector exporter
+tracer = trace.get_tracer_provider().get_tracer(__name__)
+
+with tracer.start_as_current_span("foo"):
+ print("Hello world!")
diff --git a/docs/getting_started/tests/__init__.py b/docs/getting_started/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/docs/getting_started/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/docs/getting_started/tests/requirements.txt b/docs/getting_started/tests/requirements.txt
new file mode 100644
index 0000000000..c4c62067ac
--- /dev/null
+++ b/docs/getting_started/tests/requirements.txt
@@ -0,0 +1,27 @@
+asgiref==3.7.2
+attrs==23.1.0
+certifi==2023.7.22
+charset-normalizer==2.0.12
+click==8.1.7
+Deprecated==1.2.14
+flaky==3.7.0
+Flask==2.0.1
+idna==3.4
+importlib-metadata==6.8.0
+iniconfig==2.0.0
+itsdangerous==2.1.2
+Jinja2==3.1.2
+MarkupSafe==2.1.3
+packaging==23.2
+pluggy==1.3.0
+py==1.11.0
+py-cpuinfo==9.0.0
+pytest==7.1.3
+pytest-benchmark==4.0.0
+requests==2.26.0
+tomli==2.0.1
+typing_extensions==4.8.0
+urllib3==1.26.18
+Werkzeug==2.3.7
+wrapt==1.15.0
+zipp==3.17.0
diff --git a/docs/getting_started/tests/test_flask.py b/docs/getting_started/tests/test_flask.py
new file mode 100644
index 0000000000..b7a5b46d5a
--- /dev/null
+++ b/docs/getting_started/tests/test_flask.py
@@ -0,0 +1,49 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import subprocess
+import sys
+import unittest
+from time import sleep
+
+import requests
+from requests.adapters import HTTPAdapter
+from requests.packages.urllib3.util.retry import Retry
+
+
+class TestFlask(unittest.TestCase):
+ def test_flask(self):
+ dirpath = os.path.dirname(os.path.realpath(__file__))
+ server_script = f"{dirpath}/../flask_example.py"
+ server = subprocess.Popen(
+ [sys.executable, server_script],
+ stdout=subprocess.PIPE,
+ )
+ retry_strategy = Retry(total=10, backoff_factor=1)
+ adapter = HTTPAdapter(max_retries=retry_strategy)
+ http = requests.Session()
+ http.mount("http://", adapter)
+
+ try:
+ result = http.get("http://localhost:5000")
+ self.assertEqual(result.status_code, 200)
+
+ sleep(5)
+ finally:
+ server.terminate()
+
+ output = str(server.stdout.read())
+ self.assertIn('"name": "GET"', output)
+ self.assertIn('"name": "example-request"', output)
+ self.assertIn('"name": "/"', output)
diff --git a/docs/getting_started/tests/test_tracing.py b/docs/getting_started/tests/test_tracing.py
new file mode 100644
index 0000000000..2ad571963b
--- /dev/null
+++ b/docs/getting_started/tests/test_tracing.py
@@ -0,0 +1,30 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import subprocess
+import sys
+import unittest
+
+
+class TestBasicTracerExample(unittest.TestCase):
+ def test_basic_tracer(self):
+ dirpath = os.path.dirname(os.path.realpath(__file__))
+ test_script = f"{dirpath}/../tracing_example.py"
+ output = subprocess.check_output(
+ (sys.executable, test_script)
+ ).decode()
+
+ self.assertIn('"name": "foo"', output)
+ self.assertIn('"name": "bar"', output)
+ self.assertIn('"name": "baz"', output)
diff --git a/docs/getting_started/tracing_example.py b/docs/getting_started/tracing_example.py
new file mode 100644
index 0000000000..519e45f360
--- /dev/null
+++ b/docs/getting_started/tracing_example.py
@@ -0,0 +1,34 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# tracing.py
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ ConsoleSpanExporter,
+)
+
+provider = TracerProvider()
+processor = BatchSpanProcessor(ConsoleSpanExporter())
+provider.add_span_processor(processor)
+trace.set_tracer_provider(provider)
+
+
+tracer = trace.get_tracer(__name__)
+
+with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ print("Hello world from OpenTelemetry Python!")
diff --git a/docs/index.rst b/docs/index.rst
index 5203c377e4..a018f46674 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -1,13 +1,19 @@
+<<<<<<< HEAD
OpenTelemetry-Python-Contrib
============================
Complementary instrumentation and vendor-specific packages for use with the
Python `OpenTelemetry `_ client.
+=======
+OpenTelemetry-Python API Reference
+==================================
+>>>>>>> upstream/main
.. image:: https://img.shields.io/badge/slack-chat-green.svg
:target: https://cloud-native.slack.com/archives/C01PD4HUVBL
:alt: Slack Chat
+<<<<<<< HEAD
**Please note** that this library is currently in _beta_, and shouldn't
generally be used in production environments.
@@ -96,6 +102,42 @@ install
Indices and tables
------------------
+=======
+Welcome to the docs for the `Python OpenTelemetry implementation
+<https://github.com/open-telemetry/opentelemetry-python>`_.
+
+For an introduction to OpenTelemetry, see the `OpenTelemetry website docs
+<https://opentelemetry.io/docs/>`_.
+
+To learn how to instrument your Python code, see `Getting Started
+<https://opentelemetry.io/docs/instrumentation/python/getting-started/>`_. For
+project status, information about releases, installation instructions and more,
+see `Python <https://opentelemetry.io/docs/instrumentation/python/>`_.
+
+Getting Started
+---------------
+
+* `Getting Started <https://opentelemetry.io/docs/instrumentation/python/getting-started/>`_
+* `Frequently Asked Questions and Cookbook `_
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Core Packages
+ :name: packages
+
+ api/index
+ sdk/index
+
+.. toctree::
+ :maxdepth: 2
+ :caption: More
+ :glob:
+
+ exporter/index
+ shim/index
+ examples/index
+
+>>>>>>> upstream/main
* :ref:`genindex`
* :ref:`modindex`
diff --git a/docs/sdk/_logs.rst b/docs/sdk/_logs.rst
new file mode 100644
index 0000000000..185e7006e4
--- /dev/null
+++ b/docs/sdk/_logs.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk._logs package
+===============================
+
+.. automodule:: opentelemetry.sdk._logs
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/environment_variables.rst b/docs/sdk/environment_variables.rst
new file mode 100644
index 0000000000..084a34b7be
--- /dev/null
+++ b/docs/sdk/environment_variables.rst
@@ -0,0 +1,12 @@
+opentelemetry.sdk.environment_variables
+=======================================
+
+.. TODO: what is the SDK
+
+.. toctree::
+ :maxdepth: 1
+
+.. automodule:: opentelemetry.sdk.environment_variables
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/error_handler.rst b/docs/sdk/error_handler.rst
new file mode 100644
index 0000000000..49962bf769
--- /dev/null
+++ b/docs/sdk/error_handler.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.error_handler package
+=======================================
+
+.. automodule:: opentelemetry.sdk.error_handler
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/index.rst b/docs/sdk/index.rst
new file mode 100644
index 0000000000..d5d3688443
--- /dev/null
+++ b/docs/sdk/index.rst
@@ -0,0 +1,14 @@
+OpenTelemetry Python SDK
+========================
+
+.. TODO: what is the SDK
+
+.. toctree::
+ :maxdepth: 1
+
+ _logs
+ resources
+ trace
+ metrics
+ error_handler
+ environment_variables
diff --git a/docs/sdk/metrics.export.rst b/docs/sdk/metrics.export.rst
new file mode 100644
index 0000000000..0c0efaaf91
--- /dev/null
+++ b/docs/sdk/metrics.export.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.metrics.export
+================================
+
+.. automodule:: opentelemetry.sdk.metrics.export
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/metrics.rst b/docs/sdk/metrics.rst
new file mode 100644
index 0000000000..28f33f097c
--- /dev/null
+++ b/docs/sdk/metrics.rst
@@ -0,0 +1,15 @@
+opentelemetry.sdk.metrics package
+==================================
+
+Submodules
+----------
+
+.. toctree::
+
+ metrics.export
+ metrics.view
+
+.. automodule:: opentelemetry.sdk.metrics
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/metrics.view.rst b/docs/sdk/metrics.view.rst
new file mode 100644
index 0000000000..d7fa96b235
--- /dev/null
+++ b/docs/sdk/metrics.view.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.metrics.view
+==============================
+
+.. automodule:: opentelemetry.sdk.metrics.view
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/resources.rst b/docs/sdk/resources.rst
new file mode 100644
index 0000000000..08732ac025
--- /dev/null
+++ b/docs/sdk/resources.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.resources package
+==========================================
+
+.. automodule:: opentelemetry.sdk.resources
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/trace.export.rst b/docs/sdk/trace.export.rst
new file mode 100644
index 0000000000..b876f366fd
--- /dev/null
+++ b/docs/sdk/trace.export.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.trace.export
+==========================================
+
+.. automodule:: opentelemetry.sdk.trace.export
+ :members:
+ :undoc-members:
+ :show-inheritance:
\ No newline at end of file
diff --git a/docs/sdk/trace.id_generator.rst b/docs/sdk/trace.id_generator.rst
new file mode 100644
index 0000000000..e0b4640e41
--- /dev/null
+++ b/docs/sdk/trace.id_generator.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.trace.id_generator
+====================================
+
+.. automodule:: opentelemetry.sdk.trace.id_generator
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/trace.rst b/docs/sdk/trace.rst
new file mode 100644
index 0000000000..d163ac11e2
--- /dev/null
+++ b/docs/sdk/trace.rst
@@ -0,0 +1,17 @@
+opentelemetry.sdk.trace package
+===============================
+
+Submodules
+----------
+
+.. toctree::
+
+ trace.export
+ trace.id_generator
+ trace.sampling
+ util.instrumentation
+
+.. automodule:: opentelemetry.sdk.trace
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/docs/sdk/trace.sampling.rst b/docs/sdk/trace.sampling.rst
new file mode 100644
index 0000000000..f9c2fffa25
--- /dev/null
+++ b/docs/sdk/trace.sampling.rst
@@ -0,0 +1,7 @@
+opentelemetry.sdk.trace.sampling
+==========================================
+
+.. automodule:: opentelemetry.sdk.trace.sampling
+ :members:
+ :undoc-members:
+ :show-inheritance:
\ No newline at end of file
diff --git a/docs/sdk/util.instrumentation.rst b/docs/sdk/util.instrumentation.rst
new file mode 100644
index 0000000000..a7d391bcee
--- /dev/null
+++ b/docs/sdk/util.instrumentation.rst
@@ -0,0 +1,4 @@
+opentelemetry.sdk.util.instrumentation
+==========================================
+
+.. automodule:: opentelemetry.sdk.util.instrumentation
diff --git a/docs/shim/index.rst b/docs/shim/index.rst
new file mode 100644
index 0000000000..5fad3b3663
--- /dev/null
+++ b/docs/shim/index.rst
@@ -0,0 +1,10 @@
+:orphan:
+
+Shims
+=====
+
+.. toctree::
+ :maxdepth: 1
+ :glob:
+
+ **
diff --git a/docs/shim/opencensus_shim/opencensus_shim.rst b/docs/shim/opencensus_shim/opencensus_shim.rst
new file mode 100644
index 0000000000..3c8bff1d3c
--- /dev/null
+++ b/docs/shim/opencensus_shim/opencensus_shim.rst
@@ -0,0 +1,5 @@
+OpenCensus Shim for OpenTelemetry
+==================================
+
+.. automodule:: opentelemetry.shim.opencensus
+ :no-show-inheritance:
diff --git a/docs/shim/opentracing_shim/opentracing_shim.rst b/docs/shim/opentracing_shim/opentracing_shim.rst
new file mode 100644
index 0000000000..175a10e860
--- /dev/null
+++ b/docs/shim/opentracing_shim/opentracing_shim.rst
@@ -0,0 +1,5 @@
+OpenTracing Shim for OpenTelemetry
+==================================
+
+.. automodule:: opentelemetry.shim.opentracing_shim
+ :no-show-inheritance:
diff --git a/eachdist.ini b/eachdist.ini
index 5c6531ca65..18218e6b64 100644
--- a/eachdist.ini
+++ b/eachdist.ini
@@ -1,6 +1,7 @@
# These will be sorted first in that order.
# All packages that are depended upon by others should be listed here.
[DEFAULT]
+<<<<<<< HEAD
ignore=
_template
@@ -14,6 +15,16 @@ sortfirst=
instrumentation/*
exporter/*
ext/*
+=======
+
+sortfirst=
+ opentelemetry-api
+ opentelemetry-sdk
+ opentelemetry-proto
+ opentelemetry-distro
+ tests/opentelemetry-test-utils
+ exporter/*
+>>>>>>> upstream/main
[stable]
version=1.23.0.dev
@@ -27,16 +38,22 @@ packages=
opentelemetry-exporter-zipkin-json
opentelemetry-exporter-zipkin
opentelemetry-exporter-otlp-proto-grpc
+<<<<<<< HEAD
opentelemetry-exporter-otlp
opentelemetry-exporter-jaeger-thrift
opentelemetry-exporter-jaeger-proto-grpc
opentelemetry-exporter-jaeger
+=======
+ opentelemetry-exporter-otlp-proto-http
+ opentelemetry-exporter-otlp
+>>>>>>> upstream/main
opentelemetry-api
[prerelease]
version=0.44b0.dev
packages=
+<<<<<<< HEAD
all
opentelemetry-semantic-conventions
opentelemetry-test-utils
@@ -50,11 +67,24 @@ packages=
opentelemetry-resource-detector-azure
opentelemetry-sdk-extension-aws
opentelemetry-propagator-aws-xray
+=======
+ opentelemetry-opentracing-shim
+ opentelemetry-opencensus-shim
+ opentelemetry-exporter-opencensus
+ opentelemetry-exporter-prometheus
+ opentelemetry-distro
+ opentelemetry-semantic-conventions
+ opentelemetry-test-utils
+ tests
+>>>>>>> upstream/main
[lintroots]
extraroots=examples/*,scripts/
subglob=*.py,tests/,test/,src/*,examples/*
+<<<<<<< HEAD
ignore=sklearn
+=======
+>>>>>>> upstream/main
[testroots]
extraroots=examples/*,tests/
diff --git a/exporter/opentelemetry-exporter-opencensus/LICENSE b/exporter/opentelemetry-exporter-opencensus/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-opencensus/README.rst b/exporter/opentelemetry-exporter-opencensus/README.rst
new file mode 100644
index 0000000000..f7b7f4fb2b
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/README.rst
@@ -0,0 +1,24 @@
+OpenCensus Exporter
+===================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-opencensus.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-opencensus/
+
+This library allows you to export traces to an OpenCensus agent or collector.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-opencensus
+
+
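+Usage
+-----
+
+A minimal sketch of wiring the exporter into the SDK (assumes an OpenCensus
+agent or collector listening on the exporter's default endpoint,
+``localhost:55678``):
+
+.. code-block:: python
+
+    from opentelemetry import trace
+    from opentelemetry.exporter.opencensus.trace_exporter import OpenCensusSpanExporter
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # Register a provider and attach the exporter through a batch processor.
+    trace.set_tracer_provider(TracerProvider())
+    span_processor = BatchSpanProcessor(OpenCensusSpanExporter(endpoint="localhost:55678"))
+    trace.get_tracer_provider().add_span_processor(span_processor)
+
+    # Any span created from here on is exported on the processor's schedule.
+    with trace.get_tracer(__name__).start_as_current_span("example-span"):
+        pass
+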
+References
+----------
+
+* `OpenCensus Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/opencensus/opencensus.html>`_
+* `OpenTelemetry Collector <https://github.com/open-telemetry/opentelemetry-collector>`_
+* `OpenTelemetry <https://github.com/open-telemetry/opentelemetry-python/>`_
diff --git a/exporter/opentelemetry-exporter-opencensus/pyproject.toml b/exporter/opentelemetry-exporter-opencensus/pyproject.toml
new file mode 100644
index 0000000000..e4ecc1119c
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/pyproject.toml
@@ -0,0 +1,56 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-opencensus"
+dynamic = ["version"]
+description = "OpenCensus Exporter"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "grpcio >= 1.0.0, < 2.0.0",
+ "opencensus-proto >= 0.1.0, < 1.0.0",
+ "opentelemetry-api >= 1.23.0.dev",
+ "opentelemetry-sdk >= 1.15",
+ "protobuf ~= 3.13",
+ "setuptools >= 16.0",
+]
+
+[project.optional-dependencies]
+test = []
+
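+# This entry point lets the SDK's auto-configuration discover the exporter by
+# name, e.g. via OTEL_TRACES_EXPORTER=opencensus.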
+[project.entry-points.opentelemetry_traces_exporter]
+opencensus = "opentelemetry.exporter.opencensus.trace_exporter:OpenCensusSpanExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-opencensus"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/opencensus/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/__init__.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/__init__.py
new file mode 100644
index 0000000000..ff8bb25be6
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/__init__.py
@@ -0,0 +1,17 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The **OpenCensus Exporter** allows you to export traces to an OpenCensus collector.
+"""
diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/py.typed b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/trace_exporter/__init__.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/trace_exporter/__init__.py
new file mode 100644
index 0000000000..b855728cad
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/trace_exporter/__init__.py
@@ -0,0 +1,192 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""OpenCensus Span Exporter."""
+
+import logging
+from typing import Sequence
+
+import grpc
+from opencensus.proto.agent.trace.v1 import (
+ trace_service_pb2,
+ trace_service_pb2_grpc,
+)
+from opencensus.proto.trace.v1 import trace_pb2
+
+import opentelemetry.exporter.opencensus.util as utils
+from opentelemetry import trace
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+from opentelemetry.sdk.trace import ReadableSpan
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+
+DEFAULT_ENDPOINT = "localhost:55678"
+
+logger = logging.getLogger(__name__)
+
+
+# pylint: disable=no-member
+class OpenCensusSpanExporter(SpanExporter):
+ """OpenCensus Collector span exporter.
+
+ Args:
+ endpoint: OpenCensus Collector receiver endpoint.
+ host_name: Host name.
+ client: TraceService client stub.
+ """
+
+ def __init__(
+ self,
+ endpoint=DEFAULT_ENDPOINT,
+ host_name=None,
+ client=None,
+ ):
+ tracer_provider = trace.get_tracer_provider()
+ service_name = (
+ tracer_provider.resource.attributes[SERVICE_NAME]
+ if getattr(tracer_provider, "resource", None)
+ else Resource.create().attributes.get(SERVICE_NAME)
+ )
+ self.endpoint = endpoint
+ if client is None:
+ self.channel = grpc.insecure_channel(self.endpoint)
+ self.client = trace_service_pb2_grpc.TraceServiceStub(
+ channel=self.channel
+ )
+ else:
+ self.client = client
+
+ self.host_name = host_name
+ self.node = utils.get_node(service_name, host_name)
+
+ def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
+        # Populate service_name from the first span. We restrict any
+        # SpanProcessor to being associated with only a single TracerProvider,
+        # so it is safe to assume that all spans in a single batch originate
+        # from one TracerProvider (and in turn all have the same service_name).
+ if spans:
+ service_name = spans[0].resource.attributes.get(SERVICE_NAME)
+ if service_name:
+ self.node = utils.get_node(service_name, self.host_name)
+ try:
+ responses = self.client.Export(self.generate_span_requests(spans))
+
+ # Read response
+ for _ in responses:
+ pass
+
+ except grpc.RpcError:
+ return SpanExportResult.FAILURE
+
+ return SpanExportResult.SUCCESS
+
+ def shutdown(self) -> None:
+ pass
+
+ def generate_span_requests(self, spans):
+ collector_spans = translate_to_collector(spans)
+ service_request = trace_service_pb2.ExportTraceServiceRequest(
+ node=self.node, spans=collector_spans
+ )
+ yield service_request
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ return True
+
+
+# pylint: disable=too-many-branches
+def translate_to_collector(spans: Sequence[ReadableSpan]):
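+    # Converts OpenTelemetry ReadableSpans into OpenCensus proto Spans:
+    # trace/span ids become big-endian bytes, and status, trace state,
+    # attributes, events (as annotations) and links are carried over.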
+ collector_spans = []
+ for span in spans:
+ status = None
+ if span.status is not None:
+ status = trace_pb2.Status(
+ code=span.status.status_code.value,
+ message=span.status.description,
+ )
+
+ collector_span = trace_pb2.Span(
+ name=trace_pb2.TruncatableString(value=span.name),
+ kind=utils.get_collector_span_kind(span.kind),
+ trace_id=span.context.trace_id.to_bytes(16, "big"),
+ span_id=span.context.span_id.to_bytes(8, "big"),
+ start_time=utils.proto_timestamp_from_time_ns(span.start_time),
+ end_time=utils.proto_timestamp_from_time_ns(span.end_time),
+ status=status,
+ )
+
+ parent_id = 0
+ if span.parent is not None:
+ parent_id = span.parent.span_id
+
+ collector_span.parent_span_id = parent_id.to_bytes(8, "big")
+
+ if span.context.trace_state is not None:
+ for (key, value) in span.context.trace_state.items():
+ collector_span.tracestate.entries.add(key=key, value=value)
+
+ if span.attributes:
+ for (key, value) in span.attributes.items():
+ utils.add_proto_attribute_value(
+ collector_span.attributes, key, value
+ )
+
+ if span.events:
+ for event in span.events:
+
+ collector_annotation = trace_pb2.Span.TimeEvent.Annotation(
+ description=trace_pb2.TruncatableString(value=event.name)
+ )
+
+ if event.attributes:
+ for (key, value) in event.attributes.items():
+ utils.add_proto_attribute_value(
+ collector_annotation.attributes, key, value
+ )
+
+ collector_span.time_events.time_event.add(
+ time=utils.proto_timestamp_from_time_ns(event.timestamp),
+ annotation=collector_annotation,
+ )
+
+ if span.links:
+ for link in span.links:
+ collector_span_link = collector_span.links.link.add()
+ collector_span_link.trace_id = link.context.trace_id.to_bytes(
+ 16, "big"
+ )
+ collector_span_link.span_id = link.context.span_id.to_bytes(
+ 8, "big"
+ )
+
+ collector_span_link.type = (
+ trace_pb2.Span.Link.Type.TYPE_UNSPECIFIED
+ )
+ if span.parent is not None:
+ if (
+ link.context.span_id == span.parent.span_id
+ and link.context.trace_id == span.parent.trace_id
+ ):
+ collector_span_link.type = (
+ trace_pb2.Span.Link.Type.PARENT_LINKED_SPAN
+ )
+
+ if link.attributes:
+ for (key, value) in link.attributes.items():
+ utils.add_proto_attribute_value(
+ collector_span_link.attributes, key, value
+ )
+
+ collector_spans.append(collector_span)
+ return collector_spans
diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
new file mode 100644
index 0000000000..694e8dc6a1
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/util.py
@@ -0,0 +1,100 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import getpid
+from socket import gethostname
+from time import time
+
+# pylint: disable=wrong-import-position
+from google.protobuf.timestamp_pb2 import Timestamp
+from opencensus.proto.agent.common.v1 import common_pb2
+from opencensus.proto.trace.v1 import trace_pb2
+
+from opentelemetry.exporter.opencensus.version import (
+ __version__ as opencensusexporter_exporter_version,
+)
+from opentelemetry.trace import SpanKind
+from opentelemetry.util._importlib_metadata import version
+
+OPENTELEMETRY_VERSION = version("opentelemetry-api")
+
+
+def proto_timestamp_from_time_ns(time_ns):
+ """Converts datetime to protobuf timestamp.
+
+ Args:
+ time_ns: Time in nanoseconds
+
+ Returns:
+ Returns protobuf timestamp.
+ """
+ ts = Timestamp()
+ if time_ns is not None:
+ # pylint: disable=no-member
+ ts.FromNanoseconds(time_ns)
+ return ts
+
+
+# pylint: disable=no-member
+def get_collector_span_kind(kind: SpanKind):
+ if kind is SpanKind.SERVER:
+ return trace_pb2.Span.SpanKind.SERVER
+ if kind is SpanKind.CLIENT:
+ return trace_pb2.Span.SpanKind.CLIENT
+ return trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED
+
+
+def add_proto_attribute_value(pb_attributes, key, value):
+ """Sets string, int, boolean or float value on protobuf
+ span, link or annotation attributes.
+
+ Args:
+ pb_attributes: protobuf Span's attributes property.
+ key: attribute key to set.
+ value: attribute value
+ """
+
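+    # bool is checked before int because bool is a subclass of int in Python;
+    # otherwise True/False would be stored as int_value.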
+ if isinstance(value, bool):
+ pb_attributes.attribute_map[key].bool_value = value
+ elif isinstance(value, int):
+ pb_attributes.attribute_map[key].int_value = value
+ elif isinstance(value, str):
+ pb_attributes.attribute_map[key].string_value.value = value
+ elif isinstance(value, float):
+ pb_attributes.attribute_map[key].double_value = value
+ else:
+ pb_attributes.attribute_map[key].string_value.value = str(value)
+
+
+# pylint: disable=no-member
+def get_node(service_name, host_name):
+ """Generates Node message from params and system information.
+
+ Args:
+ service_name: Name of Collector service.
+ host_name: Host name.
+ """
+ return common_pb2.Node(
+ identifier=common_pb2.ProcessIdentifier(
+ host_name=gethostname() if host_name is None else host_name,
+ pid=getpid(),
+ start_timestamp=proto_timestamp_from_time_ns(int(time() * 1e9)),
+ ),
+ library_info=common_pb2.LibraryInfo(
+ language=common_pb2.LibraryInfo.Language.Value("PYTHON"),
+ exporter_version=opencensusexporter_exporter_version,
+ core_library_version=OPENTELEMETRY_VERSION,
+ ),
+ service_info=common_pb2.ServiceInfo(name=service_name),
+ )
diff --git a/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/version.py b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/version.py
new file mode 100644
index 0000000000..ff896307c3
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/src/opentelemetry/exporter/opencensus/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.44b0.dev"
diff --git a/exporter/opentelemetry-exporter-opencensus/tests/__init__.py b/exporter/opentelemetry-exporter-opencensus/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-opencensus/tests/test_otcollector_trace_exporter.py b/exporter/opentelemetry-exporter-opencensus/tests/test_otcollector_trace_exporter.py
new file mode 100644
index 0000000000..fa546cde7a
--- /dev/null
+++ b/exporter/opentelemetry-exporter-opencensus/tests/test_otcollector_trace_exporter.py
@@ -0,0 +1,366 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from unittest import mock
+
+import grpc
+from google.protobuf.timestamp_pb2 import Timestamp
+from opencensus.proto.trace.v1 import trace_pb2
+
+import opentelemetry.exporter.opencensus.util as utils
+from opentelemetry import trace as trace_api
+from opentelemetry.exporter.opencensus.trace_exporter import (
+ OpenCensusSpanExporter,
+ translate_to_collector,
+)
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import SpanExportResult
+from opentelemetry.test.globals_test import TraceGlobalsTest
+from opentelemetry.trace import TraceFlags
+
+
+# pylint: disable=no-member
+class TestCollectorSpanExporter(TraceGlobalsTest, unittest.TestCase):
+ def test_constructor(self):
+ mock_get_node = mock.Mock()
+ patch = mock.patch(
+ "opentelemetry.exporter.opencensus.util.get_node",
+ side_effect=mock_get_node,
+ )
+ trace_api.set_tracer_provider(
+ TracerProvider(
+ resource=Resource.create({SERVICE_NAME: "testServiceName"})
+ )
+ )
+
+ host_name = "testHostName"
+ client = grpc.insecure_channel("")
+ endpoint = "testEndpoint"
+ with patch:
+ exporter = OpenCensusSpanExporter(
+ host_name=host_name,
+ endpoint=endpoint,
+ client=client,
+ )
+
+ self.assertIs(exporter.client, client)
+ self.assertEqual(exporter.endpoint, endpoint)
+ mock_get_node.assert_called_with("testServiceName", host_name)
+
+ def test_get_collector_span_kind(self):
+ result = utils.get_collector_span_kind(trace_api.SpanKind.SERVER)
+ self.assertIs(result, trace_pb2.Span.SpanKind.SERVER)
+ result = utils.get_collector_span_kind(trace_api.SpanKind.CLIENT)
+ self.assertIs(result, trace_pb2.Span.SpanKind.CLIENT)
+ result = utils.get_collector_span_kind(trace_api.SpanKind.CONSUMER)
+ self.assertIs(result, trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED)
+ result = utils.get_collector_span_kind(trace_api.SpanKind.PRODUCER)
+ self.assertIs(result, trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED)
+ result = utils.get_collector_span_kind(trace_api.SpanKind.INTERNAL)
+ self.assertIs(result, trace_pb2.Span.SpanKind.SPAN_KIND_UNSPECIFIED)
+
+ def test_proto_timestamp_from_time_ns(self):
+ result = utils.proto_timestamp_from_time_ns(12345)
+ self.assertIsInstance(result, Timestamp)
+ self.assertEqual(result.nanos, 12345)
+
+ # pylint: disable=too-many-locals
+ # pylint: disable=too-many-statements
+ def test_translate_to_collector(self):
+ trace_id = 0x6E0C63257DE34C926F9EFCD03927272E
+ span_id = 0x34BF92DEEFC58C92
+ parent_id = 0x1111111111111111
+ base_time = 683647322 * 10**9 # in ns
+ start_times = (
+ base_time,
+ base_time + 150 * 10**6,
+ base_time + 300 * 10**6,
+ )
+ durations = (50 * 10**6, 100 * 10**6, 200 * 10**6)
+ end_times = (
+ start_times[0] + durations[0],
+ start_times[1] + durations[1],
+ start_times[2] + durations[2],
+ )
+ span_context = trace_api.SpanContext(
+ trace_id,
+ span_id,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ trace_state=trace_api.TraceState([("testkey", "testvalue")]),
+ )
+ parent_span_context = trace_api.SpanContext(
+ trace_id, parent_id, is_remote=False
+ )
+ other_context = trace_api.SpanContext(
+ trace_id, span_id, is_remote=False
+ )
+ event_attributes = {
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ }
+ event_timestamp = base_time + 50 * 10**6
+ event = trace.Event(
+ name="event0",
+ timestamp=event_timestamp,
+ attributes=event_attributes,
+ )
+ link_attributes = {"key_bool": True}
+ link_1 = trace_api.Link(
+ context=other_context, attributes=link_attributes
+ )
+ link_2 = trace_api.Link(
+ context=parent_span_context, attributes=link_attributes
+ )
+ span_1 = trace._Span(
+ name="test1",
+ context=span_context,
+ parent=parent_span_context,
+ events=(event,),
+ links=(link_1,),
+ kind=trace_api.SpanKind.CLIENT,
+ )
+ span_2 = trace._Span(
+ name="test2",
+ context=parent_span_context,
+ parent=None,
+ kind=trace_api.SpanKind.SERVER,
+ )
+ span_3 = trace._Span(
+ name="test3",
+ context=other_context,
+ links=(link_2,),
+ parent=span_2.get_span_context(),
+ )
+ otel_spans = [span_1, span_2, span_3]
+ otel_spans[0].start(start_time=start_times[0])
+ otel_spans[0].set_attribute("key_bool", False)
+ otel_spans[0].set_attribute("key_string", "hello_world")
+ otel_spans[0].set_attribute("key_float", 111.22)
+ otel_spans[0].set_attribute("key_int", 333)
+ otel_spans[0].set_status(trace_api.Status(trace_api.StatusCode.OK))
+ otel_spans[0].end(end_time=end_times[0])
+ otel_spans[1].start(start_time=start_times[1])
+ otel_spans[1].set_status(
+ trace_api.Status(
+ trace_api.StatusCode.ERROR,
+ {"test", "val"},
+ )
+ )
+ otel_spans[1].end(end_time=end_times[1])
+ otel_spans[2].start(start_time=start_times[2])
+ otel_spans[2].end(end_time=end_times[2])
+ output_spans = translate_to_collector(otel_spans)
+
+ self.assertEqual(len(output_spans), 3)
+ self.assertEqual(
+ output_spans[0].trace_id, b"n\x0cc%}\xe3L\x92o\x9e\xfc\xd09''."
+ )
+ self.assertEqual(
+ output_spans[0].span_id, b"4\xbf\x92\xde\xef\xc5\x8c\x92"
+ )
+ self.assertEqual(
+ output_spans[0].name, trace_pb2.TruncatableString(value="test1")
+ )
+ self.assertEqual(
+ output_spans[1].name, trace_pb2.TruncatableString(value="test2")
+ )
+ self.assertEqual(
+ output_spans[2].name, trace_pb2.TruncatableString(value="test3")
+ )
+ self.assertEqual(
+ output_spans[0].start_time.seconds,
+ int(start_times[0] / 1000000000),
+ )
+ self.assertEqual(
+ output_spans[0].end_time.seconds, int(end_times[0] / 1000000000)
+ )
+ self.assertEqual(output_spans[0].kind, trace_api.SpanKind.CLIENT.value)
+ self.assertEqual(output_spans[1].kind, trace_api.SpanKind.SERVER.value)
+
+ self.assertEqual(
+ output_spans[0].parent_span_id, b"\x11\x11\x11\x11\x11\x11\x11\x11"
+ )
+ self.assertEqual(
+ output_spans[2].parent_span_id, b"\x11\x11\x11\x11\x11\x11\x11\x11"
+ )
+ self.assertEqual(
+ output_spans[0].status.code,
+ trace_api.StatusCode.OK.value,
+ )
+ self.assertEqual(len(output_spans[0].tracestate.entries), 1)
+ self.assertEqual(output_spans[0].tracestate.entries[0].key, "testkey")
+ self.assertEqual(
+ output_spans[0].tracestate.entries[0].value, "testvalue"
+ )
+
+ self.assertEqual(
+ output_spans[0].attributes.attribute_map["key_bool"].bool_value,
+ False,
+ )
+ self.assertEqual(
+ output_spans[0]
+ .attributes.attribute_map["key_string"]
+ .string_value.value,
+ "hello_world",
+ )
+ self.assertEqual(
+ output_spans[0].attributes.attribute_map["key_float"].double_value,
+ 111.22,
+ )
+ self.assertEqual(
+ output_spans[0].attributes.attribute_map["key_int"].int_value, 333
+ )
+
+ self.assertEqual(
+ output_spans[0].time_events.time_event[0].time.seconds, 683647322
+ )
+ self.assertEqual(
+ output_spans[0]
+ .time_events.time_event[0]
+ .annotation.description.value,
+ "event0",
+ )
+ self.assertEqual(
+ output_spans[0]
+ .time_events.time_event[0]
+ .annotation.attributes.attribute_map["annotation_bool"]
+ .bool_value,
+ True,
+ )
+ self.assertEqual(
+ output_spans[0]
+ .time_events.time_event[0]
+ .annotation.attributes.attribute_map["annotation_string"]
+ .string_value.value,
+ "annotation_test",
+ )
+ self.assertEqual(
+ output_spans[0]
+ .time_events.time_event[0]
+ .annotation.attributes.attribute_map["key_float"]
+ .double_value,
+ 0.3,
+ )
+
+ self.assertEqual(
+ output_spans[0].links.link[0].trace_id,
+ b"n\x0cc%}\xe3L\x92o\x9e\xfc\xd09''.",
+ )
+ self.assertEqual(
+ output_spans[0].links.link[0].span_id,
+ b"4\xbf\x92\xde\xef\xc5\x8c\x92",
+ )
+ self.assertEqual(
+ output_spans[0].links.link[0].type,
+ trace_pb2.Span.Link.Type.TYPE_UNSPECIFIED,
+ )
+ self.assertEqual(
+ output_spans[1].status.code,
+ trace_api.StatusCode.ERROR.value,
+ )
+ self.assertEqual(
+ output_spans[2].links.link[0].type,
+ trace_pb2.Span.Link.Type.PARENT_LINKED_SPAN,
+ )
+ self.assertEqual(
+ output_spans[0]
+ .links.link[0]
+ .attributes.attribute_map["key_bool"]
+ .bool_value,
+ True,
+ )
+
+ def test_export(self):
+ mock_client = mock.MagicMock()
+ mock_export = mock.MagicMock()
+ mock_client.Export = mock_export
+ host_name = "testHostName"
+ collector_exporter = OpenCensusSpanExporter(
+ client=mock_client, host_name=host_name
+ )
+
+ trace_id = 0x6E0C63257DE34C926F9EFCD03927272E
+ span_id = 0x34BF92DEEFC58C92
+ span_context = trace_api.SpanContext(
+ trace_id,
+ span_id,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ )
+ otel_spans = [
+ trace._Span(
+ name="test1",
+ context=span_context,
+ kind=trace_api.SpanKind.CLIENT,
+ )
+ ]
+ result_status = collector_exporter.export(otel_spans)
+ self.assertEqual(SpanExportResult.SUCCESS, result_status)
+
+ # pylint: disable=unsubscriptable-object
+ export_arg = mock_export.call_args[0]
+ service_request = next(export_arg[0])
+ output_spans = getattr(service_request, "spans")
+ output_node = getattr(service_request, "node")
+ self.assertEqual(len(output_spans), 1)
+ self.assertIsNotNone(getattr(output_node, "library_info"))
+ self.assertIsNotNone(getattr(output_node, "service_info"))
+ output_identifier = getattr(output_node, "identifier")
+ self.assertEqual(
+ getattr(output_identifier, "host_name"), "testHostName"
+ )
+
+ def test_export_service_name(self):
+ trace_api.set_tracer_provider(
+ TracerProvider(
+ resource=Resource.create({SERVICE_NAME: "testServiceName"})
+ )
+ )
+ mock_client = mock.MagicMock()
+ mock_export = mock.MagicMock()
+ mock_client.Export = mock_export
+ host_name = "testHostName"
+ collector_exporter = OpenCensusSpanExporter(
+ client=mock_client, host_name=host_name
+ )
+ self.assertEqual(
+ collector_exporter.node.service_info.name, "testServiceName"
+ )
+
+ trace_id = 0x6E0C63257DE34C926F9EFCD03927272E
+ span_id = 0x34BF92DEEFC58C92
+ span_context = trace_api.SpanContext(
+ trace_id,
+ span_id,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ )
+ resource = Resource.create({SERVICE_NAME: "test"})
+ otel_spans = [
+ trace._Span(
+ name="test1",
+ context=span_context,
+ kind=trace_api.SpanKind.CLIENT,
+ resource=resource,
+ )
+ ]
+
+ result_status = collector_exporter.export(otel_spans)
+ self.assertEqual(SpanExportResult.SUCCESS, result_status)
+ self.assertEqual(collector_exporter.node.service_info.name, "test")
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/LICENSE b/exporter/opentelemetry-exporter-otlp-proto-common/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/README.rst b/exporter/opentelemetry-exporter-otlp-proto-common/README.rst
new file mode 100644
index 0000000000..9756a49bc3
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/README.rst
@@ -0,0 +1,27 @@
+OpenTelemetry Protobuf Encoding
+===============================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-otlp-proto-common.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-otlp-proto-common/
+
+This library is provided as a convenience for encoding OTLP data to Protobuf. It is currently used by:
+
+* opentelemetry-exporter-otlp-proto-grpc
+* opentelemetry-exporter-otlp-proto-http
+
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-otlp-proto-common
+
+
+References
+----------
+
+* `OpenTelemetry <https://github.com/open-telemetry/opentelemetry-python/>`_
+* `OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md>`_
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/pyproject.toml b/exporter/opentelemetry-exporter-otlp-proto-common/pyproject.toml
new file mode 100644
index 0000000000..307bed9c61
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/pyproject.toml
@@ -0,0 +1,49 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-otlp-proto-common"
+dynamic = ["version"]
+description = "OpenTelemetry Protobuf encoding"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "opentelemetry-proto == 1.23.0.dev",
+ "backoff >= 1.10.0, < 2.0.0; python_version<'3.7'",
+ "backoff >= 1.10.0, < 3.0.0; python_version>='3.7'",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-otlp-proto-common"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/otlp/proto/common/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/__init__.py
new file mode 100644
index 0000000000..2d336aee83
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/__init__.py
@@ -0,0 +1,18 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.exporter.otlp.proto.common.version import __version__
+
+__all__ = ["__version__"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py
new file mode 100644
index 0000000000..bd6ca4ad18
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py
@@ -0,0 +1,147 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import logging
+from collections.abc import Sequence
+from typing import Any, Mapping, Optional, List, Callable, TypeVar, Dict
+
+import backoff
+
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.proto.common.v1.common_pb2 import (
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as PB2Resource,
+)
+from opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue
+from opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue
+from opentelemetry.proto.common.v1.common_pb2 import (
+ KeyValueList as PB2KeyValueList,
+)
+from opentelemetry.proto.common.v1.common_pb2 import (
+ ArrayValue as PB2ArrayValue,
+)
+from opentelemetry.sdk.trace import Resource
+from opentelemetry.util.types import Attributes
+
+
+_logger = logging.getLogger(__name__)
+
+_TypingResourceT = TypeVar("_TypingResourceT")
+_ResourceDataT = TypeVar("_ResourceDataT")
+
+
+def _encode_instrumentation_scope(
+ instrumentation_scope: InstrumentationScope,
+) -> PB2InstrumentationScope:
+ if instrumentation_scope is None:
+ return PB2InstrumentationScope()
+ return PB2InstrumentationScope(
+ name=instrumentation_scope.name,
+ version=instrumentation_scope.version,
+ )
+
+
+def _encode_resource(resource: Resource) -> PB2Resource:
+ return PB2Resource(attributes=_encode_attributes(resource.attributes))
+
+
+def _encode_value(value: Any) -> PB2AnyValue:
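+    # bool must be checked before int: bool subclasses int in Python, so an
+    # unguarded int check would encode True/False as int_value.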
+ if isinstance(value, bool):
+ return PB2AnyValue(bool_value=value)
+ if isinstance(value, str):
+ return PB2AnyValue(string_value=value)
+ if isinstance(value, int):
+ return PB2AnyValue(int_value=value)
+ if isinstance(value, float):
+ return PB2AnyValue(double_value=value)
+ if isinstance(value, Sequence):
+ return PB2AnyValue(
+ array_value=PB2ArrayValue(values=[_encode_value(v) for v in value])
+ )
+ elif isinstance(value, Mapping):
+ return PB2AnyValue(
+ kvlist_value=PB2KeyValueList(
+ values=[_encode_key_value(str(k), v) for k, v in value.items()]
+ )
+ )
+ raise Exception(f"Invalid type {type(value)} of value {value}")
+
+
+def _encode_key_value(key: str, value: Any) -> PB2KeyValue:
+ return PB2KeyValue(key=key, value=_encode_value(value))
+
+
+def _encode_span_id(span_id: int) -> bytes:
+ return span_id.to_bytes(length=8, byteorder="big", signed=False)
+
+
+def _encode_trace_id(trace_id: int) -> bytes:
+ return trace_id.to_bytes(length=16, byteorder="big", signed=False)
+
+
+def _encode_attributes(
+ attributes: Attributes,
+) -> Optional[List[PB2KeyValue]]:
+ if attributes:
+ pb2_attributes = []
+ for key, value in attributes.items():
+ try:
+ pb2_attributes.append(_encode_key_value(key, value))
+ except Exception as error: # pylint: disable=broad-except
+ _logger.exception(error)
+ else:
+ pb2_attributes = None
+ return pb2_attributes
+
+
+def _get_resource_data(
+ sdk_resource_scope_data: Dict[Resource, _ResourceDataT],
+ resource_class: Callable[..., _TypingResourceT],
+ name: str,
+) -> List[_TypingResourceT]:
+
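+    # resource_class is e.g. ResourceSpans or ResourceMetrics; the matching
+    # keyword argument ("scope_spans"/"scope_metrics") is derived from name.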
+ resource_data = []
+
+ for (
+ sdk_resource,
+ scope_data,
+ ) in sdk_resource_scope_data.items():
+ collector_resource = PB2Resource(
+ attributes=_encode_attributes(sdk_resource.attributes)
+ )
+ resource_data.append(
+ resource_class(
+ **{
+ "resource": collector_resource,
+ "scope_{}".format(name): scope_data.values(),
+ }
+ )
+ )
+ return resource_data
+
+
+# Work around API change between backoff 1.x and 2.x. Since 2.0.0 the backoff
+# wait generator API requires a first .send(None) before reading the backoff
+# values from the generator.
+_is_backoff_v2 = next(backoff.expo()) is None
+
+
+def _create_exp_backoff_generator(*args, **kwargs):
+ gen = backoff.expo(*args, **kwargs)
+ if _is_backoff_v2:
+ gen.send(None)
+ return gen
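+
+
+# Usage sketch (illustrative): iterate the generator for exponentially
+# increasing wait times between retries, e.g.
+#
+#     for delay in _create_exp_backoff_generator(max_value=64):
+#         ...  # sleep for ``delay`` seconds, then retry the export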
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/_log_encoder/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/_log_encoder/__init__.py
new file mode 100644
index 0000000000..47c254033b
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/_log_encoder/__init__.py
@@ -0,0 +1,83 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from collections import defaultdict
+from typing import Sequence, List
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _encode_instrumentation_scope,
+ _encode_resource,
+ _encode_span_id,
+ _encode_trace_id,
+ _encode_value,
+ _encode_attributes,
+)
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2 import (
+ ExportLogsServiceRequest,
+)
+from opentelemetry.proto.logs.v1.logs_pb2 import (
+ ScopeLogs,
+ ResourceLogs,
+)
+from opentelemetry.proto.logs.v1.logs_pb2 import LogRecord as PB2LogRecord
+
+from opentelemetry.sdk._logs import LogData
+
+
+def encode_logs(batch: Sequence[LogData]) -> ExportLogsServiceRequest:
+ return ExportLogsServiceRequest(resource_logs=_encode_resource_logs(batch))
+
+
+def _encode_log(log_data: LogData) -> PB2LogRecord:
+ return PB2LogRecord(
+ time_unix_nano=log_data.log_record.timestamp,
+ span_id=_encode_span_id(log_data.log_record.span_id),
+ trace_id=_encode_trace_id(log_data.log_record.trace_id),
+ flags=int(log_data.log_record.trace_flags),
+ body=_encode_value(log_data.log_record.body),
+ severity_text=log_data.log_record.severity_text,
+ attributes=_encode_attributes(log_data.log_record.attributes),
+ dropped_attributes_count=log_data.log_record.dropped_attributes,
+ severity_number=log_data.log_record.severity_number.value,
+ )
+
+
+def _encode_resource_logs(batch: Sequence[LogData]) -> List[ResourceLogs]:
+ sdk_resource_logs = defaultdict(lambda: defaultdict(list))
+
+ for sdk_log in batch:
+ sdk_resource = sdk_log.log_record.resource
+ sdk_instrumentation = sdk_log.instrumentation_scope or None
+ pb2_log = _encode_log(sdk_log)
+
+ sdk_resource_logs[sdk_resource][sdk_instrumentation].append(pb2_log)
+
+ pb2_resource_logs = []
+
+ for sdk_resource, sdk_instrumentations in sdk_resource_logs.items():
+ scope_logs = []
+ for sdk_instrumentation, pb2_logs in sdk_instrumentations.items():
+ scope_logs.append(
+ ScopeLogs(
+ scope=(_encode_instrumentation_scope(sdk_instrumentation)),
+ log_records=pb2_logs,
+ )
+ )
+ pb2_resource_logs.append(
+ ResourceLogs(
+ resource=_encode_resource(sdk_resource),
+ scope_logs=scope_logs,
+ )
+ )
+
+ return pb2_resource_logs
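+
+
+# Note the grouping effect: LogData entries that share an SDK Resource and
+# instrumentation scope collapse into a single ResourceLogs containing one
+# ScopeLogs with all of their log records, so the payload shrinks when
+# resources repeat across a batch.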
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/metrics_encoder/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/metrics_encoder/__init__.py
new file mode 100644
index 0000000000..d604786108
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/metrics_encoder/__init__.py
@@ -0,0 +1,318 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import logging
+from os import environ
+from typing import Dict, Optional
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+    _encode_attributes,
+)
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
+    ExportMetricsServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import InstrumentationScope
+from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+    Resource as PB2Resource,
+)
+from opentelemetry.sdk.environment_variables import (
+    OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION,
+    OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE,
+)
+from opentelemetry.sdk.metrics import (
+    Counter,
+    Histogram,
+    ObservableCounter,
+    ObservableGauge,
+    ObservableUpDownCounter,
+    UpDownCounter,
+)
+from opentelemetry.sdk.metrics.export import (
+    AggregationTemporality,
+    ExponentialHistogram as ExponentialHistogramType,
+    Gauge,
+    Histogram as HistogramType,
+    MetricExporter,
+    MetricsData,
+    Sum,
+)
+from opentelemetry.sdk.metrics.view import (
+    ExplicitBucketHistogramAggregation,
+    ExponentialBucketHistogramAggregation,
+)
+
+_logger = logging.getLogger(__name__)
+
+
+class OTLPMetricExporterMixin:
+ def _common_configuration(
+ self,
+        preferred_temporality: Optional[Dict[type, AggregationTemporality]] = None,
+ ) -> None:
+
+ instrument_class_temporality = {}
+
+ otel_exporter_otlp_metrics_temporality_preference = (
+ environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE,
+ "CUMULATIVE",
+ )
+ .upper()
+ .strip()
+ )
+
+ if otel_exporter_otlp_metrics_temporality_preference == "DELTA":
+ instrument_class_temporality = {
+ Counter: AggregationTemporality.DELTA,
+ UpDownCounter: AggregationTemporality.CUMULATIVE,
+ Histogram: AggregationTemporality.DELTA,
+ ObservableCounter: AggregationTemporality.DELTA,
+ ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
+ ObservableGauge: AggregationTemporality.CUMULATIVE,
+ }
+
+ elif otel_exporter_otlp_metrics_temporality_preference == "LOWMEMORY":
+ instrument_class_temporality = {
+ Counter: AggregationTemporality.DELTA,
+ UpDownCounter: AggregationTemporality.CUMULATIVE,
+ Histogram: AggregationTemporality.DELTA,
+ ObservableCounter: AggregationTemporality.CUMULATIVE,
+ ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
+ ObservableGauge: AggregationTemporality.CUMULATIVE,
+ }
+
+ else:
+ if otel_exporter_otlp_metrics_temporality_preference != (
+ "CUMULATIVE"
+ ):
+ _logger.warning(
+                    "Unrecognized OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE"
+ " value found: "
+ f"{otel_exporter_otlp_metrics_temporality_preference}, "
+ "using CUMULATIVE"
+ )
+ instrument_class_temporality = {
+ Counter: AggregationTemporality.CUMULATIVE,
+ UpDownCounter: AggregationTemporality.CUMULATIVE,
+ Histogram: AggregationTemporality.CUMULATIVE,
+ ObservableCounter: AggregationTemporality.CUMULATIVE,
+ ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
+ ObservableGauge: AggregationTemporality.CUMULATIVE,
+ }
+
+ instrument_class_temporality.update(preferred_temporality or {})
+
+ otel_exporter_otlp_metrics_default_histogram_aggregation = environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION,
+ "explicit_bucket_histogram",
+ )
+
+ if otel_exporter_otlp_metrics_default_histogram_aggregation == (
+ "base2_exponential_bucket_histogram"
+ ):
+
+ histogram_aggregation_type = ExponentialBucketHistogramAggregation
+
+ else:
+
+ if otel_exporter_otlp_metrics_default_histogram_aggregation != (
+ "explicit_bucket_histogram"
+ ):
+
+ _logger.warning(
+ (
+ "Invalid value for %s: %s, using explicit bucket "
+ "histogram aggregation"
+ ),
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION,
+ otel_exporter_otlp_metrics_default_histogram_aggregation,
+ )
+
+ histogram_aggregation_type = ExplicitBucketHistogramAggregation
+
+ MetricExporter.__init__(
+ self,
+ preferred_temporality=instrument_class_temporality,
+ preferred_aggregation={Histogram: histogram_aggregation_type()},
+ )
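+
+
+# Configuration sketch for the mixin above (illustrative): running with
+#
+#     OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=DELTA
+#
+# in the environment makes Counter, Histogram and ObservableCounter report
+# DELTA temporality while up-down counters and gauges stay CUMULATIVE, per
+# the mapping in _common_configuration.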
+
+
+def encode_metrics(data: MetricsData) -> ExportMetricsServiceRequest:
+ resource_metrics_dict = {}
+
+ for resource_metrics in data.resource_metrics:
+
+ resource = resource_metrics.resource
+
+ # It is safe to assume that each entry in data.resource_metrics is
+        # associated with a unique resource.
+ scope_metrics_dict = {}
+
+ resource_metrics_dict[resource] = scope_metrics_dict
+
+ for scope_metrics in resource_metrics.scope_metrics:
+
+ instrumentation_scope = scope_metrics.scope
+
+ # The SDK groups metrics in instrumentation scopes already so
+ # there is no need to check for existing instrumentation scopes
+ # here.
+ pb2_scope_metrics = pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name=instrumentation_scope.name,
+ version=instrumentation_scope.version,
+ )
+ )
+
+ scope_metrics_dict[instrumentation_scope] = pb2_scope_metrics
+
+ for metric in scope_metrics.metrics:
+ pb2_metric = pb2.Metric(
+ name=metric.name,
+ description=metric.description,
+ unit=metric.unit,
+ )
+
+ if isinstance(metric.data, Gauge):
+ for data_point in metric.data.data_points:
+ pt = pb2.NumberDataPoint(
+ attributes=_encode_attributes(
+ data_point.attributes
+ ),
+ time_unix_nano=data_point.time_unix_nano,
+ )
+ if isinstance(data_point.value, int):
+ pt.as_int = data_point.value
+ else:
+ pt.as_double = data_point.value
+ pb2_metric.gauge.data_points.append(pt)
+
+ elif isinstance(metric.data, HistogramType):
+ for data_point in metric.data.data_points:
+ pt = pb2.HistogramDataPoint(
+ attributes=_encode_attributes(
+ data_point.attributes
+ ),
+ time_unix_nano=data_point.time_unix_nano,
+ start_time_unix_nano=(
+ data_point.start_time_unix_nano
+ ),
+ count=data_point.count,
+ sum=data_point.sum,
+ bucket_counts=data_point.bucket_counts,
+ explicit_bounds=data_point.explicit_bounds,
+ max=data_point.max,
+ min=data_point.min,
+ )
+ pb2_metric.histogram.aggregation_temporality = (
+ metric.data.aggregation_temporality
+ )
+ pb2_metric.histogram.data_points.append(pt)
+
+ elif isinstance(metric.data, Sum):
+ for data_point in metric.data.data_points:
+ pt = pb2.NumberDataPoint(
+ attributes=_encode_attributes(
+ data_point.attributes
+ ),
+ start_time_unix_nano=(
+ data_point.start_time_unix_nano
+ ),
+ time_unix_nano=data_point.time_unix_nano,
+ )
+ if isinstance(data_point.value, int):
+ pt.as_int = data_point.value
+ else:
+ pt.as_double = data_point.value
+                        # Because sum is a protobuf message field it cannot
+                        # be assigned directly; its fields are set
+                        # individually instead of building a pb2.Sum and
+                        # assigning it in one step
+ pb2_metric.sum.aggregation_temporality = (
+ metric.data.aggregation_temporality
+ )
+ pb2_metric.sum.is_monotonic = metric.data.is_monotonic
+ pb2_metric.sum.data_points.append(pt)
+
+ elif isinstance(metric.data, ExponentialHistogramType):
+ for data_point in metric.data.data_points:
+
+ if data_point.positive.bucket_counts:
+ positive = pb2.ExponentialHistogramDataPoint.Buckets(
+ offset=data_point.positive.offset,
+ bucket_counts=data_point.positive.bucket_counts,
+ )
+ else:
+ positive = None
+
+ if data_point.negative.bucket_counts:
+ negative = pb2.ExponentialHistogramDataPoint.Buckets(
+ offset=data_point.negative.offset,
+ bucket_counts=data_point.negative.bucket_counts,
+ )
+ else:
+ negative = None
+
+ pt = pb2.ExponentialHistogramDataPoint(
+ attributes=_encode_attributes(
+ data_point.attributes
+ ),
+ time_unix_nano=data_point.time_unix_nano,
+ start_time_unix_nano=(
+ data_point.start_time_unix_nano
+ ),
+ count=data_point.count,
+ sum=data_point.sum,
+ scale=data_point.scale,
+ zero_count=data_point.zero_count,
+ positive=positive,
+ negative=negative,
+ flags=data_point.flags,
+ max=data_point.max,
+ min=data_point.min,
+ )
+ pb2_metric.exponential_histogram.aggregation_temporality = (
+ metric.data.aggregation_temporality
+ )
+ pb2_metric.exponential_histogram.data_points.append(pt)
+
+ else:
+ _logger.warning(
+ "unsupported data type %s",
+ metric.data.__class__.__name__,
+ )
+ continue
+
+ pb2_scope_metrics.metrics.append(pb2_metric)
+
+ resource_data = []
+ for (
+ sdk_resource,
+ scope_data,
+ ) in resource_metrics_dict.items():
+ resource_data.append(
+ pb2.ResourceMetrics(
+ resource=PB2Resource(
+ attributes=_encode_attributes(sdk_resource.attributes)
+ ),
+ scope_metrics=scope_data.values(),
+ )
+ )
+    return ExportMetricsServiceRequest(resource_metrics=resource_data)
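+
+
+# End-to-end sketch (illustrative; metrics_data is assumed to come from an
+# SDK metric reader):
+#
+#     request = encode_metrics(metrics_data)
+#     payload = request.SerializeToString()  # standard protobuf API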
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/trace_encoder/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/trace_encoder/__init__.py
new file mode 100644
index 0000000000..46cf628dd1
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/trace_encoder/__init__.py
@@ -0,0 +1,181 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+from collections import defaultdict
+from typing import List, Optional, Sequence
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _encode_trace_id,
+ _encode_span_id,
+ _encode_instrumentation_scope,
+ _encode_attributes,
+ _encode_resource,
+)
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
+ ExportTraceServiceRequest as PB2ExportTraceServiceRequest,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import (
+ ScopeSpans as PB2ScopeSpans,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import (
+ ResourceSpans as PB2ResourceSpans,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import Span as PB2SPan
+from opentelemetry.proto.trace.v1.trace_pb2 import Status as PB2Status
+from opentelemetry.sdk.trace import Event, ReadableSpan
+from opentelemetry.trace import Link
+from opentelemetry.trace import SpanKind
+from opentelemetry.trace.span import SpanContext, TraceState, Status
+
+# pylint: disable=E1101
+_SPAN_KIND_MAP = {
+ SpanKind.INTERNAL: PB2SPan.SpanKind.SPAN_KIND_INTERNAL,
+ SpanKind.SERVER: PB2SPan.SpanKind.SPAN_KIND_SERVER,
+ SpanKind.CLIENT: PB2SPan.SpanKind.SPAN_KIND_CLIENT,
+ SpanKind.PRODUCER: PB2SPan.SpanKind.SPAN_KIND_PRODUCER,
+ SpanKind.CONSUMER: PB2SPan.SpanKind.SPAN_KIND_CONSUMER,
+}
+
+_logger = logging.getLogger(__name__)
+
+
+def encode_spans(
+ sdk_spans: Sequence[ReadableSpan],
+) -> PB2ExportTraceServiceRequest:
+ return PB2ExportTraceServiceRequest(
+ resource_spans=_encode_resource_spans(sdk_spans)
+ )
+
+
+def _encode_resource_spans(
+ sdk_spans: Sequence[ReadableSpan],
+) -> List[PB2ResourceSpans]:
+ # We need to inspect the spans and group + structure them as:
+ #
+ # Resource
+    #     Instrumentation Scope
+ # Spans
+ #
+ # First loop organizes the SDK spans in this structure. Protobuf messages
+ # are not hashable so we stick with SDK data in this phase.
+ #
+ # Second loop encodes the data into Protobuf format.
+ #
+ sdk_resource_spans = defaultdict(lambda: defaultdict(list))
+
+ for sdk_span in sdk_spans:
+ sdk_resource = sdk_span.resource
+ sdk_instrumentation = sdk_span.instrumentation_scope or None
+ pb2_span = _encode_span(sdk_span)
+
+ sdk_resource_spans[sdk_resource][sdk_instrumentation].append(pb2_span)
+
+ pb2_resource_spans = []
+
+ for sdk_resource, sdk_instrumentations in sdk_resource_spans.items():
+ scope_spans = []
+ for sdk_instrumentation, pb2_spans in sdk_instrumentations.items():
+ scope_spans.append(
+ PB2ScopeSpans(
+ scope=(_encode_instrumentation_scope(sdk_instrumentation)),
+ spans=pb2_spans,
+ )
+ )
+ pb2_resource_spans.append(
+ PB2ResourceSpans(
+ resource=_encode_resource(sdk_resource),
+ scope_spans=scope_spans,
+ )
+ )
+
+ return pb2_resource_spans
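+
+
+# Concretely: four spans spread over two resources and two scopes (as in the
+# tests for this package) encode to two PB2ResourceSpans, each carrying one
+# PB2ScopeSpans per distinct scope, with spans kept in arrival order.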
+
+
+def _encode_span(sdk_span: ReadableSpan) -> PB2SPan:
+ span_context = sdk_span.get_span_context()
+ return PB2SPan(
+ trace_id=_encode_trace_id(span_context.trace_id),
+ span_id=_encode_span_id(span_context.span_id),
+ trace_state=_encode_trace_state(span_context.trace_state),
+ parent_span_id=_encode_parent_id(sdk_span.parent),
+ name=sdk_span.name,
+ kind=_SPAN_KIND_MAP[sdk_span.kind],
+ start_time_unix_nano=sdk_span.start_time,
+ end_time_unix_nano=sdk_span.end_time,
+ attributes=_encode_attributes(sdk_span.attributes),
+ events=_encode_events(sdk_span.events),
+ links=_encode_links(sdk_span.links),
+ status=_encode_status(sdk_span.status),
+ dropped_attributes_count=sdk_span.dropped_attributes,
+ dropped_events_count=sdk_span.dropped_events,
+ dropped_links_count=sdk_span.dropped_links,
+ )
+
+
+def _encode_events(
+ events: Sequence[Event],
+) -> Optional[List[PB2SPan.Event]]:
+ pb2_events = None
+ if events:
+ pb2_events = []
+ for event in events:
+ encoded_event = PB2SPan.Event(
+ name=event.name,
+ time_unix_nano=event.timestamp,
+ attributes=_encode_attributes(event.attributes),
+ dropped_attributes_count=event.attributes.dropped,
+ )
+ pb2_events.append(encoded_event)
+ return pb2_events
+
+
+def _encode_links(links: Sequence[Link]) -> Sequence[PB2SPan.Link]:
+ pb2_links = None
+ if links:
+ pb2_links = []
+ for link in links:
+ encoded_link = PB2SPan.Link(
+ trace_id=_encode_trace_id(link.context.trace_id),
+ span_id=_encode_span_id(link.context.span_id),
+ attributes=_encode_attributes(link.attributes),
+ dropped_attributes_count=link.attributes.dropped,
+ )
+ pb2_links.append(encoded_link)
+ return pb2_links
+
+
+def _encode_status(status: Status) -> Optional[PB2Status]:
+ pb2_status = None
+ if status is not None:
+ pb2_status = PB2Status(
+ code=status.status_code.value,
+ message=status.description,
+ )
+ return pb2_status
+
+
+def _encode_trace_state(trace_state: TraceState) -> Optional[str]:
+ pb2_trace_state = None
+ if trace_state is not None:
+        pb2_trace_state = ",".join(
+            f"{key}={value}" for key, value in trace_state.items()
+        )
+ return pb2_trace_state
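+
+
+# For example, a TraceState of [("congo", "t61rcWkgMzE"), ("rojo", "00f067aa")]
+# serializes to "congo=t61rcWkgMzE,rojo=00f067aa", i.e. the W3C tracestate
+# header form.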
+
+
+def _encode_parent_id(context: Optional[SpanContext]) -> Optional[bytes]:
+ if context:
+ return _encode_span_id(context.span_id)
+ return None
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_log_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_log_encoder.py
new file mode 100644
index 0000000000..f34ff8223c
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_log_encoder.py
@@ -0,0 +1,20 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.exporter.otlp.proto.common._internal._log_encoder import (
+ encode_logs,
+)
+
+__all__ = ["encode_logs"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/metrics_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/metrics_encoder.py
new file mode 100644
index 0000000000..14f8fc3f0d
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/metrics_encoder.py
@@ -0,0 +1,20 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.exporter.otlp.proto.common._internal.metrics_encoder import (
+ encode_metrics,
+)
+
+__all__ = ["encode_metrics"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/py.typed b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/trace_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/trace_encoder.py
new file mode 100644
index 0000000000..2af5765200
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/trace_encoder.py
@@ -0,0 +1,20 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.exporter.otlp.proto.common._internal.trace_encoder import (
+ encode_spans,
+)
+
+__all__ = ["encode_spans"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/version.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/tests/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_log_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_log_encoder.py
new file mode 100644
index 0000000000..1fdb1977ba
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_log_encoder.py
@@ -0,0 +1,305 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from typing import List, Tuple
+
+from opentelemetry._logs import SeverityNumber
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _encode_attributes,
+ _encode_span_id,
+ _encode_trace_id,
+ _encode_value,
+)
+from opentelemetry.exporter.otlp.proto.common._log_encoder import encode_logs
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2 import (
+ ExportLogsServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue
+from opentelemetry.proto.common.v1.common_pb2 import (
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue
+from opentelemetry.proto.logs.v1.logs_pb2 import LogRecord as PB2LogRecord
+from opentelemetry.proto.logs.v1.logs_pb2 import (
+ ResourceLogs as PB2ResourceLogs,
+)
+from opentelemetry.proto.logs.v1.logs_pb2 import ScopeLogs as PB2ScopeLogs
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as PB2Resource,
+)
+from opentelemetry.sdk._logs import LogData, LogLimits
+from opentelemetry.sdk._logs import LogRecord as SDKLogRecord
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.trace import TraceFlags
+
+
+class TestOTLPLogEncoder(unittest.TestCase):
+ def test_encode(self):
+ sdk_logs, expected_encoding = self.get_test_logs()
+ self.assertEqual(encode_logs(sdk_logs), expected_encoding)
+
+ def test_dropped_attributes_count(self):
+ sdk_logs = self._get_test_logs_dropped_attributes()
+ encoded_logs = encode_logs(sdk_logs)
+ self.assertTrue(hasattr(sdk_logs[0].log_record, "dropped_attributes"))
+ self.assertEqual(
+ # pylint:disable=no-member
+ encoded_logs.resource_logs[0]
+ .scope_logs[0]
+ .log_records[0]
+ .dropped_attributes_count,
+ 2,
+ )
+
+ @staticmethod
+ def _get_sdk_log_data() -> List[LogData]:
+ log1 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650195189786880,
+ trace_id=89564621134313219400156819398935297684,
+ span_id=1312458408527513268,
+ trace_flags=TraceFlags(0x01),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Do not go gentle into that good night. Rage, rage against the dying of the light",
+ resource=SDKResource({"first_resource": "value"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+
+ log2 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650249738562048,
+ trace_id=0,
+ span_id=0,
+ trace_flags=TraceFlags.DEFAULT,
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Cooper, this is no time for caution!",
+ resource=SDKResource({"second_resource": "CASE"}),
+ attributes={},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "second_name", "second_version"
+ ),
+ )
+
+ log3 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650427658989056,
+ trace_id=271615924622795969659406376515024083555,
+ span_id=4242561578944770265,
+ trace_flags=TraceFlags(0x01),
+ severity_text="DEBUG",
+ severity_number=SeverityNumber.DEBUG,
+ body="To our galaxy",
+ resource=SDKResource({"second_resource": "CASE"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=None,
+ )
+
+ log4 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650584292683008,
+ trace_id=212592107417388365804938480559624925555,
+ span_id=6077757853989569223,
+ trace_flags=TraceFlags(0x01),
+ severity_text="INFO",
+ severity_number=SeverityNumber.INFO,
+ body="Love is the one thing that transcends time and space",
+ resource=SDKResource({"first_resource": "value"}),
+ attributes={"filename": "model.py", "func_name": "run_method"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "another_name", "another_version"
+ ),
+ )
+
+ return [log1, log2, log3, log4]
+
+ def get_test_logs(
+ self,
+    ) -> Tuple[List[LogData], ExportLogsServiceRequest]:
+ sdk_logs = self._get_sdk_log_data()
+
+ pb2_service_request = ExportLogsServiceRequest(
+ resource_logs=[
+ PB2ResourceLogs(
+ resource=PB2Resource(
+ attributes=[
+ PB2KeyValue(
+ key="first_resource",
+ value=PB2AnyValue(string_value="value"),
+ )
+ ]
+ ),
+ scope_logs=[
+ PB2ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ log_records=[
+ PB2LogRecord(
+ time_unix_nano=1644650195189786880,
+ trace_id=_encode_trace_id(
+ 89564621134313219400156819398935297684
+ ),
+ span_id=_encode_span_id(
+ 1312458408527513268
+ ),
+ flags=int(TraceFlags(0x01)),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN.value,
+ body=_encode_value(
+ "Do not go gentle into that good night. Rage, rage against the dying of the light"
+ ),
+ attributes=_encode_attributes(
+ {"a": 1, "b": "c"}
+ ),
+ )
+ ],
+ ),
+ PB2ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="another_name",
+ version="another_version",
+ ),
+ log_records=[
+ PB2LogRecord(
+ time_unix_nano=1644650584292683008,
+ trace_id=_encode_trace_id(
+ 212592107417388365804938480559624925555
+ ),
+ span_id=_encode_span_id(
+ 6077757853989569223
+ ),
+ flags=int(TraceFlags(0x01)),
+ severity_text="INFO",
+ severity_number=SeverityNumber.INFO.value,
+ body=_encode_value(
+ "Love is the one thing that transcends time and space"
+ ),
+ attributes=_encode_attributes(
+ {
+ "filename": "model.py",
+ "func_name": "run_method",
+ }
+ ),
+ )
+ ],
+ ),
+ ],
+ ),
+ PB2ResourceLogs(
+ resource=PB2Resource(
+ attributes=[
+ PB2KeyValue(
+ key="second_resource",
+ value=PB2AnyValue(string_value="CASE"),
+ )
+ ]
+ ),
+ scope_logs=[
+ PB2ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="second_name",
+ version="second_version",
+ ),
+ log_records=[
+ PB2LogRecord(
+ time_unix_nano=1644650249738562048,
+ trace_id=_encode_trace_id(0),
+ span_id=_encode_span_id(0),
+ flags=int(TraceFlags.DEFAULT),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN.value,
+ body=_encode_value(
+ "Cooper, this is no time for caution!"
+ ),
+ attributes={},
+ ),
+ ],
+ ),
+ PB2ScopeLogs(
+ scope=PB2InstrumentationScope(),
+ log_records=[
+ PB2LogRecord(
+ time_unix_nano=1644650427658989056,
+ trace_id=_encode_trace_id(
+ 271615924622795969659406376515024083555
+ ),
+ span_id=_encode_span_id(
+ 4242561578944770265
+ ),
+ flags=int(TraceFlags(0x01)),
+ severity_text="DEBUG",
+ severity_number=SeverityNumber.DEBUG.value,
+ body=_encode_value("To our galaxy"),
+ attributes=_encode_attributes(
+ {"a": 1, "b": "c"}
+ ),
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ )
+
+ return sdk_logs, pb2_service_request
+
+ @staticmethod
+ def _get_test_logs_dropped_attributes() -> List[LogData]:
+ log1 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650195189786880,
+ trace_id=89564621134313219400156819398935297684,
+ span_id=1312458408527513268,
+ trace_flags=TraceFlags(0x01),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Do not go gentle into that good night. Rage, rage against the dying of the light",
+ resource=SDKResource({"first_resource": "value"}),
+ attributes={"a": 1, "b": "c", "user_id": "B121092"},
+ limits=LogLimits(max_attributes=1),
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+
+ log2 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650249738562048,
+ trace_id=0,
+ span_id=0,
+ trace_flags=TraceFlags.DEFAULT,
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Cooper, this is no time for caution!",
+ resource=SDKResource({"second_resource": "CASE"}),
+ attributes={},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "second_name", "second_version"
+ ),
+ )
+
+ return [log1, log2]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_metrics_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_metrics_encoder.py
new file mode 100644
index 0000000000..69e7cda39f
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_metrics_encoder.py
@@ -0,0 +1,808 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=protected-access
+import unittest
+
+from opentelemetry.exporter.otlp.proto.common.metrics_encoder import (
+ encode_metrics,
+)
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
+ ExportMetricsServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import (
+ AnyValue,
+ InstrumentationScope,
+ KeyValue,
+)
+from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as OTLPResource,
+)
+from opentelemetry.sdk.metrics.export import AggregationTemporality, Buckets
+from opentelemetry.sdk.metrics.export import (
+ ExponentialHistogram as ExponentialHistogramType,
+)
+from opentelemetry.sdk.metrics.export import ExponentialHistogramDataPoint
+from opentelemetry.sdk.metrics.export import Histogram as HistogramType
+from opentelemetry.sdk.metrics.export import (
+ HistogramDataPoint,
+ Metric,
+ MetricsData,
+ ResourceMetrics,
+ ScopeMetrics,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import (
+ InstrumentationScope as SDKInstrumentationScope,
+)
+from opentelemetry.test.metrictestutil import _generate_gauge, _generate_sum
+
+
+class TestOTLPMetricsEncoder(unittest.TestCase):
+ histogram = Metric(
+ name="histogram",
+ description="foo",
+ unit="s",
+ data=HistogramType(
+ data_points=[
+ HistogramDataPoint(
+ attributes={"a": 1, "b": True},
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ min=8,
+ max=18,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+
+ def test_encode_sum_int(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_sum("sum_int", 33)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="sum_int",
+ unit="s",
+ description="foo",
+ sum=pb2.Sum(
+ data_points=[
+ pb2.NumberDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946015139533244,
+ time_unix_nano=1641946016139533244,
+ as_int=33,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.CUMULATIVE,
+ is_monotonic=True,
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_sum_double(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_sum("sum_double", 2.98)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="sum_double",
+ unit="s",
+ description="foo",
+ sum=pb2.Sum(
+ data_points=[
+ pb2.NumberDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946015139533244,
+ time_unix_nano=1641946016139533244,
+ as_double=2.98,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.CUMULATIVE,
+ is_monotonic=True,
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_gauge_int(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_gauge("gauge_int", 9000)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="gauge_int",
+ unit="s",
+ description="foo",
+ gauge=pb2.Gauge(
+ data_points=[
+ pb2.NumberDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ time_unix_nano=1641946016139533244,
+ as_int=9000,
+ )
+ ],
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_gauge_double(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_gauge("gauge_double", 52.028)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="gauge_double",
+ unit="s",
+ description="foo",
+ gauge=pb2.Gauge(
+ data_points=[
+ pb2.NumberDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ time_unix_nano=1641946016139533244,
+ as_double=52.028,
+ )
+ ],
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_histogram(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[self.histogram],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="histogram",
+ unit="s",
+ description="foo",
+ histogram=pb2.Histogram(
+ data_points=[
+ pb2.HistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ exemplars=[],
+ max=18.0,
+ min=8.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_multiple_scope_histogram(self):
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[self.histogram, self.histogram],
+ schema_url="instrumentation_scope_schema_url",
+ ),
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="second_name",
+ version="second_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[self.histogram],
+ schema_url="instrumentation_scope_schema_url",
+ ),
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="third_name",
+ version="third_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[self.histogram],
+ schema_url="instrumentation_scope_schema_url",
+ ),
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="histogram",
+ unit="s",
+ description="foo",
+ histogram=pb2.Histogram(
+ data_points=[
+ pb2.HistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ exemplars=[],
+ max=18.0,
+ min=8.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ ),
+ pb2.Metric(
+ name="histogram",
+ unit="s",
+ description="foo",
+ histogram=pb2.Histogram(
+ data_points=[
+ pb2.HistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ exemplars=[],
+ max=18.0,
+ min=8.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ ),
+ ],
+ ),
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="second_name", version="second_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="histogram",
+ unit="s",
+ description="foo",
+ histogram=pb2.Histogram(
+ data_points=[
+ pb2.HistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ exemplars=[],
+ max=18.0,
+ min=8.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+ ],
+ ),
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="third_name", version="third_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="histogram",
+ unit="s",
+ description="foo",
+ histogram=pb2.Histogram(
+ data_points=[
+ pb2.HistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=5,
+ sum=67,
+ bucket_counts=[1, 4],
+ explicit_bounds=[10.0, 20.0],
+ exemplars=[],
+ max=18.0,
+ min=8.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+ ],
+ ),
+ ],
+ )
+ ]
+ )
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
+
+ def test_encode_exponential_histogram(self):
+ exponential_histogram = Metric(
+ name="exponential_histogram",
+ description="description",
+ unit="unit",
+ data=ExponentialHistogramType(
+ data_points=[
+ ExponentialHistogramDataPoint(
+ attributes={"a": 1, "b": True},
+ start_time_unix_nano=0,
+ time_unix_nano=1,
+ count=2,
+ sum=3,
+ scale=4,
+ zero_count=5,
+ positive=Buckets(offset=6, bucket_counts=[7, 8]),
+ negative=Buckets(offset=9, bucket_counts=[10, 11]),
+ flags=12,
+ min=13.0,
+ max=14.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[exponential_histogram],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ expected = ExportMetricsServiceRequest(
+ resource_metrics=[
+ pb2.ResourceMetrics(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_metrics=[
+ pb2.ScopeMetrics(
+ scope=InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ metrics=[
+ pb2.Metric(
+ name="exponential_histogram",
+ unit="unit",
+ description="description",
+ exponential_histogram=pb2.ExponentialHistogram(
+ data_points=[
+ pb2.ExponentialHistogramDataPoint(
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ start_time_unix_nano=0,
+ time_unix_nano=1,
+ count=2,
+ sum=3,
+ scale=4,
+ zero_count=5,
+ positive=pb2.ExponentialHistogramDataPoint.Buckets(
+ offset=6,
+ bucket_counts=[7, 8],
+ ),
+ negative=pb2.ExponentialHistogramDataPoint.Buckets(
+ offset=9,
+ bucket_counts=[10, 11],
+ ),
+ flags=12,
+ exemplars=[],
+ min=13.0,
+ max=14.0,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+ ],
+ )
+ ],
+ )
+ ]
+ )
+ # pylint: disable=protected-access
+ actual = encode_metrics(metrics_data)
+ self.assertEqual(expected, actual)
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_trace_encoder.py b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_trace_encoder.py
new file mode 100644
index 0000000000..c0a05483f1
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/tests/test_trace_encoder.py
@@ -0,0 +1,378 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=protected-access
+
+import unittest
+from typing import List, Tuple
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _encode_span_id,
+ _encode_trace_id,
+)
+from opentelemetry.exporter.otlp.proto.common._internal.trace_encoder import (
+ _SPAN_KIND_MAP,
+ _encode_status,
+)
+from opentelemetry.exporter.otlp.proto.common.trace_encoder import encode_spans
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
+ ExportTraceServiceRequest as PB2ExportTraceServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue
+from opentelemetry.proto.common.v1.common_pb2 import (
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as PB2Resource,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import (
+ ResourceSpans as PB2ResourceSpans,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ScopeSpans as PB2ScopeSpans
+from opentelemetry.proto.trace.v1.trace_pb2 import Span as PB2SPan
+from opentelemetry.proto.trace.v1.trace_pb2 import Status as PB2Status
+from opentelemetry.sdk.trace import Event as SDKEvent
+from opentelemetry.sdk.trace import Resource as SDKResource
+from opentelemetry.sdk.trace import SpanContext as SDKSpanContext
+from opentelemetry.sdk.trace import _Span as SDKSpan
+from opentelemetry.sdk.util.instrumentation import (
+ InstrumentationScope as SDKInstrumentationScope,
+)
+from opentelemetry.trace import Link as SDKLink
+from opentelemetry.trace import SpanKind as SDKSpanKind
+from opentelemetry.trace import TraceFlags as SDKTraceFlags
+from opentelemetry.trace.status import Status as SDKStatus
+from opentelemetry.trace.status import StatusCode as SDKStatusCode
+
+
+class TestOTLPTraceEncoder(unittest.TestCase):
+ def test_encode_spans(self):
+ otel_spans, expected_encoding = self.get_exhaustive_test_spans()
+ self.assertEqual(encode_spans(otel_spans), expected_encoding)
+
+ @staticmethod
+ def get_exhaustive_otel_span_list() -> List[SDKSpan]:
+ trace_id = 0x3E0C63257DE34C926F9EFCD03927272E
+
+ base_time = 683647322 * 10**9 # in ns
+ start_times = (
+ base_time,
+ base_time + 150 * 10**6,
+ base_time + 300 * 10**6,
+ base_time + 400 * 10**6,
+ )
+ end_times = (
+ start_times[0] + (50 * 10**6),
+ start_times[1] + (100 * 10**6),
+ start_times[2] + (200 * 10**6),
+ start_times[3] + (300 * 10**6),
+ )
+
+ parent_span_context = SDKSpanContext(
+ trace_id, 0x1111111111111111, is_remote=False
+ )
+
+ other_context = SDKSpanContext(
+ trace_id, 0x2222222222222222, is_remote=False
+ )
+
+ span1 = SDKSpan(
+ name="test-span-1",
+ context=SDKSpanContext(
+ trace_id,
+ 0x34BF92DEEFC58C92,
+ is_remote=False,
+ trace_flags=SDKTraceFlags(SDKTraceFlags.SAMPLED),
+ ),
+ parent=parent_span_context,
+ events=(
+ SDKEvent(
+ name="event0",
+ timestamp=base_time + 50 * 10**6,
+ attributes={
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ },
+ ),
+ ),
+ links=(
+ SDKLink(context=other_context, attributes={"key_bool": True}),
+ ),
+ resource=SDKResource({}),
+ )
+ span1.start(start_time=start_times[0])
+ span1.set_attribute("key_bool", False)
+ span1.set_attribute("key_string", "hello_world")
+ span1.set_attribute("key_float", 111.22)
+ span1.set_status(SDKStatus(SDKStatusCode.ERROR, "Example description"))
+ span1.end(end_time=end_times[0])
+
+ span2 = SDKSpan(
+ name="test-span-2",
+ context=parent_span_context,
+ parent=None,
+ resource=SDKResource(attributes={"key_resource": "some_resource"}),
+ )
+ span2.start(start_time=start_times[1])
+ span2.end(end_time=end_times[1])
+
+ span3 = SDKSpan(
+ name="test-span-3",
+ context=other_context,
+ parent=None,
+ resource=SDKResource(attributes={"key_resource": "some_resource"}),
+ )
+ span3.start(start_time=start_times[2])
+ span3.set_attribute("key_string", "hello_world")
+ span3.end(end_time=end_times[2])
+
+ span4 = SDKSpan(
+ name="test-span-4",
+ context=other_context,
+ parent=None,
+ resource=SDKResource({}),
+ instrumentation_scope=SDKInstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+ span4.start(start_time=start_times[3])
+ span4.end(end_time=end_times[3])
+
+ return [span1, span2, span3, span4]
+
+ def get_exhaustive_test_spans(
+ self,
+ ) -> Tuple[List[SDKSpan], PB2ExportTraceServiceRequest]:
+ otel_spans = self.get_exhaustive_otel_span_list()
+ trace_id = _encode_trace_id(otel_spans[0].context.trace_id)
+ span_kind = _SPAN_KIND_MAP[SDKSpanKind.INTERNAL]
+
+ pb2_service_request = PB2ExportTraceServiceRequest(
+ resource_spans=[
+ PB2ResourceSpans(
+ resource=PB2Resource(),
+ scope_spans=[
+ PB2ScopeSpans(
+ scope=PB2InstrumentationScope(),
+ spans=[
+ PB2SPan(
+ trace_id=trace_id,
+ span_id=_encode_span_id(
+ otel_spans[0].context.span_id
+ ),
+ trace_state=None,
+ parent_span_id=_encode_span_id(
+ otel_spans[0].parent.span_id
+ ),
+ name=otel_spans[0].name,
+ kind=span_kind,
+ start_time_unix_nano=otel_spans[
+ 0
+ ].start_time,
+ end_time_unix_nano=otel_spans[0].end_time,
+ attributes=[
+ PB2KeyValue(
+ key="key_bool",
+ value=PB2AnyValue(
+ bool_value=False
+ ),
+ ),
+ PB2KeyValue(
+ key="key_string",
+ value=PB2AnyValue(
+ string_value="hello_world"
+ ),
+ ),
+ PB2KeyValue(
+ key="key_float",
+ value=PB2AnyValue(
+ double_value=111.22
+ ),
+ ),
+ ],
+ events=[
+ PB2SPan.Event(
+ name="event0",
+ time_unix_nano=otel_spans[0]
+ .events[0]
+ .timestamp,
+ attributes=[
+ PB2KeyValue(
+ key="annotation_bool",
+ value=PB2AnyValue(
+ bool_value=True
+ ),
+ ),
+ PB2KeyValue(
+ key="annotation_string",
+ value=PB2AnyValue(
+ string_value="annotation_test"
+ ),
+ ),
+ PB2KeyValue(
+ key="key_float",
+ value=PB2AnyValue(
+ double_value=0.3
+ ),
+ ),
+ ],
+ )
+ ],
+ links=[
+ PB2SPan.Link(
+ trace_id=_encode_trace_id(
+ otel_spans[0]
+ .links[0]
+ .context.trace_id
+ ),
+ span_id=_encode_span_id(
+ otel_spans[0]
+ .links[0]
+ .context.span_id
+ ),
+ attributes=[
+ PB2KeyValue(
+ key="key_bool",
+ value=PB2AnyValue(
+ bool_value=True
+ ),
+ ),
+ ],
+ )
+ ],
+ status=PB2Status(
+ code=SDKStatusCode.ERROR.value,
+ message="Example description",
+ ),
+ )
+ ],
+ ),
+ PB2ScopeSpans(
+ scope=PB2InstrumentationScope(
+ name="name",
+ version="version",
+ ),
+ spans=[
+ PB2SPan(
+ trace_id=trace_id,
+ span_id=_encode_span_id(
+ otel_spans[3].context.span_id
+ ),
+ trace_state=None,
+ parent_span_id=None,
+ name=otel_spans[3].name,
+ kind=span_kind,
+ start_time_unix_nano=otel_spans[
+ 3
+ ].start_time,
+ end_time_unix_nano=otel_spans[3].end_time,
+ attributes=None,
+ events=None,
+ links=None,
+ status={},
+ )
+ ],
+ ),
+ ],
+ ),
+ PB2ResourceSpans(
+ resource=PB2Resource(
+ attributes=[
+ PB2KeyValue(
+ key="key_resource",
+ value=PB2AnyValue(
+ string_value="some_resource"
+ ),
+ )
+ ]
+ ),
+ scope_spans=[
+ PB2ScopeSpans(
+ scope=PB2InstrumentationScope(),
+ spans=[
+ PB2SPan(
+ trace_id=trace_id,
+ span_id=_encode_span_id(
+ otel_spans[1].context.span_id
+ ),
+ trace_state=None,
+ parent_span_id=None,
+ name=otel_spans[1].name,
+ kind=span_kind,
+ start_time_unix_nano=otel_spans[
+ 1
+ ].start_time,
+ end_time_unix_nano=otel_spans[1].end_time,
+ attributes=None,
+ events=None,
+ links=None,
+ status={},
+ ),
+ PB2SPan(
+ trace_id=trace_id,
+ span_id=_encode_span_id(
+ otel_spans[2].context.span_id
+ ),
+ trace_state=None,
+ parent_span_id=None,
+ name=otel_spans[2].name,
+ kind=span_kind,
+ start_time_unix_nano=otel_spans[
+ 2
+ ].start_time,
+ end_time_unix_nano=otel_spans[2].end_time,
+ attributes=[
+ PB2KeyValue(
+ key="key_string",
+ value=PB2AnyValue(
+ string_value="hello_world"
+ ),
+ ),
+ ],
+ events=None,
+ links=None,
+ status={},
+ ),
+ ],
+ )
+ ],
+ ),
+ ]
+ )
+
+ return otel_spans, pb2_service_request
+
+ def test_encode_status_code_translations(self):
+ self.assertEqual(
+ _encode_status(SDKStatus(status_code=SDKStatusCode.UNSET)),
+ PB2Status(
+ code=SDKStatusCode.UNSET.value,
+ ),
+ )
+
+ self.assertEqual(
+ _encode_status(SDKStatus(status_code=SDKStatusCode.OK)),
+ PB2Status(
+ code=SDKStatusCode.OK.value,
+ ),
+ )
+
+ self.assertEqual(
+ _encode_status(SDKStatus(status_code=SDKStatusCode.ERROR)),
+ PB2Status(
+ code=SDKStatusCode.ERROR.value,
+ ),
+ )
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/LICENSE b/exporter/opentelemetry-exporter-otlp-proto-grpc/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/README.rst b/exporter/opentelemetry-exporter-otlp-proto-grpc/README.rst
new file mode 100644
index 0000000000..279e1aed21
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/README.rst
@@ -0,0 +1,25 @@
+OpenTelemetry Collector Protobuf over gRPC Exporter
+===================================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-otlp-proto-grpc.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-otlp-proto-grpc/
+
+This library allows exporting data to the OpenTelemetry Collector using the OpenTelemetry Protocol (OTLP) with Protobuf over gRPC.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-otlp-proto-grpc
+
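+Usage
+-----
+
+A minimal wiring sketch (it assumes a Collector listening on the default
+gRPC endpoint ``localhost:4317``; adjust the endpoint and security
+settings for your setup):
+
+.. code:: python
+
+    from opentelemetry import trace
+    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # Batch spans and ship them to the collector over gRPC.
+    provider = TracerProvider()
+    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(insecure=True)))
+    trace.set_tracer_provider(provider)
+
+    with trace.get_tracer(__name__).start_as_current_span("example-span"):
+        pass
+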
+
+References
+----------
+
+* `OpenTelemetry Collector Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html>`_
+* `OpenTelemetry Collector <https://github.com/open-telemetry/opentelemetry-collector/>`_
+* `OpenTelemetry <https://opentelemetry.io/>`_
+* `OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md>`_
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/pyproject.toml b/exporter/opentelemetry-exporter-otlp-proto-grpc/pyproject.toml
new file mode 100644
index 0000000000..bdb152021a
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/pyproject.toml
@@ -0,0 +1,66 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-otlp-proto-grpc"
+dynamic = ["version"]
+description = "OpenTelemetry Collector Protobuf over gRPC Exporter"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "Deprecated >= 1.2.6",
+ "backoff >= 1.10.0, < 2.0.0; python_version<'3.7'",
+ "backoff >= 1.10.0, < 3.0.0; python_version>='3.7'",
+ "googleapis-common-protos ~= 1.52",
+ "grpcio >= 1.0.0, < 2.0.0",
+ "opentelemetry-api ~= 1.15",
+ "opentelemetry-proto == 1.23.0.dev",
+ "opentelemetry-sdk ~= 1.23.0.dev",
+ "opentelemetry-exporter-otlp-proto-common == 1.23.0.dev",
+]
+
+[project.optional-dependencies]
+test = [
+ "pytest-grpc",
+]
+
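+# These entry points let the OpenTelemetry SDK discover the exporters by
+# name, e.g. OTEL_TRACES_EXPORTER=otlp_proto_grpc selects the span exporter.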
+[project.entry-points.opentelemetry_logs_exporter]
+otlp_proto_grpc = "opentelemetry.exporter.otlp.proto.grpc._log_exporter:OTLPLogExporter"
+
+[project.entry-points.opentelemetry_metrics_exporter]
+otlp_proto_grpc = "opentelemetry.exporter.otlp.proto.grpc.metric_exporter:OTLPMetricExporter"
+
+[project.entry-points.opentelemetry_traces_exporter]
+otlp_proto_grpc = "opentelemetry.exporter.otlp.proto.grpc.trace_exporter:OTLPSpanExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-otlp-proto-grpc"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/otlp/proto/grpc/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/__init__.py
new file mode 100644
index 0000000000..07553d69d0
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/__init__.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+This library allows exporting tracing data to an OTLP collector.
+
+Usage
+-----
+
+The **OTLP Span Exporter** allows exporting `OpenTelemetry`_ traces to the
+`OTLP`_ collector.
+
+You can configure the exporter with the following environment variables:
+
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_TIMEOUT`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_PROTOCOL`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_HEADERS`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_COMPRESSION`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE`
+- :envvar:`OTEL_EXPORTER_OTLP_TIMEOUT`
+- :envvar:`OTEL_EXPORTER_OTLP_PROTOCOL`
+- :envvar:`OTEL_EXPORTER_OTLP_HEADERS`
+- :envvar:`OTEL_EXPORTER_OTLP_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION`
+- :envvar:`OTEL_EXPORTER_OTLP_CERTIFICATE`
+
+.. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+
+.. code:: python
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+ from opentelemetry.sdk.resources import Resource
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # A Resource is required by some backends, e.g. Jaeger.
+    # If no resource is set, traces will not appear in Jaeger.
+ resource = Resource(attributes={
+ "service.name": "service"
+ })
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ tracer = trace.get_tracer(__name__)
+
+ otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
+
+ span_processor = BatchSpanProcessor(otlp_exporter)
+
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ print("Hello world!")
+
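+Equivalently (a sketch; the values shown are illustrative), the exporter
+can be configured through the environment instead of constructor arguments:
+
+.. code:: python
+
+    import os
+
+    # Same effect as passing endpoint/insecure to the constructor above.
+    os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = "http://localhost:4317"
+    os.environ["OTEL_EXPORTER_OTLP_TRACES_INSECURE"] = "true"
+
+    otlp_exporter = OTLPSpanExporter()
+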
+API
+---
+"""
+from .version import __version__
+
+_USER_AGENT_HEADER_VALUE = "OTel-OTLP-Exporter-Python/" + __version__
+_OTLP_GRPC_HEADERS = [("user-agent", _USER_AGENT_HEADER_VALUE)]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_log_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_log_exporter/__init__.py
new file mode 100644
index 0000000000..3a87ef1223
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/_log_exporter/__init__.py
@@ -0,0 +1,119 @@
+# Copyright The OpenTelemetry Authors
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import environ
+from typing import Dict, Optional, Tuple, Union, Sequence
+from typing import Sequence as TypingSequence
+from grpc import ChannelCredentials, Compression
+
+from opentelemetry.exporter.otlp.proto.common._log_encoder import encode_logs
+from opentelemetry.exporter.otlp.proto.grpc.exporter import (
+ OTLPExporterMixin,
+ _get_credentials,
+ environ_to_compression,
+)
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2 import (
+ ExportLogsServiceRequest,
+)
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2_grpc import (
+ LogsServiceStub,
+)
+from opentelemetry.sdk._logs import LogRecord as SDKLogRecord
+from opentelemetry.sdk._logs import LogData
+from opentelemetry.sdk._logs.export import LogExporter, LogExportResult
+
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS,
+ OTEL_EXPORTER_OTLP_LOGS_INSECURE,
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
+)
+
+
+class OTLPLogExporter(
+ LogExporter,
+ OTLPExporterMixin[SDKLogRecord, ExportLogsServiceRequest, LogExportResult],
+):
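+    """OTLP log exporter
+
+    Args:
+        endpoint: OpenTelemetry Collector receiver endpoint
+        insecure: Connection type
+        credentials: ChannelCredentials object for server authentication
+        headers: Headers to send when exporting
+        timeout: Backend request timeout in seconds
+        compression: gRPC compression method to use
+    """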
+
+ _result = LogExportResult
+ _stub = LogsServiceStub
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ insecure: Optional[bool] = None,
+ credentials: Optional[ChannelCredentials] = None,
+ headers: Optional[
+ Union[TypingSequence[Tuple[str, str]], Dict[str, str], str]
+ ] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ ):
+ if insecure is None:
+ insecure = environ.get(OTEL_EXPORTER_OTLP_LOGS_INSECURE)
+ if insecure is not None:
+ insecure = insecure.lower() == "true"
+
+ if (
+ not insecure
+ and environ.get(OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE) is not None
+ ):
+ credentials = _get_credentials(
+ credentials, OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE
+ )
+
+ environ_timeout = environ.get(OTEL_EXPORTER_OTLP_LOGS_TIMEOUT)
+ environ_timeout = (
+ int(environ_timeout) if environ_timeout is not None else None
+ )
+
+ compression = (
+ environ_to_compression(OTEL_EXPORTER_OTLP_LOGS_COMPRESSION)
+ if compression is None
+ else compression
+ )
+ endpoint = endpoint or environ.get(OTEL_EXPORTER_OTLP_LOGS_ENDPOINT)
+
+ headers = headers or environ.get(OTEL_EXPORTER_OTLP_LOGS_HEADERS)
+
+ super().__init__(
+ **{
+ "endpoint": endpoint,
+ "insecure": insecure,
+ "credentials": credentials,
+ "headers": headers,
+ "timeout": timeout or environ_timeout,
+ "compression": compression,
+ }
+ )
+
+ def _translate_data(
+ self, data: Sequence[LogData]
+ ) -> ExportLogsServiceRequest:
+ return encode_logs(data)
+
+ def export(self, batch: Sequence[LogData]) -> LogExportResult:
+ return self._export(batch)
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ OTLPExporterMixin.shutdown(self, timeout_millis=timeout_millis)
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
+
+ @property
+ def _exporting(self) -> str:
+ return "logs"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py
new file mode 100644
index 0000000000..b422682828
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/exporter.py
@@ -0,0 +1,337 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""OTLP Exporter"""
+
+import threading
+from abc import ABC, abstractmethod
+from collections.abc import Sequence # noqa: F401
+from logging import getLogger
+from os import environ
+from time import sleep
+from typing import ( # noqa: F401
+ Any,
+ Callable,
+ Dict,
+ Generic,
+ List,
+ Optional,
+ Tuple,
+ Union,
+)
+from typing import Sequence as TypingSequence
+from typing import TypeVar
+from urllib.parse import urlparse
+
+from deprecated import deprecated
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _get_resource_data,
+ _create_exp_backoff_generator,
+)
+from google.rpc.error_details_pb2 import RetryInfo
+from grpc import (
+ ChannelCredentials,
+ Compression,
+ RpcError,
+ StatusCode,
+ insecure_channel,
+ secure_channel,
+ ssl_channel_credentials,
+)
+
+from opentelemetry.exporter.otlp.proto.grpc import (
+ _OTLP_GRPC_HEADERS,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ AnyValue,
+ ArrayValue,
+ KeyValue,
+)
+from opentelemetry.proto.resource.v1.resource_pb2 import Resource # noqa: F401
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_INSECURE,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+)
+from opentelemetry.sdk.metrics.export import MetricsData
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.trace import ReadableSpan
+from opentelemetry.util.re import parse_env_headers
+
+logger = getLogger(__name__)
+SDKDataT = TypeVar("SDKDataT")
+ResourceDataT = TypeVar("ResourceDataT")
+TypingResourceT = TypeVar("TypingResourceT")
+ExportServiceRequestT = TypeVar("ExportServiceRequestT")
+ExportResultT = TypeVar("ExportResultT")
+
+_ENVIRON_TO_COMPRESSION = {
+ None: None,
+ "gzip": Compression.Gzip,
+}
+
+
+class InvalidCompressionValueException(Exception):
+ def __init__(self, environ_key: str, environ_value: str):
+ super().__init__(
+ 'Invalid value "{}" for compression envvar {}'.format(
+ environ_value, environ_key
+ )
+ )
+
+
+def environ_to_compression(environ_key: str) -> Optional[Compression]:
+ environ_value = (
+ environ[environ_key].lower().strip()
+ if environ_key in environ
+ else None
+ )
+ if environ_value not in _ENVIRON_TO_COMPRESSION:
+ raise InvalidCompressionValueException(environ_key, environ_value)
+ return _ENVIRON_TO_COMPRESSION[environ_value]
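+
+
+# For example (a sketch): with OTEL_EXPORTER_OTLP_COMPRESSION=gzip set,
+# environ_to_compression("OTEL_EXPORTER_OTLP_COMPRESSION") returns
+# Compression.Gzip; an unset variable yields None, and any other value
+# raises InvalidCompressionValueException.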
+
+
+@deprecated(
+ version="1.18.0",
+ reason="Use one of the encoders from opentelemetry-exporter-otlp-proto-common instead",
+)
+def get_resource_data(
+ sdk_resource_scope_data: Dict[SDKResource, ResourceDataT],
+ resource_class: Callable[..., TypingResourceT],
+ name: str,
+) -> List[TypingResourceT]:
+ return _get_resource_data(sdk_resource_scope_data, resource_class, name)
+
+
+def _load_credential_from_file(filepath) -> ChannelCredentials:
+ try:
+ with open(filepath, "rb") as creds_file:
+ credential = creds_file.read()
+ return ssl_channel_credentials(credential)
+ except FileNotFoundError:
+ logger.exception("Failed to read credential file")
+ return None
+
+
+def _get_credentials(creds, environ_key):
+ if creds is not None:
+ return creds
+ creds_env = environ.get(environ_key)
+ if creds_env:
+ return _load_credential_from_file(creds_env)
+ return ssl_channel_credentials()
+
+
+# pylint: disable=no-member
+class OTLPExporterMixin(
+ ABC, Generic[SDKDataT, ExportServiceRequestT, ExportResultT]
+):
+ """OTLP span exporter
+
+ Args:
+ endpoint: OpenTelemetry Collector receiver endpoint
+ insecure: Connection type
+ credentials: ChannelCredentials object for server authentication
+ headers: Headers to send when exporting
+ timeout: Backend request timeout in seconds
+ compression: gRPC compression method to use
+ """
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ insecure: Optional[bool] = None,
+ credentials: Optional[ChannelCredentials] = None,
+ headers: Optional[
+ Union[TypingSequence[Tuple[str, str]], Dict[str, str], str]
+ ] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ ):
+ super().__init__()
+
+ self._endpoint = endpoint or environ.get(
+ OTEL_EXPORTER_OTLP_ENDPOINT, "http://localhost:4317"
+ )
+
+ parsed_url = urlparse(self._endpoint)
+
+ if parsed_url.scheme == "https":
+ insecure = False
+ if insecure is None:
+ insecure = environ.get(OTEL_EXPORTER_OTLP_INSECURE)
+ if insecure is not None:
+ insecure = insecure.lower() == "true"
+ else:
+ if parsed_url.scheme == "http":
+ insecure = True
+ else:
+ insecure = False
+
+ if parsed_url.netloc:
+ self._endpoint = parsed_url.netloc
+
+ self._headers = headers or environ.get(OTEL_EXPORTER_OTLP_HEADERS)
+ if isinstance(self._headers, str):
+ temp_headers = parse_env_headers(self._headers)
+ self._headers = tuple(temp_headers.items())
+ elif isinstance(self._headers, dict):
+ self._headers = tuple(self._headers.items())
+ if self._headers is None:
+ self._headers = tuple(_OTLP_GRPC_HEADERS)
+ else:
+ self._headers = tuple(self._headers) + tuple(_OTLP_GRPC_HEADERS)
+
+ self._timeout = timeout or int(
+ environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, 10)
+ )
+ self._collector_kwargs = None
+
+ compression = (
+ environ_to_compression(OTEL_EXPORTER_OTLP_COMPRESSION)
+ if compression is None
+ else compression
+ ) or Compression.NoCompression
+
+ if insecure:
+ self._client = self._stub(
+ insecure_channel(self._endpoint, compression=compression)
+ )
+ else:
+ credentials = _get_credentials(
+ credentials, OTEL_EXPORTER_OTLP_CERTIFICATE
+ )
+ self._client = self._stub(
+ secure_channel(
+ self._endpoint, credentials, compression=compression
+ )
+ )
+
+ self._export_lock = threading.Lock()
+ self._shutdown = False
+
+ @abstractmethod
+ def _translate_data(
+ self, data: TypingSequence[SDKDataT]
+ ) -> ExportServiceRequestT:
+ pass
+
+ def _export(
+ self, data: Union[TypingSequence[ReadableSpan], MetricsData]
+ ) -> ExportResultT:
+ # After the call to shutdown, subsequent calls to Export are
+ # not allowed and should return a Failure result.
+ if self._shutdown:
+ logger.warning("Exporter already shutdown, ignoring batch")
+ return self._result.FAILURE
+
+ # FIXME remove this check if the export type for traces
+ # gets updated to a class that represents the proto
+ # TracesData and use the code below instead.
+ # logger.warning(
+ # "Transient error %s encountered while exporting %s, retrying in %ss.",
+ # error.code(),
+ # data.__class__.__name__,
+ # delay,
+ # )
+        max_value = 64
+        # _create_exp_backoff_generator yields exponentially growing delays
+        # (1, 2, 4, ... seconds). Once the delay reaches max_value it stays
+        # constant, which the loop below treats as the signal to give up.
+ for delay in _create_exp_backoff_generator(max_value=max_value):
+ if delay == max_value or self._shutdown:
+ return self._result.FAILURE
+
+ with self._export_lock:
+ try:
+ self._client.Export(
+ request=self._translate_data(data),
+ metadata=self._headers,
+ timeout=self._timeout,
+ )
+
+ return self._result.SUCCESS
+
+ except RpcError as error:
+
+ if error.code() in [
+ StatusCode.CANCELLED,
+ StatusCode.DEADLINE_EXCEEDED,
+ StatusCode.RESOURCE_EXHAUSTED,
+ StatusCode.ABORTED,
+ StatusCode.OUT_OF_RANGE,
+ StatusCode.UNAVAILABLE,
+ StatusCode.DATA_LOSS,
+ ]:
+
+ retry_info_bin = dict(error.trailing_metadata()).get(
+ "google.rpc.retryinfo-bin"
+ )
+ if retry_info_bin is not None:
+ retry_info = RetryInfo()
+ retry_info.ParseFromString(retry_info_bin)
+ delay = (
+ retry_info.retry_delay.seconds
+ + retry_info.retry_delay.nanos / 1.0e9
+ )
+
+ logger.warning(
+ (
+ "Transient error %s encountered while exporting "
+ "%s to %s, retrying in %ss."
+ ),
+ error.code(),
+ self._exporting,
+ self._endpoint,
+ delay,
+ )
+ sleep(delay)
+ continue
+ else:
+ logger.error(
+ "Failed to export %s to %s, error code: %s",
+ self._exporting,
+ self._endpoint,
+ error.code(),
+ exc_info=error.code() == StatusCode.UNKNOWN,
+ )
+
+ if error.code() == StatusCode.OK:
+ return self._result.SUCCESS
+
+ return self._result.FAILURE
+
+ return self._result.FAILURE
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ if self._shutdown:
+ logger.warning("Exporter already shutdown, ignoring call")
+ return
+ # wait for the last export if any
+ self._export_lock.acquire(timeout=timeout_millis / 1e3)
+ self._shutdown = True
+ self._export_lock.release()
+
+ @property
+ @abstractmethod
+ def _exporting(self) -> str:
+ """
+ Returns a string that describes the overall exporter, to be used in
+ warning messages.
+ """
+ pass
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/metric_exporter/__init__.py
new file mode 100644
index 0000000000..2560c5c305
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/metric_exporter/__init__.py
@@ -0,0 +1,262 @@
+# Copyright The OpenTelemetry Authors
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import replace
+from logging import getLogger
+from os import environ
+from typing import Dict, Iterable, List, Optional, Tuple, Union
+from typing import Sequence as TypingSequence
+from grpc import ChannelCredentials, Compression
+
+from opentelemetry.exporter.otlp.proto.common.metrics_encoder import (
+ encode_metrics,
+)
+from opentelemetry.sdk.metrics._internal.aggregation import Aggregation
+from opentelemetry.exporter.otlp.proto.grpc.exporter import (
+ OTLPExporterMixin,
+ _get_credentials,
+ environ_to_compression,
+)
+from opentelemetry.exporter.otlp.proto.grpc.exporter import ( # noqa: F401
+ get_resource_data,
+)
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
+ ExportMetricsServiceRequest,
+)
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (
+ MetricsServiceStub,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ InstrumentationScope,
+)
+from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2 # noqa: F401
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS,
+ OTEL_EXPORTER_OTLP_METRICS_INSECURE,
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ DataPointT,
+ Metric,
+ MetricExporter,
+ MetricExportResult,
+ MetricsData,
+ ResourceMetrics,
+ ScopeMetrics,
+)
+from opentelemetry.sdk.metrics.export import ( # noqa: F401
+ Gauge,
+ Histogram as HistogramType,
+ Sum,
+ ExponentialHistogram as ExponentialHistogramType,
+)
+from opentelemetry.exporter.otlp.proto.common._internal.metrics_encoder import (
+ OTLPMetricExporterMixin,
+)
+
+_logger = getLogger(__name__)
+
+
+class OTLPMetricExporter(
+ MetricExporter,
+ OTLPExporterMixin[Metric, ExportMetricsServiceRequest, MetricExportResult],
+ OTLPMetricExporterMixin,
+):
+ """OTLP metric exporter
+
+ Args:
+ endpoint: Target URL to which the exporter is going to send metrics
+        max_export_batch_size: Maximum number of data points to export in a single request. This is to deal with
+            gRPC's 4MB message size limit. If not set, there is no limit to the number of data points in a request.
+            If it is set and the number of data points exceeds the max, the request will be split.
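+
+    A minimal wiring sketch (assumes a Collector on ``localhost:4317``; the
+    batch size is illustrative):
+
+    .. code:: python
+
+        from opentelemetry.sdk.metrics import MeterProvider
+        from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+
+        exporter = OTLPMetricExporter(insecure=True, max_export_batch_size=1000)
+        reader = PeriodicExportingMetricReader(exporter)
+        provider = MeterProvider(metric_readers=[reader])
+        provider.get_meter(__name__).create_counter("requests").add(1)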
+ """
+
+ _result = MetricExportResult
+ _stub = MetricsServiceStub
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ insecure: Optional[bool] = None,
+ credentials: Optional[ChannelCredentials] = None,
+ headers: Optional[
+ Union[TypingSequence[Tuple[str, str]], Dict[str, str], str]
+ ] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[type, Aggregation] = None,
+ max_export_batch_size: Optional[int] = None,
+ ):
+
+ if insecure is None:
+ insecure = environ.get(OTEL_EXPORTER_OTLP_METRICS_INSECURE)
+ if insecure is not None:
+ insecure = insecure.lower() == "true"
+
+ if (
+ not insecure
+ and environ.get(OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE) is not None
+ ):
+ credentials = _get_credentials(
+ credentials, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE
+ )
+
+ environ_timeout = environ.get(OTEL_EXPORTER_OTLP_METRICS_TIMEOUT)
+ environ_timeout = (
+ int(environ_timeout) if environ_timeout is not None else None
+ )
+
+ compression = (
+ environ_to_compression(OTEL_EXPORTER_OTLP_METRICS_COMPRESSION)
+ if compression is None
+ else compression
+ )
+
+ self._common_configuration(preferred_temporality)
+
+ OTLPExporterMixin.__init__(
+ self,
+ endpoint=endpoint
+ or environ.get(OTEL_EXPORTER_OTLP_METRICS_ENDPOINT),
+ insecure=insecure,
+ credentials=credentials,
+ headers=headers or environ.get(OTEL_EXPORTER_OTLP_METRICS_HEADERS),
+ timeout=timeout or environ_timeout,
+ compression=compression,
+ )
+
+ self._max_export_batch_size: Optional[int] = max_export_batch_size
+
+ def _translate_data(
+ self, data: MetricsData
+ ) -> ExportMetricsServiceRequest:
+ return encode_metrics(data)
+
+ def export(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ # TODO(#2663): OTLPExporterMixin should pass timeout to gRPC
+ if self._max_export_batch_size is None:
+ return self._export(data=metrics_data)
+
+ export_result = MetricExportResult.SUCCESS
+
+ for split_metrics_data in self._split_metrics_data(metrics_data):
+ split_export_result = self._export(data=split_metrics_data)
+
+ if split_export_result is MetricExportResult.FAILURE:
+ export_result = MetricExportResult.FAILURE
+
+ return export_result
+
+ def _split_metrics_data(
+ self,
+ metrics_data: MetricsData,
+ ) -> Iterable[MetricsData]:
+ batch_size: int = 0
+ split_resource_metrics: List[ResourceMetrics] = []
+
+ for resource_metrics in metrics_data.resource_metrics:
+ split_scope_metrics: List[ScopeMetrics] = []
+ split_resource_metrics.append(
+ replace(
+ resource_metrics,
+ scope_metrics=split_scope_metrics,
+ )
+ )
+ for scope_metrics in resource_metrics.scope_metrics:
+ split_metrics: List[Metric] = []
+ split_scope_metrics.append(
+ replace(
+ scope_metrics,
+ metrics=split_metrics,
+ )
+ )
+ for metric in scope_metrics.metrics:
+ split_data_points: List[DataPointT] = []
+ split_metrics.append(
+ replace(
+ metric,
+ data=replace(
+ metric.data,
+ data_points=split_data_points,
+ ),
+ )
+ )
+
+ for data_point in metric.data.data_points:
+ split_data_points.append(data_point)
+ batch_size += 1
+
+ if batch_size >= self._max_export_batch_size:
+ yield MetricsData(
+ resource_metrics=split_resource_metrics
+ )
+ # Reset all the variables
+ batch_size = 0
+ split_data_points = []
+ split_metrics = [
+ replace(
+ metric,
+ data=replace(
+ metric.data,
+ data_points=split_data_points,
+ ),
+ )
+ ]
+ split_scope_metrics = [
+ replace(
+ scope_metrics,
+ metrics=split_metrics,
+ )
+ ]
+ split_resource_metrics = [
+ replace(
+ resource_metrics,
+ scope_metrics=split_scope_metrics,
+ )
+ ]
+
+ if not split_data_points:
+ # If data_points is empty remove the whole metric
+ split_metrics.pop()
+
+ if not split_metrics:
+ # If metrics is empty remove the whole scope_metrics
+ split_scope_metrics.pop()
+
+ if not split_scope_metrics:
+ # If scope_metrics is empty remove the whole resource_metrics
+ split_resource_metrics.pop()
+
+ if batch_size > 0:
+ yield MetricsData(resource_metrics=split_resource_metrics)
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ OTLPExporterMixin.shutdown(self, timeout_millis=timeout_millis)
+
+ @property
+ def _exporting(self) -> str:
+ return "metrics"
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/py.typed b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py
new file mode 100644
index 0000000000..bd120ac787
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py
@@ -0,0 +1,152 @@
+# Copyright The OpenTelemetry Authors
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""OTLP Span Exporter"""
+
+import logging
+from os import environ
+from typing import Dict, Optional, Sequence, Tuple, Union
+from typing import Sequence as TypingSequence
+
+
+from grpc import ChannelCredentials, Compression
+
+from opentelemetry.exporter.otlp.proto.common.trace_encoder import (
+ encode_spans,
+)
+from opentelemetry.exporter.otlp.proto.grpc.exporter import (
+ OTLPExporterMixin,
+ _get_credentials,
+ environ_to_compression,
+)
+from opentelemetry.exporter.otlp.proto.grpc.exporter import ( # noqa: F401
+ get_resource_data,
+)
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
+ ExportTraceServiceRequest,
+)
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2_grpc import (
+ TraceServiceStub,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ InstrumentationScope,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ( # noqa: F401
+ ScopeSpans,
+ ResourceSpans,
+ Span as CollectorSpan,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import Status # noqa: F401
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS,
+ OTEL_EXPORTER_OTLP_TRACES_INSECURE,
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
+)
+from opentelemetry.sdk.trace import ReadableSpan
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+
+logger = logging.getLogger(__name__)
+
+
+# pylint: disable=no-member
+class OTLPSpanExporter(
+ SpanExporter,
+ OTLPExporterMixin[
+ ReadableSpan, ExportTraceServiceRequest, SpanExportResult
+ ],
+):
+ # pylint: disable=unsubscriptable-object
+ """OTLP span exporter
+
+ Args:
+ endpoint: OpenTelemetry Collector receiver endpoint
+ insecure: Connection type
+ credentials: Credentials object for server authentication
+ headers: Headers to send when exporting
+ timeout: Backend request timeout in seconds
+ compression: gRPC compression method to use
+ """
+
+ _result = SpanExportResult
+ _stub = TraceServiceStub
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ insecure: Optional[bool] = None,
+ credentials: Optional[ChannelCredentials] = None,
+ headers: Optional[
+ Union[TypingSequence[Tuple[str, str]], Dict[str, str], str]
+ ] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ ):
+
+ if insecure is None:
+ insecure = environ.get(OTEL_EXPORTER_OTLP_TRACES_INSECURE)
+ if insecure is not None:
+ insecure = insecure.lower() == "true"
+
+ if (
+ not insecure
+ and environ.get(OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE) is not None
+ ):
+ credentials = _get_credentials(
+ credentials, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE
+ )
+
+ environ_timeout = environ.get(OTEL_EXPORTER_OTLP_TRACES_TIMEOUT)
+ environ_timeout = (
+ int(environ_timeout) if environ_timeout is not None else None
+ )
+
+ compression = (
+ environ_to_compression(OTEL_EXPORTER_OTLP_TRACES_COMPRESSION)
+ if compression is None
+ else compression
+ )
+
+ super().__init__(
+ **{
+ "endpoint": endpoint
+ or environ.get(OTEL_EXPORTER_OTLP_TRACES_ENDPOINT),
+ "insecure": insecure,
+ "credentials": credentials,
+ "headers": headers
+ or environ.get(OTEL_EXPORTER_OTLP_TRACES_HEADERS),
+ "timeout": timeout or environ_timeout,
+ "compression": compression,
+ }
+ )
+
+ def _translate_data(
+ self, data: Sequence[ReadableSpan]
+ ) -> ExportTraceServiceRequest:
+ return encode_spans(data)
+
+ def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
+ return self._export(spans)
+
+ def shutdown(self) -> None:
+ OTLPExporterMixin.shutdown(self)
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
+
+ @property
+ def _exporting(self):
+ return "traces"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/version.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/src/opentelemetry/exporter/otlp/proto/grpc/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/fixtures/test.cert b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/fixtures/test.cert
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/logs/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/logs/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/logs/test_otlp_logs_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/logs/test_otlp_logs_exporter.py
new file mode 100644
index 0000000000..a6479a1474
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/logs/test_otlp_logs_exporter.py
@@ -0,0 +1,536 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+from concurrent.futures import ThreadPoolExecutor
+from os.path import dirname
+from unittest import TestCase
+from unittest.mock import patch
+
+from google.protobuf.duration_pb2 import Duration
+from google.rpc.error_details_pb2 import RetryInfo
+from grpc import ChannelCredentials, Compression, StatusCode, server
+
+from opentelemetry._logs import SeverityNumber
+from opentelemetry.exporter.otlp.proto.common._internal import _encode_value
+from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
+ OTLPLogExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.version import __version__
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2 import (
+ ExportLogsServiceRequest,
+ ExportLogsServiceResponse,
+)
+from opentelemetry.proto.collector.logs.v1.logs_service_pb2_grpc import (
+ LogsServiceServicer,
+ add_LogsServiceServicer_to_server,
+)
+from opentelemetry.proto.common.v1.common_pb2 import AnyValue
+from opentelemetry.proto.common.v1.common_pb2 import (
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.common.v1.common_pb2 import KeyValue
+from opentelemetry.proto.logs.v1.logs_pb2 import LogRecord as PB2LogRecord
+from opentelemetry.proto.logs.v1.logs_pb2 import ResourceLogs, ScopeLogs
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as OTLPResource,
+)
+from opentelemetry.sdk._logs import LogData, LogRecord
+from opentelemetry.sdk._logs.export import LogExportResult
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS,
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
+)
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.trace import TraceFlags
+
+THIS_DIR = dirname(__file__)
+
+
+class LogsServiceServicerUNAVAILABLEDelay(LogsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ context.send_initial_metadata(
+ (("google.rpc.retryinfo-bin", RetryInfo().SerializeToString()),)
+ )
+ context.set_trailing_metadata(
+ (
+ (
+ "google.rpc.retryinfo-bin",
+ RetryInfo(
+ retry_delay=Duration(seconds=4)
+ ).SerializeToString(),
+ ),
+ )
+ )
+
+ return ExportLogsServiceResponse()
+
+
+class LogsServiceServicerUNAVAILABLE(LogsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ return ExportLogsServiceResponse()
+
+
+class LogsServiceServicerSUCCESS(LogsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.OK)
+
+ return ExportLogsServiceResponse()
+
+
+class LogsServiceServicerALREADY_EXISTS(LogsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.ALREADY_EXISTS)
+
+ return ExportLogsServiceResponse()
+
+
+class TestOTLPLogExporter(TestCase):
+ def setUp(self):
+ self.exporter = OTLPLogExporter()
+
+ self.server = server(ThreadPoolExecutor(max_workers=10))
+
+ self.server.add_insecure_port("127.0.0.1:4317")
+
+ self.server.start()
+
+ self.log_data_1 = LogData(
+ log_record=LogRecord(
+ timestamp=int(time.time() * 1e9),
+ trace_id=2604504634922341076776623263868986797,
+ span_id=5213367945872657620,
+ trace_flags=TraceFlags(0x01),
+ severity_text="WARNING",
+ severity_number=SeverityNumber.WARN,
+ body="Zhengzhou, We have a heaviest rains in 1000 years",
+ resource=SDKResource({"key": "value"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+ self.log_data_2 = LogData(
+ log_record=LogRecord(
+ timestamp=int(time.time() * 1e9),
+ trace_id=2604504634922341076776623263868986799,
+ span_id=5213367945872657623,
+ trace_flags=TraceFlags(0x01),
+ severity_text="INFO",
+ severity_number=SeverityNumber.INFO2,
+ body="Sydney, Opera House is closed",
+ resource=SDKResource({"key": "value"}),
+ attributes={"custom_attr": [1, 2, 3]},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "second_name", "second_version"
+ ),
+ )
+ self.log_data_3 = LogData(
+ log_record=LogRecord(
+ timestamp=int(time.time() * 1e9),
+ trace_id=2604504634922341076776623263868986800,
+ span_id=5213367945872657628,
+ trace_flags=TraceFlags(0x01),
+ severity_text="ERROR",
+ severity_number=SeverityNumber.WARN,
+ body="Mumbai, Boil water before drinking",
+ resource=SDKResource({"service": "myapp"}),
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "third_name", "third_version"
+ ),
+ )
+
+ def tearDown(self):
+ self.server.stop(None)
+
+ def test_exporting(self):
+ # pylint: disable=protected-access
+ self.assertEqual(self.exporter._exporting, "logs")
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: "logs:4317",
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE: THIS_DIR
+ + "/../fixtures/test.cert",
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS: " key1=value1,KEY2 = VALUE=2",
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT: "10",
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION: "gzip",
+ },
+ )
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.OTLPExporterMixin.__init__"
+ )
+ def test_env_variables(self, mock_exporter_mixin):
+ OTLPLogExporter()
+
+ self.assertTrue(len(mock_exporter_mixin.call_args_list) == 1)
+ _, kwargs = mock_exporter_mixin.call_args_list[0]
+ self.assertEqual(kwargs["endpoint"], "logs:4317")
+ self.assertEqual(kwargs["headers"], " key1=value1,KEY2 = VALUE=2")
+ self.assertEqual(kwargs["timeout"], 10)
+ self.assertEqual(kwargs["compression"], Compression.Gzip)
+ self.assertIsNotNone(kwargs["credentials"])
+ self.assertIsInstance(kwargs["credentials"], ChannelCredentials)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc._log_exporter.OTLPLogExporter._stub"
+ )
+ # pylint: disable=unused-argument
+ def test_no_credentials_error(
+ self, mock_ssl_channel, mock_secure, mock_stub
+ ):
+ OTLPLogExporter(insecure=False)
+ self.assertTrue(mock_ssl_channel.called)
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ def test_otlp_exporter_endpoint(self, mock_secure, mock_insecure):
+ expected_endpoint = "localhost:4317"
+ endpoints = [
+ (
+ "http://localhost:4317",
+ None,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "http://localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "http://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ True,
+ mock_secure,
+ ),
+ ]
+
+ # pylint: disable=C0209
+ for endpoint, insecure, mock_method in endpoints:
+ OTLPLogExporter(endpoint=endpoint, insecure=insecure)
+ self.assertEqual(
+ 1,
+ mock_method.call_count,
+ "expected {} to be called for {} {}".format(
+ mock_method, endpoint, insecure
+ ),
+ )
+ self.assertEqual(
+ expected_endpoint,
+ mock_method.call_args[0][0],
+ "expected {} got {} {}".format(
+ expected_endpoint, mock_method.call_args[0][0], endpoint
+ ),
+ )
+ mock_method.reset_mock()
+
+ def test_otlp_headers_from_env(self):
+ # pylint: disable=protected-access
+ self.assertEqual(
+ self.exporter._headers,
+ (("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),),
+ )
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_LogsServiceServicer_to_server(
+ LogsServiceServicerUNAVAILABLE(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.log_data_1]), LogExportResult.FAILURE
+ )
+ mock_sleep.assert_called_with(1)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable_delay(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_LogsServiceServicer_to_server(
+ LogsServiceServicerUNAVAILABLEDelay(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.log_data_1]), LogExportResult.FAILURE
+ )
+ mock_sleep.assert_called_with(4)
+
+ def test_success(self):
+ add_LogsServiceServicer_to_server(
+ LogsServiceServicerSUCCESS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.log_data_1]), LogExportResult.SUCCESS
+ )
+
+ def test_failure(self):
+ add_LogsServiceServicer_to_server(
+ LogsServiceServicerALREADY_EXISTS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.log_data_1]), LogExportResult.FAILURE
+ )
+
+ def test_translate_log_data(self):
+
+ expected = ExportLogsServiceRequest(
+ resource_logs=[
+ ResourceLogs(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(
+ key="key", value=AnyValue(string_value="value")
+ ),
+ ]
+ ),
+ scope_logs=[
+ ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ log_records=[
+ PB2LogRecord(
+ # pylint: disable=no-member
+ time_unix_nano=self.log_data_1.log_record.timestamp,
+ severity_number=self.log_data_1.log_record.severity_number.value,
+ severity_text="WARNING",
+ span_id=int.to_bytes(
+ 5213367945872657620, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 2604504634922341076776623263868986797,
+ 16,
+ "big",
+ ),
+ body=_encode_value(
+ "Zhengzhou, We have a heaviest rains in 1000 years"
+ ),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(int_value=1),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(string_value="c"),
+ ),
+ ],
+ flags=int(
+ self.log_data_1.log_record.trace_flags
+ ),
+ )
+ ],
+ )
+ ],
+ ),
+ ]
+ )
+
+ # pylint: disable=protected-access
+ self.assertEqual(
+ expected, self.exporter._translate_data([self.log_data_1])
+ )
+
+ def test_translate_multiple_logs(self):
+ expected = ExportLogsServiceRequest(
+ resource_logs=[
+ ResourceLogs(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(
+ key="key", value=AnyValue(string_value="value")
+ ),
+ ]
+ ),
+ scope_logs=[
+ ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="first_name", version="first_version"
+ ),
+ log_records=[
+ PB2LogRecord(
+ # pylint: disable=no-member
+ time_unix_nano=self.log_data_1.log_record.timestamp,
+ severity_number=self.log_data_1.log_record.severity_number.value,
+ severity_text="WARNING",
+ span_id=int.to_bytes(
+ 5213367945872657620, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 2604504634922341076776623263868986797,
+ 16,
+ "big",
+ ),
+ body=_encode_value(
+ "Zhengzhou, We have a heaviest rains in 1000 years"
+ ),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(int_value=1),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(string_value="c"),
+ ),
+ ],
+ flags=int(
+ self.log_data_1.log_record.trace_flags
+ ),
+ )
+ ],
+ ),
+ ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="second_name", version="second_version"
+ ),
+ log_records=[
+ PB2LogRecord(
+ # pylint: disable=no-member
+ time_unix_nano=self.log_data_2.log_record.timestamp,
+ severity_number=self.log_data_2.log_record.severity_number.value,
+ severity_text="INFO",
+ span_id=int.to_bytes(
+ 5213367945872657623, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 2604504634922341076776623263868986799,
+ 16,
+ "big",
+ ),
+ body=_encode_value(
+ "Sydney, Opera House is closed"
+ ),
+ attributes=[
+ KeyValue(
+ key="custom_attr",
+ value=_encode_value([1, 2, 3]),
+ ),
+ ],
+ flags=int(
+ self.log_data_2.log_record.trace_flags
+ ),
+ )
+ ],
+ ),
+ ],
+ ),
+ ResourceLogs(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(
+ key="service",
+ value=AnyValue(string_value="myapp"),
+ ),
+ ]
+ ),
+ scope_logs=[
+ ScopeLogs(
+ scope=PB2InstrumentationScope(
+ name="third_name", version="third_version"
+ ),
+ log_records=[
+ PB2LogRecord(
+ # pylint: disable=no-member
+ time_unix_nano=self.log_data_3.log_record.timestamp,
+ severity_number=self.log_data_3.log_record.severity_number.value,
+ severity_text="ERROR",
+ span_id=int.to_bytes(
+ 5213367945872657628, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 2604504634922341076776623263868986800,
+ 16,
+ "big",
+ ),
+ body=_encode_value(
+ "Mumbai, Boil water before drinking"
+ ),
+ attributes=[],
+ flags=int(
+ self.log_data_3.log_record.trace_flags
+ ),
+ )
+ ],
+ )
+ ],
+ ),
+ ]
+ )
+
+ # pylint: disable=protected-access
+ self.assertEqual(
+ expected,
+ self.exporter._translate_data(
+ [self.log_data_1, self.log_data_2, self.log_data_3]
+ ),
+ )
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/performance/benchmarks/test_benchmark_trace_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/performance/benchmarks/test_benchmark_trace_exporter.py
new file mode 100644
index 0000000000..2b39a8feb3
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/performance/benchmarks/test_benchmark_trace_exporter.py
@@ -0,0 +1,87 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest.mock import patch
+
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider, sampling
+from opentelemetry.sdk.trace.export import (
+ BatchSpanProcessor,
+ SimpleSpanProcessor,
+)
+
+
+def get_tracer_with_processor(span_processor_class):
+ span_processor = span_processor_class(OTLPSpanExporter())
+ tracer = TracerProvider(
+ active_span_processor=span_processor,
+ sampler=sampling.DEFAULT_ON,
+ ).get_tracer("pipeline_benchmark_tracer")
+ return tracer
+
+
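+# Stands in for the generated TraceServiceStub: Export becomes a no-op, so
+# the benchmarks measure span creation and processor overhead rather than
+# network round trips.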
+class MockTraceServiceStub:
+ def __init__(self, channel):
+ self.Export = lambda *args, **kwargs: None
+
+
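+# "benchmark" below is the fixture provided by pytest-benchmark; it invokes
+# the callable it is handed many times and records timing statistics.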
+@patch(
+ "opentelemetry.exporter.otlp.proto.grpc.trace_exporter.OTLPSpanExporter._stub",
+ new=MockTraceServiceStub,
+)
+def test_simple_span_processor(benchmark):
+ tracer = get_tracer_with_processor(SimpleSpanProcessor)
+
+ def create_spans_to_be_exported():
+ span = tracer.start_span(
+ "benchmarkedSpan",
+ )
+ for i in range(10):
+ span.set_attribute(
+ f"benchmarkAttribute_{i}",
+ f"benchmarkAttrValue_{i}",
+ )
+ span.end()
+
+ benchmark(create_spans_to_be_exported)
+
+
+@patch(
+ "opentelemetry.exporter.otlp.proto.grpc.trace_exporter.OTLPSpanExporter._stub",
+ new=MockTraceServiceStub,
+)
+def test_batch_span_processor(benchmark):
+ """Runs benchmark tests using BatchSpanProcessor.
+
+    One call measured by pytest-benchmark will be much more expensive than
+    the rest, because the batch-export thread wakes up and consumes a lot of
+    CPU to process all of the accumulated spans. For this reason, focus on
+    the average measurement and ignore the misleading min/max measurements.
+ """
+ tracer = get_tracer_with_processor(BatchSpanProcessor)
+
+ def create_spans_to_be_exported():
+ span = tracer.start_span(
+ "benchmarkedSpan",
+ )
+ for i in range(10):
+ span.set_attribute(
+ f"benchmarkAttribute_{i}",
+ f"benchmarkAttrValue_{i}",
+ )
+ span.end()
+
+ benchmark(create_spans_to_be_exported)
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_exporter_mixin.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_exporter_mixin.py
new file mode 100644
index 0000000000..4dfed3e154
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_exporter_mixin.py
@@ -0,0 +1,206 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import threading
+import time
+from logging import WARNING
+from types import MethodType
+from typing import Sequence
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from google.protobuf.duration_pb2 import Duration
+from google.rpc.error_details_pb2 import RetryInfo
+from grpc import Compression
+
+from opentelemetry.exporter.otlp.proto.grpc.exporter import (
+ ExportServiceRequestT,
+ InvalidCompressionValueException,
+ OTLPExporterMixin,
+ RpcError,
+ SDKDataT,
+ StatusCode,
+ environ_to_compression,
+)
+
+
+class TestOTLPExporterMixin(TestCase):
+ def test_environ_to_compression(self):
+ with patch.dict(
+ "os.environ",
+ {
+ "test_gzip": "gzip",
+ "test_gzip_caseinsensitive_with_whitespace": " GzIp ",
+ "test_invalid": "some invalid compression",
+ },
+ ):
+ self.assertEqual(
+ environ_to_compression("test_gzip"), Compression.Gzip
+ )
+ self.assertEqual(
+ environ_to_compression(
+ "test_gzip_caseinsensitive_with_whitespace"
+ ),
+ Compression.Gzip,
+ )
+ self.assertIsNone(
+ environ_to_compression("missing_key"),
+ )
+ with self.assertRaises(InvalidCompressionValueException):
+ environ_to_compression("test_invalid")
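+        # For reference, a typical call-site sketch ("endpoint" is a
+        # placeholder):
+        #
+        #     compression = environ_to_compression(
+        #         "OTEL_EXPORTER_OTLP_COMPRESSION"
+        #     )
+        #     channel = insecure_channel(endpoint, compression=compression)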
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ def test_export_warning(self, mock_expo):
+ mock_expo.configure_mock(**{"return_value": [0]})
+
+ rpc_error = RpcError()
+
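+        # grpc.RpcError itself carries no status, so a code() method is
+        # bound directly onto this instance via MethodType to simulate one.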
+ def code(self):
+ return None
+
+ rpc_error.code = MethodType(code, rpc_error)
+
+ class OTLPMockExporter(OTLPExporterMixin):
+ _result = Mock()
+ _stub = Mock(
+ **{"return_value": Mock(**{"Export.side_effect": rpc_error})}
+ )
+
+ def _translate_data(
+ self, data: Sequence[SDKDataT]
+ ) -> ExportServiceRequestT:
+ pass
+
+ @property
+ def _exporting(self) -> str:
+ return "mock"
+
+ otlp_mock_exporter = OTLPMockExporter()
+
+ with self.assertLogs(level=WARNING) as warning:
+ # pylint: disable=protected-access
+ otlp_mock_exporter._export(Mock())
+ self.assertEqual(
+ warning.records[0].message,
+ "Failed to export mock to localhost:4317, error code: None",
+ )
+
+ def code(self): # pylint: disable=function-redefined
+ return StatusCode.CANCELLED
+
+ def trailing_metadata(self):
+ return {}
+
+ rpc_error.code = MethodType(code, rpc_error)
+ rpc_error.trailing_metadata = MethodType(trailing_metadata, rpc_error)
+
+ with self.assertLogs(level=WARNING) as warning:
+ # pylint: disable=protected-access
+ otlp_mock_exporter._export([])
+ self.assertEqual(
+ warning.records[0].message,
+ (
+ "Transient error StatusCode.CANCELLED encountered "
+ "while exporting mock to localhost:4317, retrying in 0s."
+ ),
+ )
+
+ def test_shutdown(self):
+ result_mock = Mock()
+
+ class OTLPMockExporter(OTLPExporterMixin):
+ _result = result_mock
+ _stub = Mock(**{"return_value": Mock()})
+
+ def _translate_data(
+ self, data: Sequence[SDKDataT]
+ ) -> ExportServiceRequestT:
+ pass
+
+ @property
+ def _exporting(self) -> str:
+ return "mock"
+
+ otlp_mock_exporter = OTLPMockExporter()
+
+ with self.assertLogs(level=WARNING) as warning:
+ # pylint: disable=protected-access
+ self.assertEqual(
+ otlp_mock_exporter._export(data={}), result_mock.SUCCESS
+ )
+ otlp_mock_exporter.shutdown()
+ # pylint: disable=protected-access
+ self.assertEqual(
+ otlp_mock_exporter._export(data={}), result_mock.FAILURE
+ )
+ self.assertEqual(
+ warning.records[0].message,
+ "Exporter already shutdown, ignoring batch",
+ )
+
+ def test_shutdown_wait_last_export(self):
+ result_mock = Mock()
+ rpc_error = RpcError()
+
+ def code(self):
+ return StatusCode.UNAVAILABLE
+
+ def trailing_metadata(self):
+ return {
+ "google.rpc.retryinfo-bin": RetryInfo(
+ retry_delay=Duration(seconds=1)
+ ).SerializeToString()
+ }
+
+ rpc_error.code = MethodType(code, rpc_error)
+ rpc_error.trailing_metadata = MethodType(trailing_metadata, rpc_error)
+
+ class OTLPMockExporter(OTLPExporterMixin):
+ _result = result_mock
+ _stub = Mock(
+ **{"return_value": Mock(**{"Export.side_effect": rpc_error})}
+ )
+
+ def _translate_data(
+ self, data: Sequence[SDKDataT]
+ ) -> ExportServiceRequestT:
+ pass
+
+ @property
+ def _exporting(self) -> str:
+ return "mock"
+
+ otlp_mock_exporter = OTLPMockExporter()
+
+ # pylint: disable=protected-access
+ export_thread = threading.Thread(
+ target=otlp_mock_exporter._export, args=({},)
+ )
+ export_thread.start()
+ try:
+ # pylint: disable=protected-access
+ self.assertTrue(otlp_mock_exporter._export_lock.locked())
+ # delay is 1 second while the default shutdown timeout is 30_000 milliseconds
+ start_time = time.time()
+ otlp_mock_exporter.shutdown()
+ now = time.time()
+ self.assertGreaterEqual(now, (start_time + 30 / 1000))
+ # pylint: disable=protected-access
+ self.assertTrue(otlp_mock_exporter._shutdown)
+ # pylint: disable=protected-access
+ self.assertFalse(otlp_mock_exporter._export_lock.locked())
+ finally:
+ export_thread.join()
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_metrics_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_metrics_exporter.py
new file mode 100644
index 0000000000..291e9457ef
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_metrics_exporter.py
@@ -0,0 +1,1008 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import threading
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=too-many-lines
+from logging import WARNING
+from os import environ
+from os.path import dirname
+from typing import List
+from unittest import TestCase
+from unittest.mock import patch
+
+from google.protobuf.duration_pb2 import Duration
+from google.rpc.error_details_pb2 import RetryInfo
+from grpc import ChannelCredentials, Compression, StatusCode, server
+
+from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
+ OTLPMetricExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.version import __version__
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
+ ExportMetricsServiceResponse,
+)
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2_grpc import (
+ MetricsServiceServicer,
+ add_MetricsServiceServicer_to_server,
+)
+from opentelemetry.proto.common.v1.common_pb2 import InstrumentationScope
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS,
+ OTEL_EXPORTER_OTLP_METRICS_INSECURE,
+ OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE,
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
+)
+from opentelemetry.sdk.metrics import (
+ Counter,
+ Histogram,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Gauge,
+ Metric,
+ MetricExportResult,
+ MetricsData,
+ NumberDataPoint,
+ ResourceMetrics,
+ ScopeMetrics,
+)
+from opentelemetry.sdk.metrics.view import (
+ ExplicitBucketHistogramAggregation,
+ ExponentialBucketHistogramAggregation,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import (
+ InstrumentationScope as SDKInstrumentationScope,
+)
+from opentelemetry.test.metrictestutil import _generate_sum
+
+THIS_DIR = dirname(__file__)
+
+
+class MetricsServiceServicerUNAVAILABLEDelay(MetricsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ context.send_initial_metadata(
+ (("google.rpc.retryinfo-bin", RetryInfo().SerializeToString()),)
+ )
+ context.set_trailing_metadata(
+ (
+ (
+ "google.rpc.retryinfo-bin",
+ RetryInfo(
+ retry_delay=Duration(seconds=4)
+ ).SerializeToString(),
+ ),
+ )
+ )
+
+ return ExportMetricsServiceResponse()
+
+
+class MetricsServiceServicerUNAVAILABLE(MetricsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ return ExportMetricsServiceResponse()
+
+
+class MetricsServiceServicerUNKNOWN(MetricsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNKNOWN)
+
+ return ExportMetricsServiceResponse()
+
+
+class MetricsServiceServicerSUCCESS(MetricsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.OK)
+
+ return ExportMetricsServiceResponse()
+
+
+class MetricsServiceServicerALREADY_EXISTS(MetricsServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.ALREADY_EXISTS)
+
+ return ExportMetricsServiceResponse()
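+
+
+# Each servicer above pins a single gRPC status code so the tests can
+# exercise the exporter's error handling deterministically: UNAVAILABLE is
+# retryable (optionally with a server-supplied RetryInfo delay), while
+# ALREADY_EXISTS stands in for a non-retryable failure.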
+
+
+class TestOTLPMetricExporter(TestCase):
+ # pylint: disable=too-many-public-methods
+
+ def setUp(self):
+
+ self.exporter = OTLPMetricExporter()
+
+ self.server = server(ThreadPoolExecutor(max_workers=10))
+
+ self.server.add_insecure_port("127.0.0.1:4317")
+
+ self.server.start()
+
+ self.metrics = {
+ "sum_int": MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+ schema_url="insrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_sum("sum_int", 33)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ )
+ }
+
+ def tearDown(self):
+ self.server.stop(None)
+
+ def test_exporting(self):
+ # pylint: disable=protected-access
+ self.assertEqual(self.exporter._exporting, "metrics")
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "DELTA"},
+ )
+ def test_preferred_temporality(self):
+ # pylint: disable=protected-access
+ exporter = OTLPMetricExporter(
+ preferred_temporality={Counter: AggregationTemporality.CUMULATIVE}
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[Counter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[ObservableCounter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[ObservableUpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ exporter._preferred_temporality[ObservableGauge],
+ AggregationTemporality.CUMULATIVE,
+ )
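+        # That is, the DELTA preference flips only the monotonic instruments
+        # (Counter, ObservableCounter, Histogram) to DELTA, while an explicit
+        # preferred_temporality kwarg overrides it per instrument, as it does
+        # for Counter above.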
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: "collector:4317",
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE: THIS_DIR
+ + "/fixtures/test.cert",
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS: " key1=value1,KEY2 = value=2",
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT: "10",
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION: "gzip",
+ },
+ )
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.OTLPExporterMixin.__init__"
+ )
+ def test_env_variables(self, mock_exporter_mixin):
+ OTLPMetricExporter()
+
+ self.assertTrue(len(mock_exporter_mixin.call_args_list) == 1)
+ _, kwargs = mock_exporter_mixin.call_args_list[0]
+
+ self.assertEqual(kwargs["endpoint"], "collector:4317")
+ self.assertEqual(kwargs["headers"], " key1=value1,KEY2 = value=2")
+ self.assertEqual(kwargs["timeout"], 10)
+ self.assertEqual(kwargs["compression"], Compression.Gzip)
+ self.assertIsNotNone(kwargs["credentials"])
+ self.assertIsInstance(kwargs["credentials"], ChannelCredentials)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.metric_exporter.OTLPMetricExporter._stub"
+ )
+ # pylint: disable=unused-argument
+    def test_no_credentials_error(
+        self, mock_stub, mock_secure, mock_ssl_channel
+    ):
+ OTLPMetricExporter(insecure=False)
+ self.assertTrue(mock_ssl_channel.called)
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_METRICS_HEADERS: " key1=value1,KEY2 = VALUE=2 "},
+ )
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ # pylint: disable=unused-argument
+ def test_otlp_headers_from_env(self, mock_ssl_channel, mock_secure):
+ exporter = OTLPMetricExporter()
+ # pylint: disable=protected-access
+ self.assertEqual(
+ exporter._headers,
+ (
+ ("key1", "value1"),
+ ("key2", "VALUE=2"),
+ ("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),
+ ),
+ )
+ exporter = OTLPMetricExporter(
+ headers=(("key3", "value3"), ("key4", "value4"))
+ )
+ # pylint: disable=protected-access
+ self.assertEqual(
+ exporter._headers,
+ (
+ ("key3", "value3"),
+ ("key4", "value4"),
+ ("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),
+ ),
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_METRICS_INSECURE: "True"},
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ # pylint: disable=unused-argument
+ def test_otlp_insecure_from_env(self, mock_insecure):
+ OTLPMetricExporter()
+ # pylint: disable=protected-access
+ self.assertTrue(mock_insecure.called)
+ self.assertEqual(
+ 1,
+ mock_insecure.call_count,
+ f"expected {mock_insecure} to be called",
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ def test_otlp_exporter_endpoint(self, mock_secure, mock_insecure):
+ expected_endpoint = "localhost:4317"
+ endpoints = [
+ (
+ "http://localhost:4317",
+ None,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "http://localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "http://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ True,
+ mock_secure,
+ ),
+ ]
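+        # An explicit insecure kwarg wins over an http:// scheme, http://
+        # implies an insecure channel when insecure is unset, and https://
+        # always selects a secure channel regardless of the insecure flag.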
+        for endpoint, insecure, mock_method in endpoints:
+            OTLPMetricExporter(endpoint=endpoint, insecure=insecure)
+            self.assertEqual(
+                1,
+                mock_method.call_count,
+                f"expected {mock_method} to be called for {endpoint} {insecure}",
+            )
+            self.assertEqual(
+                expected_endpoint,
+                mock_method.call_args[0][0],
+                f"expected {expected_endpoint} got {mock_method.call_args[0][0]} {endpoint}",
+            )
+            mock_method.reset_mock()
+
+ # pylint: disable=no-self-use
+    @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+    @patch.dict("os.environ", {OTEL_EXPORTER_OTLP_COMPRESSION: "gzip"})
+    def test_otlp_exporter_otlp_compression_envvar(
+        self, mock_insecure_channel
+    ):
+ """Just OTEL_EXPORTER_OTLP_COMPRESSION should work"""
+ OTLPMetricExporter(insecure=True)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.Gzip
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict("os.environ", {OTEL_EXPORTER_OTLP_COMPRESSION: "gzip"})
+ def test_otlp_exporter_otlp_compression_kwarg(self, mock_insecure_channel):
+ """Specifying kwarg should take precedence over env"""
+ OTLPMetricExporter(
+ insecure=True, compression=Compression.NoCompression
+ )
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.NoCompression
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict("os.environ", {})
+ def test_otlp_exporter_otlp_compression_unspecified(
+ self, mock_insecure_channel
+ ):
+ """No env or kwarg should be NoCompression"""
+ OTLPMetricExporter(insecure=True)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.NoCompression
+ )
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerUNAVAILABLE(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+ mock_sleep.assert_called_with(1)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable_delay(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerUNAVAILABLEDelay(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+ mock_sleep.assert_called_with(4)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.logger.error")
+    def test_unknown(self, mock_logger_error, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerUNKNOWN(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+ mock_sleep.assert_not_called()
+ mock_logger_error.assert_called_with(
+ "Failed to export %s to %s, error code: %s",
+ "metrics",
+ "localhost:4317",
+ StatusCode.UNKNOWN,
+ exc_info=True,
+ )
+
+ def test_success(self):
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerSUCCESS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.SUCCESS,
+ )
+
+ def test_failure(self):
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerALREADY_EXISTS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+
+ def test_split_metrics_data_many_data_points(self):
+ # GIVEN
+ metrics_data = MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ _number_data_point(12),
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ )
+ # WHEN
+ split_metrics_data: List[MetricsData] = list(
+ # pylint: disable=protected-access
+ OTLPMetricExporter(max_export_batch_size=2)._split_metrics_data(
+ metrics_data=metrics_data,
+ )
+ )
+ # THEN
+ self.assertEqual(
+ [
+ MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ _number_data_point(12),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ ),
+ MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ ),
+ ],
+ split_metrics_data,
+ )
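+        # _split_metrics_data walks resource -> scope -> metric -> data
+        # points and starts a new MetricsData batch whenever the running
+        # data-point count reaches max_export_batch_size, re-emitting the
+        # enclosing resource/scope/metric shells in the next batch.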
+
+ def test_split_metrics_data_nb_data_points_equal_batch_size(self):
+ # GIVEN
+ metrics_data = MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ _number_data_point(12),
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ )
+ # WHEN
+ split_metrics_data: List[MetricsData] = list(
+ # pylint: disable=protected-access
+ OTLPMetricExporter(max_export_batch_size=3)._split_metrics_data(
+ metrics_data=metrics_data,
+ )
+ )
+ # THEN
+ self.assertEqual(
+ [
+ MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ _number_data_point(12),
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ ),
+ ],
+ split_metrics_data,
+ )
+
+ def test_split_metrics_data_many_resources_scopes_metrics(self):
+ # GIVEN
+ metrics_data = MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ ],
+ ),
+ _gauge(
+ index=2,
+ data_points=[
+ _number_data_point(12),
+ ],
+ ),
+ ],
+ ),
+ _scope_metrics(
+ index=2,
+ metrics=[
+ _gauge(
+ index=3,
+ data_points=[
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ _resource_metrics(
+ index=2,
+ scope_metrics=[
+ _scope_metrics(
+ index=3,
+ metrics=[
+ _gauge(
+ index=4,
+ data_points=[
+ _number_data_point(14),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ )
+ # WHEN
+ split_metrics_data: List[MetricsData] = list(
+ # pylint: disable=protected-access
+ OTLPMetricExporter(max_export_batch_size=2)._split_metrics_data(
+ metrics_data=metrics_data,
+ )
+ )
+ # THEN
+ self.assertEqual(
+ [
+ MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=1,
+ metrics=[
+ _gauge(
+ index=1,
+ data_points=[
+ _number_data_point(11),
+ ],
+ ),
+ _gauge(
+ index=2,
+ data_points=[
+ _number_data_point(12),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ ),
+ MetricsData(
+ resource_metrics=[
+ _resource_metrics(
+ index=1,
+ scope_metrics=[
+ _scope_metrics(
+ index=2,
+ metrics=[
+ _gauge(
+ index=3,
+ data_points=[
+ _number_data_point(13),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ _resource_metrics(
+ index=2,
+ scope_metrics=[
+ _scope_metrics(
+ index=3,
+ metrics=[
+ _gauge(
+ index=4,
+ data_points=[
+ _number_data_point(14),
+ ],
+ ),
+ ],
+ ),
+ ],
+ ),
+ ]
+ ),
+ ],
+ split_metrics_data,
+ )
+
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ def test_insecure_https_endpoint(self, mock_secure_channel):
+ OTLPMetricExporter(endpoint="https://ab.c:123", insecure=True)
+ mock_secure_channel.assert_called()
+
+ def test_shutdown(self):
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerSUCCESS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.SUCCESS,
+ )
+ self.exporter.shutdown()
+ with self.assertLogs(level=WARNING) as warning:
+ self.assertEqual(
+ self.exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+ self.assertEqual(
+ warning.records[0].message,
+ "Exporter already shutdown, ignoring batch",
+ )
+ self.exporter = OTLPMetricExporter()
+
+ def test_shutdown_wait_last_export(self):
+ add_MetricsServiceServicer_to_server(
+ MetricsServiceServicerUNAVAILABLEDelay(), self.server
+ )
+
+ export_thread = threading.Thread(
+ target=self.exporter.export, args=(self.metrics["sum_int"],)
+ )
+ export_thread.start()
+ try:
+ # pylint: disable=protected-access
+ self.assertTrue(self.exporter._export_lock.locked())
+ # delay is 4 seconds while the default shutdown timeout is 30_000 milliseconds
+ start_time = time.time()
+ self.exporter.shutdown()
+ now = time.time()
+ self.assertGreaterEqual(now, (start_time + 30 / 1000))
+ # pylint: disable=protected-access
+ self.assertTrue(self.exporter._shutdown)
+ # pylint: disable=protected-access
+ self.assertFalse(self.exporter._export_lock.locked())
+ finally:
+ export_thread.join()
+
+ def test_aggregation_temporality(self):
+ # pylint: disable=protected-access
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(temporality, AggregationTemporality.CUMULATIVE)
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "CUMULATIVE"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(
+ temporality, AggregationTemporality.CUMULATIVE
+ )
+
+ with patch.dict(
+ environ, {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "ABC"}
+ ):
+
+ with self.assertLogs(level=WARNING):
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(
+ temporality, AggregationTemporality.CUMULATIVE
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "DELTA"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Counter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableCounter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[
+ ObservableUpDownCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableGauge],
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "LOWMEMORY"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Counter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[
+ ObservableUpDownCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableGauge],
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ def test_exponential_explicit_bucket_histogram(self):
+
+ self.assertIsInstance(
+ # pylint: disable=protected-access
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+
+ with patch.dict(
+ environ,
+ {
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "base2_exponential_bucket_histogram"
+ },
+ ):
+ self.assertIsInstance(
+ # pylint: disable=protected-access
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExponentialBucketHistogramAggregation,
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "abc"},
+ ):
+ with self.assertLogs(level=WARNING) as log:
+ self.assertIsInstance(
+ # pylint: disable=protected-access
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+ self.assertIn(
+ (
+ "Invalid value for OTEL_EXPORTER_OTLP_METRICS_DEFAULT_"
+ "HISTOGRAM_AGGREGATION: abc, using explicit bucket "
+ "histogram aggregation"
+ ),
+ log.output[0],
+ )
+
+ with patch.dict(
+ environ,
+ {
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "explicit_bucket_histogram"
+ },
+ ):
+ self.assertIsInstance(
+ # pylint: disable=protected-access
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+
+
+def _resource_metrics(
+ index: int, scope_metrics: List[ScopeMetrics]
+) -> ResourceMetrics:
+ return ResourceMetrics(
+ resource=Resource(
+ attributes={"a": index},
+ schema_url=f"resource_url_{index}",
+ ),
+ schema_url=f"resource_url_{index}",
+ scope_metrics=scope_metrics,
+ )
+
+
+def _scope_metrics(index: int, metrics: List[Metric]) -> ScopeMetrics:
+ return ScopeMetrics(
+ scope=InstrumentationScope(name=f"scope_{index}"),
+ schema_url=f"scope_url_{index}",
+ metrics=metrics,
+ )
+
+
+def _gauge(index: int, data_points: List[NumberDataPoint]) -> Metric:
+ return Metric(
+ name=f"gauge_{index}",
+ description="description",
+ unit="unit",
+ data=Gauge(data_points=data_points),
+ )
+
+
+def _number_data_point(value: int) -> NumberDataPoint:
+ return NumberDataPoint(
+ attributes={"a": 1, "b": True},
+ start_time_unix_nano=1641946015139533244,
+ time_unix_nano=1641946016139533244,
+ value=value,
+ )
diff --git a/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_trace_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_trace_exporter.py
new file mode 100644
index 0000000000..5445ddf926
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-grpc/tests/test_otlp_trace_exporter.py
@@ -0,0 +1,996 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import threading
+import time
+from collections import OrderedDict
+from concurrent.futures import ThreadPoolExecutor
+from logging import WARNING
+from unittest import TestCase
+from unittest.mock import Mock, PropertyMock, patch
+
+from google.protobuf.duration_pb2 import Duration
+from google.rpc.error_details_pb2 import RetryInfo
+from grpc import ChannelCredentials, Compression, StatusCode, server
+
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _encode_key_value,
+ _is_backoff_v2,
+)
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.version import __version__
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
+ ExportTraceServiceRequest,
+ ExportTraceServiceResponse,
+)
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2_grpc import (
+ TraceServiceServicer,
+ add_TraceServiceServicer_to_server,
+)
+from opentelemetry.proto.common.v1.common_pb2 import AnyValue, ArrayValue
+from opentelemetry.proto.common.v1.common_pb2 import (
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.common.v1.common_pb2 import KeyValue
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as OTLPResource,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ResourceSpans, ScopeSpans
+from opentelemetry.proto.trace.v1.trace_pb2 import Span as OTLPSpan
+from opentelemetry.proto.trace.v1.trace_pb2 import Status
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS,
+ OTEL_EXPORTER_OTLP_TRACES_INSECURE,
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
+)
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.trace import Status as SDKStatus
+from opentelemetry.sdk.trace import StatusCode as SDKStatusCode
+from opentelemetry.sdk.trace import TracerProvider, _Span
+from opentelemetry.sdk.trace.export import (
+ SimpleSpanProcessor,
+ SpanExportResult,
+)
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.test.spantestutil import (
+ get_span_with_dropped_attributes_events_links,
+)
+
+THIS_DIR = os.path.dirname(__file__)
+
+
+class TraceServiceServicerUNAVAILABLEDelay(TraceServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ context.send_initial_metadata(
+ (("google.rpc.retryinfo-bin", RetryInfo().SerializeToString()),)
+ )
+ context.set_trailing_metadata(
+ (
+ (
+ "google.rpc.retryinfo-bin",
+ RetryInfo(
+ retry_delay=Duration(seconds=4)
+ ).SerializeToString(),
+ ),
+ )
+ )
+
+ return ExportTraceServiceResponse()
+
+
+class TraceServiceServicerUNAVAILABLE(TraceServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.UNAVAILABLE)
+
+ return ExportTraceServiceResponse()
+
+
+class TraceServiceServicerSUCCESS(TraceServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.OK)
+
+ return ExportTraceServiceResponse()
+
+
+class TraceServiceServicerALREADY_EXISTS(TraceServiceServicer):
+ # pylint: disable=invalid-name,unused-argument,no-self-use
+ def Export(self, request, context):
+ context.set_code(StatusCode.ALREADY_EXISTS)
+
+ return ExportTraceServiceResponse()
+
+
+class TestOTLPSpanExporter(TestCase):
+ # pylint: disable=too-many-public-methods
+
+ def setUp(self):
+ tracer_provider = TracerProvider()
+ self.exporter = OTLPSpanExporter(insecure=True)
+ tracer_provider.add_span_processor(SimpleSpanProcessor(self.exporter))
+ self.tracer = tracer_provider.get_tracer(__name__)
+
+ self.server = server(ThreadPoolExecutor(max_workers=10))
+
+ self.server.add_insecure_port("127.0.0.1:4317")
+
+ self.server.start()
+
+ event_mock = Mock(
+ **{
+ "timestamp": 1591240820506462784,
+ "attributes": BoundedAttributes(
+ attributes={"a": 1, "b": False}
+ ),
+ }
+ )
+
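+        # "name" is special to the Mock constructor (it names the mock
+        # itself), so the event's name is attached afterwards as a
+        # PropertyMock on the type.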
+ type(event_mock).name = PropertyMock(return_value="a")
+
+ self.span = _Span(
+ "a",
+ context=Mock(
+ **{
+ "trace_state": OrderedDict([("a", "b"), ("c", "d")]),
+ "span_id": 10217189687419569865,
+ "trace_id": 67545097771067222548457157018666467027,
+ }
+ ),
+ resource=SDKResource(OrderedDict([("a", 1), ("b", False)])),
+ parent=Mock(**{"span_id": 12345}),
+ attributes=BoundedAttributes(attributes={"a": 1, "b": True}),
+ events=[event_mock],
+ links=[
+ Mock(
+ **{
+ "context.trace_id": 1,
+ "context.span_id": 2,
+ "attributes": BoundedAttributes(
+ attributes={"a": 1, "b": False}
+ ),
+ "kind": OTLPSpan.SpanKind.SPAN_KIND_INTERNAL, # pylint: disable=no-member
+ }
+ )
+ ],
+ instrumentation_scope=InstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+
+ self.span2 = _Span(
+ "b",
+ context=Mock(
+ **{
+ "trace_state": OrderedDict([("a", "b"), ("c", "d")]),
+ "span_id": 10217189687419569865,
+ "trace_id": 67545097771067222548457157018666467027,
+ }
+ ),
+ resource=SDKResource(OrderedDict([("a", 2), ("b", False)])),
+ parent=Mock(**{"span_id": 12345}),
+ instrumentation_scope=InstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+
+ self.span3 = _Span(
+ "c",
+ context=Mock(
+ **{
+ "trace_state": OrderedDict([("a", "b"), ("c", "d")]),
+ "span_id": 10217189687419569865,
+ "trace_id": 67545097771067222548457157018666467027,
+ }
+ ),
+ resource=SDKResource(OrderedDict([("a", 1), ("b", False)])),
+ parent=Mock(**{"span_id": 12345}),
+ instrumentation_scope=InstrumentationScope(
+ name="name2", version="version2"
+ ),
+ )
+
+ self.span.start()
+ self.span.end()
+ self.span2.start()
+ self.span2.end()
+ self.span3.start()
+ self.span3.end()
+
+ def tearDown(self):
+ self.server.stop(None)
+
+ def test_exporting(self):
+ # pylint: disable=protected-access
+ self.assertEqual(self.exporter._exporting, "traces")
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: "collector:4317",
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE: THIS_DIR
+ + "/fixtures/test.cert",
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS: " key1=value1,KEY2 = value=2",
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT: "10",
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION: "gzip",
+ },
+ )
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.OTLPExporterMixin.__init__"
+ )
+ def test_env_variables(self, mock_exporter_mixin):
+ OTLPSpanExporter()
+
+ self.assertTrue(len(mock_exporter_mixin.call_args_list) == 1)
+ _, kwargs = mock_exporter_mixin.call_args_list[0]
+
+ self.assertEqual(kwargs["endpoint"], "collector:4317")
+ self.assertEqual(kwargs["headers"], " key1=value1,KEY2 = value=2")
+ self.assertEqual(kwargs["timeout"], 10)
+ self.assertEqual(kwargs["compression"], Compression.Gzip)
+ self.assertIsNotNone(kwargs["credentials"])
+ self.assertIsInstance(kwargs["credentials"], ChannelCredentials)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.trace_exporter.OTLPSpanExporter._stub"
+ )
+ # pylint: disable=unused-argument
+    def test_no_credentials_error(
+        self, mock_stub, mock_secure, mock_ssl_channel
+    ):
+ OTLPSpanExporter(insecure=False)
+ self.assertTrue(mock_ssl_channel.called)
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_TRACES_HEADERS: " key1=value1,KEY2 = VALUE=2 "},
+ )
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ # pylint: disable=unused-argument
+ def test_otlp_headers_from_env(self, mock_ssl_channel, mock_secure):
+ exporter = OTLPSpanExporter()
+ # pylint: disable=protected-access
+ self.assertEqual(
+ exporter._headers,
+ (
+ ("key1", "value1"),
+ ("key2", "VALUE=2"),
+ ("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),
+ ),
+ )
+ exporter = OTLPSpanExporter(
+ headers=(("key3", "value3"), ("key4", "value4"))
+ )
+ # pylint: disable=protected-access
+ self.assertEqual(
+ exporter._headers,
+ (
+ ("key3", "value3"),
+ ("key4", "value4"),
+ ("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),
+ ),
+ )
+ exporter = OTLPSpanExporter(
+ headers={"key5": "value5", "key6": "value6"}
+ )
+ # pylint: disable=protected-access
+ self.assertEqual(
+ exporter._headers,
+ (
+ ("key5", "value5"),
+ ("key6", "value6"),
+ ("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),
+ ),
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_TRACES_INSECURE: "True"},
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ # pylint: disable=unused-argument
+ def test_otlp_insecure_from_env(self, mock_insecure):
+ OTLPSpanExporter()
+ # pylint: disable=protected-access
+ self.assertTrue(mock_insecure.called)
+ self.assertEqual(
+ 1,
+ mock_insecure.call_count,
+ f"expected {mock_insecure} to be called",
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ def test_otlp_exporter_endpoint(self, mock_secure, mock_insecure):
+ """Just OTEL_EXPORTER_OTLP_COMPRESSION should work"""
+ expected_endpoint = "localhost:4317"
+ endpoints = [
+ (
+ "http://localhost:4317",
+ None,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "http://localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "localhost:4317",
+ True,
+ mock_insecure,
+ ),
+ (
+ "http://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ False,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ None,
+ mock_secure,
+ ),
+ (
+ "https://localhost:4317",
+ True,
+ mock_secure,
+ ),
+ ]
+ for endpoint, insecure, mock_method in endpoints:
+ OTLPSpanExporter(endpoint=endpoint, insecure=insecure)
+ self.assertEqual(
+ 1,
+ mock_method.call_count,
+ f"expected {mock_method} to be called for {endpoint} {insecure}",
+ )
+ self.assertEqual(
+ expected_endpoint,
+ mock_method.call_args[0][0],
+ f"expected {expected_endpoint} got {mock_method.call_args[0][0]} {endpoint}",
+ )
+ mock_method.reset_mock()
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict("os.environ", {OTEL_EXPORTER_OTLP_COMPRESSION: "gzip"})
+ def test_otlp_exporter_otlp_compression_envvar(
+ self, mock_insecure_channel
+ ):
+ """Just OTEL_EXPORTER_OTLP_COMPRESSION should work"""
+ OTLPSpanExporter(insecure=True)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.Gzip
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict("os.environ", {OTEL_EXPORTER_OTLP_COMPRESSION: "gzip"})
+ def test_otlp_exporter_otlp_compression_kwarg(self, mock_insecure_channel):
+ """Specifying kwarg should take precedence over env"""
+ OTLPSpanExporter(insecure=True, compression=Compression.NoCompression)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.NoCompression
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict("os.environ", {})
+ def test_otlp_exporter_otlp_compression_unspecified(
+ self, mock_insecure_channel
+ ):
+ """No env or kwarg should be NoCompression"""
+ OTLPSpanExporter(insecure=True)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.NoCompression
+ )
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.insecure_channel")
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_TRACES_COMPRESSION: "gzip"},
+ )
+    def test_otlp_exporter_otlp_compression_precedence(
+        self, mock_insecure_channel
+    ):
+        """OTEL_EXPORTER_OTLP_TRACES_COMPRESSION has higher priority than
+        OTEL_EXPORTER_OTLP_COMPRESSION
+        """
+ OTLPSpanExporter(insecure=True)
+ mock_insecure_channel.assert_called_once_with(
+ "localhost:4317", compression=Compression.Gzip
+ )
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter.ssl_channel_credentials"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.secure_channel")
+ # pylint: disable=unused-argument
+ def test_otlp_headers(self, mock_ssl_channel, mock_secure):
+ exporter = OTLPSpanExporter()
+ # pylint: disable=protected-access
+ # This ensures that there is no other header than standard user-agent.
+ self.assertEqual(
+ exporter._headers,
+ (("user-agent", "OTel-OTLP-Exporter-Python/" + __version__),),
+ )
+
+ @patch("opentelemetry.exporter.otlp.proto.common._internal.backoff")
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_handles_backoff_v2_api(self, mock_sleep, mock_backoff):
+ # In backoff ~= 2.0.0 the first value yielded from expo is None.
+ def generate_delays(*args, **kwargs):
+ if _is_backoff_v2:
+ yield None
+ yield 1
+
+ mock_backoff.expo.configure_mock(**{"side_effect": generate_delays})
+
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerUNAVAILABLE(), self.server
+ )
+ self.exporter.export([self.span])
+ mock_sleep.assert_called_once_with(1)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerUNAVAILABLE(), self.server
+ )
+ result = self.exporter.export([self.span])
+ self.assertEqual(result, SpanExportResult.FAILURE)
+ mock_sleep.assert_called_with(1)
+
+ @patch(
+ "opentelemetry.exporter.otlp.proto.grpc.exporter._create_exp_backoff_generator"
+ )
+ @patch("opentelemetry.exporter.otlp.proto.grpc.exporter.sleep")
+ def test_unavailable_delay(self, mock_sleep, mock_expo):
+
+ mock_expo.configure_mock(**{"return_value": [1]})
+
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerUNAVAILABLEDelay(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.span]), SpanExportResult.FAILURE
+ )
+ mock_sleep.assert_called_with(4)
+
+ def test_success(self):
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerSUCCESS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.span]), SpanExportResult.SUCCESS
+ )
+
+ def test_failure(self):
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerALREADY_EXISTS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.span]), SpanExportResult.FAILURE
+ )
+
+ def test_translate_spans(self):
+
+ expected = ExportTraceServiceRequest(
+ resource_spans=[
+ ResourceSpans(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_spans=[
+ ScopeSpans(
+ scope=PB2InstrumentationScope(
+ name="name", version="version"
+ ),
+ spans=[
+ OTLPSpan(
+ # pylint: disable=no-member
+ name="a",
+ start_time_unix_nano=self.span.start_time,
+ end_time_unix_nano=self.span.end_time,
+ trace_state="a=b,c=d",
+ span_id=int.to_bytes(
+ 10217189687419569865, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 67545097771067222548457157018666467027,
+ 16,
+ "big",
+ ),
+ parent_span_id=(
+ b"\000\000\000\000\000\00009"
+ ),
+ kind=(
+ OTLPSpan.SpanKind.SPAN_KIND_INTERNAL
+ ),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(int_value=1),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(bool_value=True),
+ ),
+ ],
+ events=[
+ OTLPSpan.Event(
+ name="a",
+ time_unix_nano=1591240820506462784,
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=False
+ ),
+ ),
+ ],
+ )
+ ],
+ status=Status(code=0, message=""),
+ links=[
+ OTLPSpan.Link(
+ trace_id=int.to_bytes(
+ 1, 16, "big"
+ ),
+ span_id=int.to_bytes(2, 8, "big"),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=False
+ ),
+ ),
+ ],
+ )
+ ],
+ )
+ ],
+ )
+ ],
+ ),
+ ]
+ )
+
+ # pylint: disable=protected-access
+ self.assertEqual(expected, self.exporter._translate_data([self.span]))
+
+ def test_translate_spans_multi(self):
+ expected = ExportTraceServiceRequest(
+ resource_spans=[
+ ResourceSpans(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=1)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_spans=[
+ ScopeSpans(
+ scope=PB2InstrumentationScope(
+ name="name", version="version"
+ ),
+ spans=[
+ OTLPSpan(
+ # pylint: disable=no-member
+ name="a",
+ start_time_unix_nano=self.span.start_time,
+ end_time_unix_nano=self.span.end_time,
+ trace_state="a=b,c=d",
+ span_id=int.to_bytes(
+ 10217189687419569865, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 67545097771067222548457157018666467027,
+ 16,
+ "big",
+ ),
+ parent_span_id=(
+ b"\000\000\000\000\000\00009"
+ ),
+ kind=(
+ OTLPSpan.SpanKind.SPAN_KIND_INTERNAL
+ ),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(int_value=1),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(bool_value=True),
+ ),
+ ],
+ events=[
+ OTLPSpan.Event(
+ name="a",
+ time_unix_nano=1591240820506462784,
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=False
+ ),
+ ),
+ ],
+ )
+ ],
+ status=Status(code=0, message=""),
+ links=[
+ OTLPSpan.Link(
+ trace_id=int.to_bytes(
+ 1, 16, "big"
+ ),
+ span_id=int.to_bytes(2, 8, "big"),
+ attributes=[
+ KeyValue(
+ key="a",
+ value=AnyValue(
+ int_value=1
+ ),
+ ),
+ KeyValue(
+ key="b",
+ value=AnyValue(
+ bool_value=False
+ ),
+ ),
+ ],
+ )
+ ],
+ )
+ ],
+ ),
+ ScopeSpans(
+ scope=PB2InstrumentationScope(
+ name="name2", version="version2"
+ ),
+ spans=[
+ OTLPSpan(
+ # pylint: disable=no-member
+ name="c",
+ start_time_unix_nano=self.span3.start_time,
+ end_time_unix_nano=self.span3.end_time,
+ trace_state="a=b,c=d",
+ span_id=int.to_bytes(
+ 10217189687419569865, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 67545097771067222548457157018666467027,
+ 16,
+ "big",
+ ),
+ parent_span_id=(
+ b"\000\000\000\000\000\00009"
+ ),
+ kind=(
+ OTLPSpan.SpanKind.SPAN_KIND_INTERNAL
+ ),
+ status=Status(code=0, message=""),
+ )
+ ],
+ ),
+ ],
+ ),
+ ResourceSpans(
+ resource=OTLPResource(
+ attributes=[
+ KeyValue(key="a", value=AnyValue(int_value=2)),
+ KeyValue(
+ key="b", value=AnyValue(bool_value=False)
+ ),
+ ]
+ ),
+ scope_spans=[
+ ScopeSpans(
+ scope=PB2InstrumentationScope(
+ name="name", version="version"
+ ),
+ spans=[
+ OTLPSpan(
+ # pylint: disable=no-member
+ name="b",
+ start_time_unix_nano=self.span2.start_time,
+ end_time_unix_nano=self.span2.end_time,
+ trace_state="a=b,c=d",
+ span_id=int.to_bytes(
+ 10217189687419569865, 8, "big"
+ ),
+ trace_id=int.to_bytes(
+ 67545097771067222548457157018666467027,
+ 16,
+ "big",
+ ),
+ parent_span_id=(
+ b"\000\000\000\000\000\00009"
+ ),
+ kind=(
+ OTLPSpan.SpanKind.SPAN_KIND_INTERNAL
+ ),
+ status=Status(code=0, message=""),
+ )
+ ],
+ )
+ ],
+ ),
+ ]
+ )
+
+ # pylint: disable=protected-access
+ self.assertEqual(
+ expected,
+ self.exporter._translate_data([self.span, self.span2, self.span3]),
+ )
+
+ def _check_translated_status(
+ self,
+ translated: ExportTraceServiceRequest,
+ code_expected: Status,
+ ):
+ status = translated.resource_spans[0].scope_spans[0].spans[0].status
+
+ self.assertEqual(
+ status.code,
+ code_expected,
+ )
+
+ def test_span_status_translate(self):
+ # pylint: disable=protected-access,no-member
+ unset = SDKStatus(status_code=SDKStatusCode.UNSET)
+ ok = SDKStatus(status_code=SDKStatusCode.OK)
+ error = SDKStatus(status_code=SDKStatusCode.ERROR)
+ unset_translated = self.exporter._translate_data(
+ [_create_span_with_status(unset)]
+ )
+ ok_translated = self.exporter._translate_data(
+ [_create_span_with_status(ok)]
+ )
+ error_translated = self.exporter._translate_data(
+ [_create_span_with_status(error)]
+ )
+ self._check_translated_status(
+ unset_translated,
+ Status.STATUS_CODE_UNSET,
+ )
+ self._check_translated_status(
+ ok_translated,
+ Status.STATUS_CODE_OK,
+ )
+ self._check_translated_status(
+ error_translated,
+ Status.STATUS_CODE_ERROR,
+ )
+
+ # pylint:disable=no-member
+ def test_translate_key_values(self):
+ bool_value = _encode_key_value("bool_type", False)
+ self.assertTrue(isinstance(bool_value, KeyValue))
+ self.assertEqual(bool_value.key, "bool_type")
+ self.assertTrue(isinstance(bool_value.value, AnyValue))
+ self.assertFalse(bool_value.value.bool_value)
+
+ str_value = _encode_key_value("str_type", "str")
+ self.assertTrue(isinstance(str_value, KeyValue))
+ self.assertEqual(str_value.key, "str_type")
+ self.assertTrue(isinstance(str_value.value, AnyValue))
+ self.assertEqual(str_value.value.string_value, "str")
+
+ int_value = _encode_key_value("int_type", 2)
+ self.assertTrue(isinstance(int_value, KeyValue))
+ self.assertEqual(int_value.key, "int_type")
+ self.assertTrue(isinstance(int_value.value, AnyValue))
+ self.assertEqual(int_value.value.int_value, 2)
+
+ double_value = _encode_key_value("double_type", 3.2)
+ self.assertTrue(isinstance(double_value, KeyValue))
+ self.assertEqual(double_value.key, "double_type")
+ self.assertTrue(isinstance(double_value.value, AnyValue))
+ self.assertEqual(double_value.value.double_value, 3.2)
+
+ seq_value = _encode_key_value("seq_type", ["asd", "123"])
+ self.assertTrue(isinstance(seq_value, KeyValue))
+ self.assertEqual(seq_value.key, "seq_type")
+ self.assertTrue(isinstance(seq_value.value, AnyValue))
+ self.assertTrue(isinstance(seq_value.value.array_value, ArrayValue))
+
+ arr_value = seq_value.value.array_value
+ self.assertTrue(isinstance(arr_value.values[0], AnyValue))
+ self.assertEqual(arr_value.values[0].string_value, "asd")
+ self.assertTrue(isinstance(arr_value.values[1], AnyValue))
+ self.assertEqual(arr_value.values[1].string_value, "123")
+
+        # The tracing spec currently does not support Mapping-type attributes
+ # map_value = _translate_key_values(
+ # "map_type", {"asd": "123", "def": "456"}
+ # )
+ # self.assertTrue(isinstance(map_value, KeyValue))
+ # self.assertEqual(map_value.key, "map_type")
+ # self.assertTrue(isinstance(map_value.value, AnyValue))
+ # self.assertTrue(isinstance(map_value.value.kvlist_value, KeyValueList))
+
+ # kvlist_value = map_value.value.kvlist_value
+ # self.assertTrue(isinstance(kvlist_value.values[0], KeyValue))
+ # self.assertEqual(kvlist_value.values[0].key, "asd")
+ # self.assertEqual(kvlist_value.values[0].value.string_value, "123")
+
+ def test_dropped_values(self):
+ span = get_span_with_dropped_attributes_events_links()
+ # pylint:disable=protected-access
+ translated = self.exporter._translate_data([span])
+ self.assertEqual(
+ 1,
+ translated.resource_spans[0]
+ .scope_spans[0]
+ .spans[0]
+ .dropped_links_count,
+ )
+ self.assertEqual(
+ 2,
+ translated.resource_spans[0]
+ .scope_spans[0]
+ .spans[0]
+ .dropped_attributes_count,
+ )
+ self.assertEqual(
+ 3,
+ translated.resource_spans[0]
+ .scope_spans[0]
+ .spans[0]
+ .dropped_events_count,
+ )
+ self.assertEqual(
+ 2,
+ translated.resource_spans[0]
+ .scope_spans[0]
+ .spans[0]
+ .links[0]
+ .dropped_attributes_count,
+ )
+ self.assertEqual(
+ 2,
+ translated.resource_spans[0]
+ .scope_spans[0]
+ .spans[0]
+ .events[0]
+ .dropped_attributes_count,
+ )
+
+ def test_shutdown(self):
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerSUCCESS(), self.server
+ )
+ self.assertEqual(
+ self.exporter.export([self.span]), SpanExportResult.SUCCESS
+ )
+ self.exporter.shutdown()
+ with self.assertLogs(level=WARNING) as warning:
+ self.assertEqual(
+ self.exporter.export([self.span]), SpanExportResult.FAILURE
+ )
+ self.assertEqual(
+ warning.records[0].message,
+ "Exporter already shutdown, ignoring batch",
+ )
+
+ def test_shutdown_wait_last_export(self):
+ add_TraceServiceServicer_to_server(
+ TraceServiceServicerUNAVAILABLEDelay(), self.server
+ )
+
+ export_thread = threading.Thread(
+ target=self.exporter.export, args=([self.span],)
+ )
+ export_thread.start()
+ try:
+ # pylint: disable=protected-access
+ self.assertTrue(self.exporter._export_lock.locked())
+            # The servicer delays the export by 4 seconds, while the default
+            # shutdown timeout is 30_000 milliseconds, so shutdown blocks
+            # until the in-flight export completes
+ start_time = time.time()
+ self.exporter.shutdown()
+ now = time.time()
+ self.assertGreaterEqual(now, (start_time + 30 / 1000))
+ # pylint: disable=protected-access
+ self.assertTrue(self.exporter._shutdown)
+ # pylint: disable=protected-access
+ self.assertFalse(self.exporter._export_lock.locked())
+ finally:
+ export_thread.join()
+
+
+def _create_span_with_status(status: SDKStatus):
+ span = _Span(
+ "a",
+ context=Mock(
+ **{
+ "trace_state": OrderedDict([("a", "b"), ("c", "d")]),
+ "span_id": 10217189687419569865,
+ "trace_id": 67545097771067222548457157018666467027,
+ }
+ ),
+ parent=Mock(**{"span_id": 12345}),
+ instrumentation_scope=InstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+ span.set_status(status)
+ return span
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/LICENSE b/exporter/opentelemetry-exporter-otlp-proto-http/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/README.rst b/exporter/opentelemetry-exporter-otlp-proto-http/README.rst
new file mode 100644
index 0000000000..394b4cf5e5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/README.rst
@@ -0,0 +1,25 @@
+OpenTelemetry Collector Protobuf over HTTP Exporter
+===================================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-otlp-proto-http.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-otlp-proto-http/
+
+This library allows exporting data to the OpenTelemetry Collector using the OpenTelemetry Protocol (OTLP) with Protobuf over HTTP.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-otlp-proto-http
+
+
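+Usage
+-----
+
+A minimal usage sketch; the endpoint argument shown is the exporter's
+default, so it can be omitted or pointed at your own Collector:
+
+.. code:: python
+
+    from opentelemetry import trace
+    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # Export spans in batches over OTLP/HTTP with Protobuf payloads.
+    exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
+
+    provider = TracerProvider()
+    provider.add_span_processor(BatchSpanProcessor(exporter))
+    trace.set_tracer_provider(provider)
+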
+References
+----------
+
+* `OpenTelemetry Collector Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/otlp/otlp.html>`_
+* `OpenTelemetry Collector <https://github.com/open-telemetry/opentelemetry-collector/>`_
+* `OpenTelemetry <https://opentelemetry.io/>`_
+* `OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md>`_
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/pyproject.toml b/exporter/opentelemetry-exporter-otlp-proto-http/pyproject.toml
new file mode 100644
index 0000000000..dfab84f6f9
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/pyproject.toml
@@ -0,0 +1,66 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-otlp-proto-http"
+dynamic = ["version"]
+description = "OpenTelemetry Collector Protobuf over HTTP Exporter"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "Deprecated >= 1.2.6",
+ "backoff >= 1.10.0, < 2.0.0; python_version<'3.7'",
+ "backoff >= 1.10.0, < 3.0.0; python_version>='3.7'",
+ "googleapis-common-protos ~= 1.52",
+ "opentelemetry-api ~= 1.15",
+ "opentelemetry-proto == 1.23.0.dev",
+ "opentelemetry-sdk ~= 1.23.0.dev",
+ "opentelemetry-exporter-otlp-proto-common == 1.23.0.dev",
+ "requests ~= 2.7",
+]
+
+[project.optional-dependencies]
+test = [
+ "responses >= 0.22.0, < 0.25",
+]
+
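+# Entry points through which the OpenTelemetry SDK discovers these exporters
+# by name ("otlp_proto_http"), one group per signal.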
+[project.entry-points.opentelemetry_traces_exporter]
+otlp_proto_http = "opentelemetry.exporter.otlp.proto.http.trace_exporter:OTLPSpanExporter"
+
+[project.entry-points.opentelemetry_metrics_exporter]
+otlp_proto_http = "opentelemetry.exporter.otlp.proto.http.metric_exporter:OTLPMetricExporter"
+
+[project.entry-points.opentelemetry_logs_exporter]
+otlp_proto_http = "opentelemetry.exporter.otlp.proto.http._log_exporter:OTLPLogExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-otlp-proto-http"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/otlp/proto/http/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/__init__.py
new file mode 100644
index 0000000000..2c40b39590
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/__init__.py
@@ -0,0 +1,86 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+This library allows exporting tracing data to an OTLP collector.
+
+Usage
+-----
+
+The **OTLP Span Exporter** allows exporting `OpenTelemetry`_ traces to the
+`OTLP`_ collector.
+
+You can configure the exporter with the following environment variables:
+
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_TIMEOUT`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_PROTOCOL`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_HEADERS`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_COMPRESSION`
+- :envvar:`OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE`
+- :envvar:`OTEL_EXPORTER_OTLP_TIMEOUT`
+- :envvar:`OTEL_EXPORTER_OTLP_PROTOCOL`
+- :envvar:`OTEL_EXPORTER_OTLP_HEADERS`
+- :envvar:`OTEL_EXPORTER_OTLP_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION`
+- :envvar:`OTEL_EXPORTER_OTLP_CERTIFICATE`
+
+.. _OTLP: https://github.com/open-telemetry/opentelemetry-collector/
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+
+.. code:: python
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
+ from opentelemetry.sdk.resources import Resource
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # A Resource can be required by some backends, e.g. Jaeger.
+    # If the resource is not set, traces will not appear in Jaeger.
+ resource = Resource(attributes={
+ "service.name": "service"
+ })
+
+ trace.set_tracer_provider(TracerProvider(resource=resource))
+ tracer = trace.get_tracer(__name__)
+
+ otlp_exporter = OTLPSpanExporter()
+
+ span_processor = BatchSpanProcessor(otlp_exporter)
+
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ print("Hello world!")
+
+API
+---
+"""
+import enum
+
+from .version import __version__
+
+
+_OTLP_HTTP_HEADERS = {
+ "Content-Type": "application/x-protobuf",
+ "User-Agent": "OTel-OTLP-Exporter-Python/" + __version__,
+}
+
+
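+# Values match the accepted OTEL_EXPORTER_OTLP_COMPRESSION environment
+# variable settings ("none", "deflate", "gzip").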
+class Compression(enum.Enum):
+ NoCompression = "none"
+ Deflate = "deflate"
+ Gzip = "gzip"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/_log_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/_log_exporter/__init__.py
new file mode 100644
index 0000000000..4703b10286
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/_log_exporter/__init__.py
@@ -0,0 +1,195 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import gzip
+import logging
+import zlib
+from io import BytesIO
+from os import environ
+from typing import Dict, Optional, Sequence
+from time import sleep
+
+import requests
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _create_exp_backoff_generator,
+)
+from opentelemetry.exporter.otlp.proto.common._log_encoder import encode_logs
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS,
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
+)
+from opentelemetry.sdk._logs import LogData
+from opentelemetry.sdk._logs.export import (
+ LogExporter,
+ LogExportResult,
+)
+from opentelemetry.exporter.otlp.proto.http import (
+ _OTLP_HTTP_HEADERS,
+ Compression,
+)
+from opentelemetry.util.re import parse_env_headers
+
+
+_logger = logging.getLogger(__name__)
+
+
+DEFAULT_COMPRESSION = Compression.NoCompression
+DEFAULT_ENDPOINT = "http://localhost:4318/"
+DEFAULT_LOGS_EXPORT_PATH = "v1/logs"
+DEFAULT_TIMEOUT = 10 # in seconds
+
+
+class OTLPLogExporter(LogExporter):
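+    """Exports log data to an OTLP collector over HTTP using Protobuf encoding."""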
+
+ _MAX_RETRY_TIMEOUT = 64
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ certificate_file: Optional[str] = None,
+ headers: Optional[Dict[str, str]] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ session: Optional[requests.Session] = None,
+ ):
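+        # Explicit constructor arguments win; otherwise the signal-specific
+        # OTEL_EXPORTER_OTLP_LOGS_* variables take precedence over the
+        # generic OTEL_EXPORTER_OTLP_* ones.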
+ self._endpoint = endpoint or environ.get(
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
+ _append_logs_path(
+ environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT)
+ ),
+ )
+ self._certificate_file = certificate_file or environ.get(
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE,
+ environ.get(OTEL_EXPORTER_OTLP_CERTIFICATE, True),
+ )
+ headers_string = environ.get(
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS,
+ environ.get(OTEL_EXPORTER_OTLP_HEADERS, ""),
+ )
+ self._headers = headers or parse_env_headers(headers_string)
+ self._timeout = timeout or int(
+ environ.get(
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
+ environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, DEFAULT_TIMEOUT),
+ )
+ )
+ self._compression = compression or _compression_from_env()
+ self._session = session or requests.Session()
+ self._session.headers.update(self._headers)
+ self._session.headers.update(_OTLP_HTTP_HEADERS)
+ if self._compression is not Compression.NoCompression:
+ self._session.headers.update(
+ {"Content-Encoding": self._compression.value}
+ )
+ self._shutdown = False
+
+    def _export(self, serialized_data: bytes):
+ data = serialized_data
+ if self._compression == Compression.Gzip:
+ gzip_data = BytesIO()
+ with gzip.GzipFile(fileobj=gzip_data, mode="w") as gzip_stream:
+ gzip_stream.write(serialized_data)
+ data = gzip_data.getvalue()
+ elif self._compression == Compression.Deflate:
+ data = zlib.compress(bytes(serialized_data))
+
+ return self._session.post(
+ url=self._endpoint,
+ data=data,
+ verify=self._certificate_file,
+ timeout=self._timeout,
+ )
+
+ @staticmethod
+ def _retryable(resp: requests.Response) -> bool:
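+        # Retry on 408 (Request Timeout) and any 5xx server error.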
+ if resp.status_code == 408:
+ return True
+ if resp.status_code >= 500 and resp.status_code <= 599:
+ return True
+ return False
+
+ def export(self, batch: Sequence[LogData]) -> LogExportResult:
+ # After the call to Shutdown subsequent calls to Export are
+ # not allowed and should return a Failure result.
+ if self._shutdown:
+ _logger.warning("Exporter already shutdown, ignoring batch")
+ return LogExportResult.FAILURE
+
+ serialized_data = encode_logs(batch).SerializeToString()
+
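+        # Retry with exponentially increasing delays; give up and fail the
+        # batch once the backoff generator reaches _MAX_RETRY_TIMEOUT.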
+ for delay in _create_exp_backoff_generator(
+ max_value=self._MAX_RETRY_TIMEOUT
+ ):
+
+ if delay == self._MAX_RETRY_TIMEOUT:
+ return LogExportResult.FAILURE
+
+ resp = self._export(serialized_data)
+ # pylint: disable=no-else-return
+ if resp.ok:
+ return LogExportResult.SUCCESS
+ elif self._retryable(resp):
+ _logger.warning(
+ "Transient error %s encountered while exporting logs batch, retrying in %ss.",
+ resp.reason,
+ delay,
+ )
+ sleep(delay)
+ continue
+ else:
+ _logger.error(
+ "Failed to export logs batch code: %s, reason: %s",
+ resp.status_code,
+ resp.text,
+ )
+ return LogExportResult.FAILURE
+ return LogExportResult.FAILURE
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
+
+ def shutdown(self):
+ if self._shutdown:
+ _logger.warning("Exporter already shutdown, ignoring call")
+ return
+ self._session.close()
+ self._shutdown = True
+
+
+def _compression_from_env() -> Compression:
+ compression = (
+ environ.get(
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
+ environ.get(OTEL_EXPORTER_OTLP_COMPRESSION, "none"),
+ )
+ .lower()
+ .strip()
+ )
+ return Compression(compression)
+
+
+def _append_logs_path(endpoint: str) -> str:
+ if endpoint.endswith("/"):
+ return endpoint + DEFAULT_LOGS_EXPORT_PATH
+ return endpoint + f"/{DEFAULT_LOGS_EXPORT_PATH}"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/metric_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/metric_exporter/__init__.py
new file mode 100644
index 0000000000..becdab257f
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/metric_exporter/__init__.py
@@ -0,0 +1,239 @@
+# Copyright The OpenTelemetry Authors
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import gzip
+import logging
+import zlib
+from os import environ
+from typing import Dict, Optional, Any, Callable, List
+from typing import Sequence, Mapping # noqa: F401
+
+from io import BytesIO
+from time import sleep
+from deprecated import deprecated
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _get_resource_data,
+ _create_exp_backoff_generator,
+)
+from opentelemetry.exporter.otlp.proto.common._internal.metrics_encoder import (
+ OTLPMetricExporterMixin,
+)
+from opentelemetry.exporter.otlp.proto.common.metrics_encoder import (
+ encode_metrics,
+)
+from opentelemetry.exporter.otlp.proto.http import Compression
+from opentelemetry.sdk.metrics._internal.aggregation import Aggregation
+from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import ( # noqa: F401
+ ExportMetricsServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ AnyValue,
+ ArrayValue,
+ KeyValue,
+ KeyValueList,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ InstrumentationScope,
+)
+from opentelemetry.proto.resource.v1.resource_pb2 import Resource # noqa: F401
+from opentelemetry.proto.metrics.v1 import metrics_pb2 as pb2 # noqa: F401
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS,
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ MetricExporter,
+ MetricExportResult,
+ MetricsData,
+)
+from opentelemetry.sdk.metrics.export import ( # noqa: F401
+ Gauge,
+ Histogram as HistogramType,
+ Sum,
+)
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.util.re import parse_env_headers
+
+import requests
+from opentelemetry.proto.resource.v1.resource_pb2 import (
+ Resource as PB2Resource,
+)
+
+_logger = logging.getLogger(__name__)
+
+
+DEFAULT_COMPRESSION = Compression.NoCompression
+DEFAULT_ENDPOINT = "http://localhost:4318/"
+DEFAULT_METRICS_EXPORT_PATH = "v1/metrics"
+DEFAULT_TIMEOUT = 10 # in seconds
+
+
+class OTLPMetricExporter(MetricExporter, OTLPMetricExporterMixin):
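+    """Exports metrics data to an OTLP collector over HTTP using Protobuf encoding."""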
+
+ _MAX_RETRY_TIMEOUT = 64
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ certificate_file: Optional[str] = None,
+ headers: Optional[Dict[str, str]] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ session: Optional[requests.Session] = None,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[type, Aggregation] = None,
+ ):
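+        # Explicit constructor arguments win; otherwise the signal-specific
+        # OTEL_EXPORTER_OTLP_METRICS_* variables take precedence over the
+        # generic OTEL_EXPORTER_OTLP_* ones.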
+ self._endpoint = endpoint or environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
+ _append_metrics_path(
+ environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT)
+ ),
+ )
+ self._certificate_file = certificate_file or environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE,
+ environ.get(OTEL_EXPORTER_OTLP_CERTIFICATE, True),
+ )
+ headers_string = environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS,
+ environ.get(OTEL_EXPORTER_OTLP_HEADERS, ""),
+ )
+ self._headers = headers or parse_env_headers(headers_string)
+ self._timeout = timeout or int(
+ environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
+ environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, DEFAULT_TIMEOUT),
+ )
+ )
+ self._compression = compression or _compression_from_env()
+ self._session = session or requests.Session()
+ self._session.headers.update(self._headers)
+ self._session.headers.update(
+ {"Content-Type": "application/x-protobuf"}
+ )
+ if self._compression is not Compression.NoCompression:
+ self._session.headers.update(
+ {"Content-Encoding": self._compression.value}
+ )
+
+ self._common_configuration(preferred_temporality)
+
+    def _export(self, serialized_data: bytes):
+ data = serialized_data
+ if self._compression == Compression.Gzip:
+ gzip_data = BytesIO()
+ with gzip.GzipFile(fileobj=gzip_data, mode="w") as gzip_stream:
+ gzip_stream.write(serialized_data)
+ data = gzip_data.getvalue()
+ elif self._compression == Compression.Deflate:
+ data = zlib.compress(bytes(serialized_data))
+
+ return self._session.post(
+ url=self._endpoint,
+ data=data,
+ verify=self._certificate_file,
+ timeout=self._timeout,
+ )
+
+ @staticmethod
+ def _retryable(resp: requests.Response) -> bool:
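+        # Retry on 408 (Request Timeout) and any 5xx server error.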
+ if resp.status_code == 408:
+ return True
+ if resp.status_code >= 500 and resp.status_code <= 599:
+ return True
+ return False
+
+ def export(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ serialized_data = encode_metrics(metrics_data)
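+        # Retry with exponentially increasing delays; give up and fail the
+        # batch once the backoff generator reaches _MAX_RETRY_TIMEOUT.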
+ for delay in _create_exp_backoff_generator(
+ max_value=self._MAX_RETRY_TIMEOUT
+ ):
+
+ if delay == self._MAX_RETRY_TIMEOUT:
+ return MetricExportResult.FAILURE
+
+ resp = self._export(serialized_data.SerializeToString())
+ # pylint: disable=no-else-return
+ if resp.ok:
+ return MetricExportResult.SUCCESS
+ elif self._retryable(resp):
+ _logger.warning(
+ "Transient error %s encountered while exporting metric batch, retrying in %ss.",
+ resp.reason,
+ delay,
+ )
+ sleep(delay)
+ continue
+ else:
+ _logger.error(
+ "Failed to export batch code: %s, reason: %s",
+ resp.status_code,
+ resp.text,
+ )
+ return MetricExportResult.FAILURE
+ return MetricExportResult.FAILURE
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
+ @property
+ def _exporting(self) -> str:
+ return "metrics"
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
+
+
+@deprecated(
+ version="1.18.0",
+ reason="Use one of the encoders from opentelemetry-exporter-otlp-proto-common instead",
+)
+def get_resource_data(
+ sdk_resource_scope_data: Dict[SDKResource, Any], # ResourceDataT?
+ resource_class: Callable[..., PB2Resource],
+ name: str,
+) -> List[PB2Resource]:
+ return _get_resource_data(sdk_resource_scope_data, resource_class, name)
+
+
+def _compression_from_env() -> Compression:
+ compression = (
+ environ.get(
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
+ environ.get(OTEL_EXPORTER_OTLP_COMPRESSION, "none"),
+ )
+ .lower()
+ .strip()
+ )
+ return Compression(compression)
+
+
+def _append_metrics_path(endpoint: str) -> str:
+ if endpoint.endswith("/"):
+ return endpoint + DEFAULT_METRICS_EXPORT_PATH
+ return endpoint + f"/{DEFAULT_METRICS_EXPORT_PATH}"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/py.typed b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py
new file mode 100644
index 0000000000..d98a1b84a7
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py
@@ -0,0 +1,193 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import gzip
+import logging
+import zlib
+from io import BytesIO
+from os import environ
+from typing import Dict, Optional
+from time import sleep
+
+import requests
+
+from opentelemetry.exporter.otlp.proto.common._internal import (
+ _create_exp_backoff_generator,
+)
+from opentelemetry.exporter.otlp.proto.common.trace_encoder import (
+ encode_spans,
+)
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS,
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+)
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+from opentelemetry.exporter.otlp.proto.http import (
+ _OTLP_HTTP_HEADERS,
+ Compression,
+)
+from opentelemetry.util.re import parse_env_headers
+
+
+_logger = logging.getLogger(__name__)
+
+
+DEFAULT_COMPRESSION = Compression.NoCompression
+DEFAULT_ENDPOINT = "http://localhost:4318/"
+DEFAULT_TRACES_EXPORT_PATH = "v1/traces"
+DEFAULT_TIMEOUT = 10 # in seconds
+
+
+class OTLPSpanExporter(SpanExporter):
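+    """Exports span data to an OTLP collector over HTTP using Protobuf encoding."""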
+
+ _MAX_RETRY_TIMEOUT = 64
+
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ certificate_file: Optional[str] = None,
+ headers: Optional[Dict[str, str]] = None,
+ timeout: Optional[int] = None,
+ compression: Optional[Compression] = None,
+ session: Optional[requests.Session] = None,
+ ):
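+        # Explicit constructor arguments win; otherwise the signal-specific
+        # OTEL_EXPORTER_OTLP_TRACES_* variables take precedence over the
+        # generic OTEL_EXPORTER_OTLP_* ones.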
+ self._endpoint = endpoint or environ.get(
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
+ _append_trace_path(
+ environ.get(OTEL_EXPORTER_OTLP_ENDPOINT, DEFAULT_ENDPOINT)
+ ),
+ )
+ self._certificate_file = certificate_file or environ.get(
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
+ environ.get(OTEL_EXPORTER_OTLP_CERTIFICATE, True),
+ )
+ headers_string = environ.get(
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS,
+ environ.get(OTEL_EXPORTER_OTLP_HEADERS, ""),
+ )
+ self._headers = headers or parse_env_headers(headers_string)
+ self._timeout = timeout or int(
+ environ.get(
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
+ environ.get(OTEL_EXPORTER_OTLP_TIMEOUT, DEFAULT_TIMEOUT),
+ )
+ )
+ self._compression = compression or _compression_from_env()
+ self._session = session or requests.Session()
+ self._session.headers.update(self._headers)
+ self._session.headers.update(_OTLP_HTTP_HEADERS)
+ if self._compression is not Compression.NoCompression:
+ self._session.headers.update(
+ {"Content-Encoding": self._compression.value}
+ )
+ self._shutdown = False
+
+    def _export(self, serialized_data: bytes):
+ data = serialized_data
+ if self._compression == Compression.Gzip:
+ gzip_data = BytesIO()
+ with gzip.GzipFile(fileobj=gzip_data, mode="w") as gzip_stream:
+ gzip_stream.write(serialized_data)
+ data = gzip_data.getvalue()
+ elif self._compression == Compression.Deflate:
+ data = zlib.compress(bytes(serialized_data))
+
+ return self._session.post(
+ url=self._endpoint,
+ data=data,
+ verify=self._certificate_file,
+ timeout=self._timeout,
+ )
+
+ @staticmethod
+ def _retryable(resp: requests.Response) -> bool:
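+        # Retry on 408 (Request Timeout) and any 5xx server error.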
+ if resp.status_code == 408:
+ return True
+ if resp.status_code >= 500 and resp.status_code <= 599:
+ return True
+ return False
+
+ def export(self, spans) -> SpanExportResult:
+ # After the call to Shutdown subsequent calls to Export are
+ # not allowed and should return a Failure result.
+ if self._shutdown:
+ _logger.warning("Exporter already shutdown, ignoring batch")
+ return SpanExportResult.FAILURE
+
+ serialized_data = encode_spans(spans).SerializeToString()
+
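+        # Retry with exponentially increasing delays; give up and fail the
+        # batch once the backoff generator reaches _MAX_RETRY_TIMEOUT.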
+ for delay in _create_exp_backoff_generator(
+ max_value=self._MAX_RETRY_TIMEOUT
+ ):
+
+ if delay == self._MAX_RETRY_TIMEOUT:
+ return SpanExportResult.FAILURE
+
+ resp = self._export(serialized_data)
+ # pylint: disable=no-else-return
+ if resp.ok:
+ return SpanExportResult.SUCCESS
+ elif self._retryable(resp):
+ _logger.warning(
+ "Transient error %s encountered while exporting span batch, retrying in %ss.",
+ resp.reason,
+ delay,
+ )
+ sleep(delay)
+ continue
+ else:
+ _logger.error(
+ "Failed to export batch code: %s, reason: %s",
+ resp.status_code,
+ resp.text,
+ )
+ return SpanExportResult.FAILURE
+ return SpanExportResult.FAILURE
+
+ def shutdown(self):
+ if self._shutdown:
+ _logger.warning("Exporter already shutdown, ignoring call")
+ return
+ self._session.close()
+ self._shutdown = True
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Nothing is buffered in this exporter, so this method does nothing."""
+ return True
+
+
+def _compression_from_env() -> Compression:
+ compression = (
+ environ.get(
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
+ environ.get(OTEL_EXPORTER_OTLP_COMPRESSION, "none"),
+ )
+ .lower()
+ .strip()
+ )
+ return Compression(compression)
+
+
+def _append_trace_path(endpoint: str) -> str:
+ if endpoint.endswith("/"):
+ return endpoint + DEFAULT_TRACES_EXPORT_PATH
+ return endpoint + f"/{DEFAULT_TRACES_EXPORT_PATH}"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/encoder/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/encoder/__init__.py
new file mode 100644
index 0000000000..a0036ecd24
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/trace_exporter/encoder/__init__.py
@@ -0,0 +1,62 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging # noqa: F401
+from collections import abc # noqa: F401
+from typing import Any, List, Optional, Sequence # noqa: F401
+
+from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import ( # noqa: F401
+ ExportTraceServiceRequest as PB2ExportTraceServiceRequest,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ AnyValue as PB2AnyValue,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ ArrayValue as PB2ArrayValue,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ InstrumentationScope as PB2InstrumentationScope,
+)
+from opentelemetry.proto.common.v1.common_pb2 import ( # noqa: F401
+ KeyValue as PB2KeyValue,
+)
+from opentelemetry.proto.resource.v1.resource_pb2 import ( # noqa: F401
+ Resource as PB2Resource,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ( # noqa: F401
+ ScopeSpans as PB2ScopeSpans,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ( # noqa: F401
+ ResourceSpans as PB2ResourceSpans,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ( # noqa: F401
+ Span as PB2SPan,
+)
+from opentelemetry.proto.trace.v1.trace_pb2 import ( # noqa: F401
+ Status as PB2Status,
+)
+from opentelemetry.sdk.trace import Event # noqa: F401
+from opentelemetry.sdk.util.instrumentation import ( # noqa: F401
+ InstrumentationScope,
+)
+from opentelemetry.sdk.trace import Resource # noqa: F401
+from opentelemetry.sdk.trace import Span as SDKSpan # noqa: F401
+from opentelemetry.trace import Link # noqa: F401
+from opentelemetry.trace import SpanKind # noqa: F401
+from opentelemetry.trace.span import ( # noqa: F401
+ SpanContext,
+ TraceState,
+ Status,
+)
+from opentelemetry.util.types import Attributes # noqa: F401
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/version.py b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/src/opentelemetry/exporter/otlp/proto/http/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/tests/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/tests/metrics/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-http/tests/metrics/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/tests/metrics/test_otlp_metrics_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-http/tests/metrics/test_otlp_metrics_exporter.py
new file mode 100644
index 0000000000..c06b5db3c2
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/tests/metrics/test_otlp_metrics_exporter.py
@@ -0,0 +1,489 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import WARNING
+from os import environ
+from unittest import TestCase
+from unittest.mock import MagicMock, Mock, patch
+
+from requests import Session
+from requests.models import Response
+from responses import POST, activate, add
+
+from opentelemetry.exporter.otlp.proto.common._internal import _is_backoff_v2
+from opentelemetry.exporter.otlp.proto.common.metrics_encoder import (
+ encode_metrics,
+)
+from opentelemetry.exporter.otlp.proto.http import Compression
+from opentelemetry.exporter.otlp.proto.http.metric_exporter import (
+ DEFAULT_COMPRESSION,
+ DEFAULT_ENDPOINT,
+ DEFAULT_METRICS_EXPORT_PATH,
+ DEFAULT_TIMEOUT,
+ OTLPMetricExporter,
+)
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS,
+ OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE,
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+)
+from opentelemetry.sdk.metrics import (
+ Counter,
+ Histogram,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ MetricExportResult,
+ MetricsData,
+ ResourceMetrics,
+ ScopeMetrics,
+)
+from opentelemetry.sdk.metrics.view import (
+ ExplicitBucketHistogramAggregation,
+ ExponentialBucketHistogramAggregation,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import (
+ InstrumentationScope as SDKInstrumentationScope,
+)
+from opentelemetry.test.metrictestutil import _generate_sum
+
+OS_ENV_ENDPOINT = "os.env.base"
+OS_ENV_CERTIFICATE = "os/env/base.crt"
+OS_ENV_HEADERS = "envHeader1=val1,envHeader2=val2"
+OS_ENV_TIMEOUT = "30"
+
+
+# pylint: disable=protected-access
+class TestOTLPMetricExporter(TestCase):
+ def setUp(self):
+
+ self.metrics = {
+ "sum_int": MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Resource(
+ attributes={"a": 1, "b": False},
+ schema_url="resource_schema_url",
+ ),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=SDKInstrumentationScope(
+ name="first_name",
+ version="first_version",
+                                schema_url="instrumentation_scope_schema_url",
+ ),
+ metrics=[_generate_sum("sum_int", 33)],
+ schema_url="instrumentation_scope_schema_url",
+ )
+ ],
+ schema_url="resource_schema_url",
+ )
+ ]
+ ),
+ }
+
+ def test_constructor_default(self):
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter._endpoint, DEFAULT_ENDPOINT + DEFAULT_METRICS_EXPORT_PATH
+ )
+ self.assertEqual(exporter._certificate_file, True)
+ self.assertEqual(exporter._timeout, DEFAULT_TIMEOUT)
+ self.assertIs(exporter._compression, DEFAULT_COMPRESSION)
+ self.assertEqual(exporter._headers, {})
+ self.assertIsInstance(exporter._session, Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE: "metrics/certificate.env",
+ OTEL_EXPORTER_OTLP_METRICS_COMPRESSION: Compression.Deflate.value,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: "https://metrics.endpoint.env",
+ OTEL_EXPORTER_OTLP_METRICS_HEADERS: "metricsEnv1=val1,metricsEnv2=val2,metricEnv3===val3==",
+ OTEL_EXPORTER_OTLP_METRICS_TIMEOUT: "40",
+ },
+ )
+ def test_exporter_metrics_env_take_priority(self):
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(exporter._endpoint, "https://metrics.endpoint.env")
+ self.assertEqual(exporter._certificate_file, "metrics/certificate.env")
+ self.assertEqual(exporter._timeout, 40)
+ self.assertIs(exporter._compression, Compression.Deflate)
+ self.assertEqual(
+ exporter._headers,
+ {
+ "metricsenv1": "val1",
+ "metricsenv2": "val2",
+ "metricenv3": "==val3==",
+ },
+ )
+ self.assertIsInstance(exporter._session, Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: "https://metrics.endpoint.env",
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_constructor_take_priority(self):
+ exporter = OTLPMetricExporter(
+ endpoint="example.com/1234",
+ certificate_file="path/to/service.crt",
+ headers={"testHeader1": "value1", "testHeader2": "value2"},
+ timeout=20,
+ compression=Compression.NoCompression,
+ session=Session(),
+ )
+
+ self.assertEqual(exporter._endpoint, "example.com/1234")
+ self.assertEqual(exporter._certificate_file, "path/to/service.crt")
+ self.assertEqual(exporter._timeout, 20)
+ self.assertIs(exporter._compression, Compression.NoCompression)
+ self.assertEqual(
+ exporter._headers,
+ {"testHeader1": "value1", "testHeader2": "value2"},
+ )
+ self.assertIsInstance(exporter._session, Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_env(self):
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(exporter._certificate_file, OS_ENV_CERTIFICATE)
+ self.assertEqual(exporter._timeout, int(OS_ENV_TIMEOUT))
+ self.assertIs(exporter._compression, Compression.Gzip)
+ self.assertEqual(
+ exporter._headers, {"envheader1": "val1", "envheader2": "val2"}
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT},
+ )
+ def test_exporter_env_endpoint_without_slash(self):
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter._endpoint,
+ OS_ENV_ENDPOINT + f"/{DEFAULT_METRICS_EXPORT_PATH}",
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT + "/"},
+ )
+ def test_exporter_env_endpoint_with_slash(self):
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter._endpoint,
+ OS_ENV_ENDPOINT + f"/{DEFAULT_METRICS_EXPORT_PATH}",
+ )
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_HEADERS: "envHeader1=val1,envHeader2=val2,missingValue"
+ },
+ )
+ def test_headers_parse_from_env(self):
+
+ with self.assertLogs(level="WARNING") as cm:
+ _ = OTLPMetricExporter()
+
+ self.assertEqual(
+ cm.records[0].message,
+ (
+ "Header format invalid! Header values in environment "
+ "variables must be URL encoded per the OpenTelemetry "
+ "Protocol Exporter specification: missingValue"
+ ),
+ )
+
+ @patch.object(Session, "post")
+ def test_success(self, mock_post):
+ resp = Response()
+ resp.status_code = 200
+ mock_post.return_value = resp
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.SUCCESS,
+ )
+
+ @patch.object(Session, "post")
+ def test_failure(self, mock_post):
+ resp = Response()
+ resp.status_code = 401
+ mock_post.return_value = resp
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.FAILURE,
+ )
+
+ @patch.object(Session, "post")
+ def test_serialization(self, mock_post):
+
+ resp = Response()
+ resp.status_code = 200
+ mock_post.return_value = resp
+
+ exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ exporter.export(self.metrics["sum_int"]),
+ MetricExportResult.SUCCESS,
+ )
+
+ serialized_data = encode_metrics(self.metrics["sum_int"])
+ mock_post.assert_called_once_with(
+ url=exporter._endpoint,
+ data=serialized_data.SerializeToString(),
+ verify=exporter._certificate_file,
+ timeout=exporter._timeout,
+ )
+
+ @activate
+ @patch("opentelemetry.exporter.otlp.proto.common._internal.backoff")
+ @patch("opentelemetry.exporter.otlp.proto.http.metric_exporter.sleep")
+ def test_handles_backoff_v2_api(self, mock_sleep, mock_backoff):
+ # In backoff ~= 2.0.0 the first value yielded from expo is None.
+ def generate_delays(*args, **kwargs):
+ if _is_backoff_v2:
+ yield None
+ yield 1
+
+ mock_backoff.expo.configure_mock(**{"side_effect": generate_delays})
+
+ # return a retryable error
+ add(
+ POST,
+ "http://metrics.example.com/export",
+ json={"error": "something exploded"},
+ status=500,
+ )
+
+ exporter = OTLPMetricExporter(
+ endpoint="http://metrics.example.com/export"
+ )
+ metrics_data = self.metrics["sum_int"]
+
+ exporter.export(metrics_data)
+ mock_sleep.assert_called_once_with(1)
+
+ def test_aggregation_temporality(self):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(temporality, AggregationTemporality.CUMULATIVE)
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "CUMULATIVE"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(
+ temporality, AggregationTemporality.CUMULATIVE
+ )
+
+ with patch.dict(
+ environ, {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "ABC"}
+ ):
+
+ with self.assertLogs(level=WARNING):
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ for (
+ temporality
+ ) in otlp_metric_exporter._preferred_temporality.values():
+ self.assertEqual(
+ temporality, AggregationTemporality.CUMULATIVE
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "DELTA"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Counter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableCounter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[
+ ObservableUpDownCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableGauge],
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE: "LOWMEMORY"},
+ ):
+
+ otlp_metric_exporter = OTLPMetricExporter()
+
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Counter],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[
+ ObservableUpDownCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ otlp_metric_exporter._preferred_temporality[ObservableGauge],
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ def test_exponential_explicit_bucket_histogram(self):
+
+ self.assertIsInstance(
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+
+ with patch.dict(
+ environ,
+ {
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "base2_exponential_bucket_histogram"
+ },
+ ):
+ self.assertIsInstance(
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExponentialBucketHistogramAggregation,
+ )
+
+ with patch.dict(
+ environ,
+ {OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "abc"},
+ ):
+ with self.assertLogs(level=WARNING) as log:
+ self.assertIsInstance(
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+ self.assertIn(
+ (
+ "Invalid value for OTEL_EXPORTER_OTLP_METRICS_DEFAULT_"
+ "HISTOGRAM_AGGREGATION: abc, using explicit bucket "
+ "histogram aggregation"
+ ),
+ log.output[0],
+ )
+
+ with patch.dict(
+ environ,
+ {
+ OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION: "explicit_bucket_histogram"
+ },
+ ):
+ self.assertIsInstance(
+ OTLPMetricExporter()._preferred_aggregation[Histogram],
+ ExplicitBucketHistogramAggregation,
+ )
+
+ @patch.object(OTLPMetricExporter, "_export", return_value=Mock(ok=True))
+ def test_2xx_status_code(self, mock_otlp_metric_exporter):
+ """
+ Test that any HTTP 2XX code returns a successful result
+ """
+
+ self.assertEqual(
+ OTLPMetricExporter().export(MagicMock()),
+ MetricExportResult.SUCCESS,
+ )
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_log_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_log_exporter.py
new file mode 100644
index 0000000000..e601e5d00c
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_log_exporter.py
@@ -0,0 +1,275 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=protected-access
+
+import unittest
+from typing import List
+from unittest.mock import MagicMock, Mock, patch
+
+import requests
+import responses
+
+from opentelemetry._logs import SeverityNumber
+from opentelemetry.exporter.otlp.proto.common._internal import _is_backoff_v2
+from opentelemetry.exporter.otlp.proto.http import Compression
+from opentelemetry.exporter.otlp.proto.http._log_exporter import (
+ DEFAULT_COMPRESSION,
+ DEFAULT_ENDPOINT,
+ DEFAULT_LOGS_EXPORT_PATH,
+ DEFAULT_TIMEOUT,
+ OTLPLogExporter,
+)
+from opentelemetry.exporter.otlp.proto.http.version import __version__
+from opentelemetry.sdk._logs import LogData
+from opentelemetry.sdk._logs import LogRecord as SDKLogRecord
+from opentelemetry.sdk._logs.export import LogExportResult
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS,
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+)
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.trace import TraceFlags
+
+ENV_ENDPOINT = "http://localhost.env:8080/"
+ENV_CERTIFICATE = "/etc/base.crt"
+ENV_HEADERS = "envHeader1=val1,envHeader2=val2"
+ENV_TIMEOUT = "30"
+
+
+class TestOTLPHTTPLogExporter(unittest.TestCase):
+ def test_constructor_default(self):
+
+ exporter = OTLPLogExporter()
+
+ self.assertEqual(
+ exporter._endpoint, DEFAULT_ENDPOINT + DEFAULT_LOGS_EXPORT_PATH
+ )
+ self.assertEqual(exporter._certificate_file, True)
+ self.assertEqual(exporter._timeout, DEFAULT_TIMEOUT)
+ self.assertIs(exporter._compression, DEFAULT_COMPRESSION)
+ self.assertEqual(exporter._headers, {})
+ self.assertIsInstance(exporter._session, requests.Session)
+ self.assertIn("User-Agent", exporter._session.headers)
+ self.assertEqual(
+ exporter._session.headers.get("Content-Type"),
+ "application/x-protobuf",
+ )
+ self.assertEqual(
+ exporter._session.headers.get("User-Agent"),
+ "OTel-OTLP-Exporter-Python/" + __version__,
+ )
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS: ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: ENV_TIMEOUT,
+ OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE: "logs/certificate.env",
+ OTEL_EXPORTER_OTLP_LOGS_COMPRESSION: Compression.Deflate.value,
+ OTEL_EXPORTER_OTLP_LOGS_ENDPOINT: "https://logs.endpoint.env",
+ OTEL_EXPORTER_OTLP_LOGS_HEADERS: "logsEnv1=val1,logsEnv2=val2,logsEnv3===val3==",
+ OTEL_EXPORTER_OTLP_LOGS_TIMEOUT: "40",
+ },
+ )
+    def test_exporter_logs_env_take_priority(self):
+ exporter = OTLPLogExporter()
+
+ self.assertEqual(exporter._endpoint, "https://logs.endpoint.env")
+ self.assertEqual(exporter._certificate_file, "logs/certificate.env")
+ self.assertEqual(exporter._timeout, 40)
+ self.assertIs(exporter._compression, Compression.Deflate)
+ self.assertEqual(
+ exporter._headers,
+ {
+ "logsenv1": "val1",
+ "logsenv2": "val2",
+ "logsenv3": "==val3==",
+ },
+ )
+ self.assertIsInstance(exporter._session, requests.Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS: ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_constructor_take_priority(self):
+ sess = MagicMock()
+ exporter = OTLPLogExporter(
+ endpoint="endpoint.local:69/logs",
+ certificate_file="/hello.crt",
+ headers={"testHeader1": "value1", "testHeader2": "value2"},
+ timeout=70,
+ compression=Compression.NoCompression,
+ session=sess(),
+ )
+
+ self.assertEqual(exporter._endpoint, "endpoint.local:69/logs")
+ self.assertEqual(exporter._certificate_file, "/hello.crt")
+ self.assertEqual(exporter._timeout, 70)
+ self.assertIs(exporter._compression, Compression.NoCompression)
+ self.assertEqual(
+ exporter._headers,
+ {"testHeader1": "value1", "testHeader2": "value2"},
+ )
+ self.assertTrue(sess.called)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS: ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_env(self):
+
+ exporter = OTLPLogExporter()
+
+ self.assertEqual(
+ exporter._endpoint, ENV_ENDPOINT + DEFAULT_LOGS_EXPORT_PATH
+ )
+ self.assertEqual(exporter._certificate_file, ENV_CERTIFICATE)
+ self.assertEqual(exporter._timeout, int(ENV_TIMEOUT))
+ self.assertIs(exporter._compression, Compression.Gzip)
+ self.assertEqual(
+ exporter._headers, {"envheader1": "val1", "envheader2": "val2"}
+ )
+ self.assertIsInstance(exporter._session, requests.Session)
+
+ @responses.activate
+ @patch("opentelemetry.exporter.otlp.proto.common._internal.backoff")
+ @patch("opentelemetry.exporter.otlp.proto.http._log_exporter.sleep")
+ def test_handles_backoff_v2_api(self, mock_sleep, mock_backoff):
+ # In backoff ~= 2.0.0 the first value yielded from expo is None.
+ def generate_delays(*args, **kwargs):
+ if _is_backoff_v2:
+ yield None
+ yield 1
+
+ mock_backoff.expo.configure_mock(**{"side_effect": generate_delays})
+
+ # return a retryable error
+ responses.add(
+ responses.POST,
+ "http://logs.example.com/export",
+ json={"error": "something exploded"},
+ status=500,
+ )
+
+ exporter = OTLPLogExporter(endpoint="http://logs.example.com/export")
+ logs = self._get_sdk_log_data()
+
+ exporter.export(logs)
+ mock_sleep.assert_called_once_with(1)
+
+ @staticmethod
+ def _get_sdk_log_data() -> List[LogData]:
+ log1 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650195189786880,
+ trace_id=89564621134313219400156819398935297684,
+ span_id=1312458408527513268,
+ trace_flags=TraceFlags(0x01),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Do not go gentle into that good night. Rage, rage against the dying of the light",
+ resource=SDKResource({"first_resource": "value"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+
+ log2 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650249738562048,
+ trace_id=0,
+ span_id=0,
+ trace_flags=TraceFlags.DEFAULT,
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Cooper, this is no time for caution!",
+ resource=SDKResource({"second_resource": "CASE"}),
+ attributes={},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "second_name", "second_version"
+ ),
+ )
+
+ log3 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650427658989056,
+ trace_id=271615924622795969659406376515024083555,
+ span_id=4242561578944770265,
+ trace_flags=TraceFlags(0x01),
+ severity_text="DEBUG",
+ severity_number=SeverityNumber.DEBUG,
+ body="To our galaxy",
+ resource=SDKResource({"second_resource": "CASE"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=None,
+ )
+
+ log4 = LogData(
+ log_record=SDKLogRecord(
+ timestamp=1644650584292683008,
+ trace_id=212592107417388365804938480559624925555,
+ span_id=6077757853989569223,
+ trace_flags=TraceFlags(0x01),
+ severity_text="INFO",
+ severity_number=SeverityNumber.INFO,
+ body="Love is the one thing that transcends time and space",
+ resource=SDKResource({"first_resource": "value"}),
+ attributes={"filename": "model.py", "func_name": "run_method"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "another_name", "another_version"
+ ),
+ )
+
+ return [log1, log2, log3, log4]
+
+ @patch.object(OTLPLogExporter, "_export", return_value=Mock(ok=True))
+    def test_2xx_status_code(self, mock_otlp_log_exporter):
+ """
+ Test that any HTTP 2XX code returns a successful result
+ """
+
+ self.assertEqual(
+ OTLPLogExporter().export(MagicMock()), LogExportResult.SUCCESS
+ )
diff --git a/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_span_exporter.py b/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_span_exporter.py
new file mode 100644
index 0000000000..eb5b375e40
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp-proto-http/tests/test_proto_span_exporter.py
@@ -0,0 +1,252 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from collections import OrderedDict
+from unittest.mock import MagicMock, Mock, patch
+
+import requests
+import responses
+
+from opentelemetry.exporter.otlp.proto.common._internal import _is_backoff_v2
+from opentelemetry.exporter.otlp.proto.http import Compression
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+ DEFAULT_COMPRESSION,
+ DEFAULT_ENDPOINT,
+ DEFAULT_TIMEOUT,
+ DEFAULT_TRACES_EXPORT_PATH,
+ OTLPSpanExporter,
+)
+from opentelemetry.exporter.otlp.proto.http.version import __version__
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_OTLP_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION,
+ OTEL_EXPORTER_OTLP_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT,
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS,
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
+)
+from opentelemetry.sdk.trace import _Span
+from opentelemetry.sdk.trace.export import SpanExportResult
+
+OS_ENV_ENDPOINT = "os.env.base"
+OS_ENV_CERTIFICATE = "os/env/base.crt"
+OS_ENV_HEADERS = "envHeader1=val1,envHeader2=val2"
+OS_ENV_TIMEOUT = "30"
+
+
+# pylint: disable=protected-access
+class TestOTLPSpanExporter(unittest.TestCase):
+ def test_constructor_default(self):
+
+ exporter = OTLPSpanExporter()
+
+ self.assertEqual(
+ exporter._endpoint, DEFAULT_ENDPOINT + DEFAULT_TRACES_EXPORT_PATH
+ )
+ self.assertEqual(exporter._certificate_file, True)
+ self.assertEqual(exporter._timeout, DEFAULT_TIMEOUT)
+ self.assertIs(exporter._compression, DEFAULT_COMPRESSION)
+ self.assertEqual(exporter._headers, {})
+ self.assertIsInstance(exporter._session, requests.Session)
+ self.assertIn("User-Agent", exporter._session.headers)
+ self.assertEqual(
+ exporter._session.headers.get("Content-Type"),
+ "application/x-protobuf",
+ )
+ self.assertEqual(
+ exporter._session.headers.get("User-Agent"),
+ "OTel-OTLP-Exporter-Python/" + __version__,
+ )
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE: "traces/certificate.env",
+ OTEL_EXPORTER_OTLP_TRACES_COMPRESSION: Compression.Deflate.value,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: "https://traces.endpoint.env",
+ OTEL_EXPORTER_OTLP_TRACES_HEADERS: "tracesEnv1=val1,tracesEnv2=val2,traceEnv3===val3==",
+ OTEL_EXPORTER_OTLP_TRACES_TIMEOUT: "40",
+ },
+ )
+ def test_exporter_traces_env_take_priority(self):
+ exporter = OTLPSpanExporter()
+
+ self.assertEqual(exporter._endpoint, "https://traces.endpoint.env")
+ self.assertEqual(exporter._certificate_file, "traces/certificate.env")
+ self.assertEqual(exporter._timeout, 40)
+ self.assertIs(exporter._compression, Compression.Deflate)
+ self.assertEqual(
+ exporter._headers,
+ {
+ "tracesenv1": "val1",
+ "tracesenv2": "val2",
+ "traceenv3": "==val3==",
+ },
+ )
+ self.assertIsInstance(exporter._session, requests.Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT,
+ OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: "https://traces.endpoint.env",
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_constructor_take_priority(self):
+ exporter = OTLPSpanExporter(
+ endpoint="example.com/1234",
+ certificate_file="path/to/service.crt",
+ headers={"testHeader1": "value1", "testHeader2": "value2"},
+ timeout=20,
+ compression=Compression.NoCompression,
+ session=requests.Session(),
+ )
+
+ self.assertEqual(exporter._endpoint, "example.com/1234")
+ self.assertEqual(exporter._certificate_file, "path/to/service.crt")
+ self.assertEqual(exporter._timeout, 20)
+ self.assertIs(exporter._compression, Compression.NoCompression)
+ self.assertEqual(
+ exporter._headers,
+ {"testHeader1": "value1", "testHeader2": "value2"},
+ )
+ self.assertIsInstance(exporter._session, requests.Session)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_CERTIFICATE: OS_ENV_CERTIFICATE,
+ OTEL_EXPORTER_OTLP_COMPRESSION: Compression.Gzip.value,
+ OTEL_EXPORTER_OTLP_HEADERS: OS_ENV_HEADERS,
+ OTEL_EXPORTER_OTLP_TIMEOUT: OS_ENV_TIMEOUT,
+ },
+ )
+ def test_exporter_env(self):
+
+ exporter = OTLPSpanExporter()
+
+ self.assertEqual(exporter._certificate_file, OS_ENV_CERTIFICATE)
+ self.assertEqual(exporter._timeout, int(OS_ENV_TIMEOUT))
+ self.assertIs(exporter._compression, Compression.Gzip)
+ self.assertEqual(
+ exporter._headers, {"envheader1": "val1", "envheader2": "val2"}
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT},
+ )
+ def test_exporter_env_endpoint_without_slash(self):
+
+ exporter = OTLPSpanExporter()
+
+ self.assertEqual(
+ exporter._endpoint,
+ OS_ENV_ENDPOINT + f"/{DEFAULT_TRACES_EXPORT_PATH}",
+ )
+
+ @patch.dict(
+ "os.environ",
+ {OTEL_EXPORTER_OTLP_ENDPOINT: OS_ENV_ENDPOINT + "/"},
+ )
+ def test_exporter_env_endpoint_with_slash(self):
+
+ exporter = OTLPSpanExporter()
+
+ self.assertEqual(
+ exporter._endpoint,
+ OS_ENV_ENDPOINT + f"/{DEFAULT_TRACES_EXPORT_PATH}",
+ )
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_EXPORTER_OTLP_HEADERS: "envHeader1=val1,envHeader2=val2,missingValue"
+ },
+ )
+ def test_headers_parse_from_env(self):
+
+ with self.assertLogs(level="WARNING") as cm:
+ _ = OTLPSpanExporter()
+
+ self.assertEqual(
+ cm.records[0].message,
+ (
+ "Header format invalid! Header values in environment "
+ "variables must be URL encoded per the OpenTelemetry "
+ "Protocol Exporter specification: missingValue"
+ ),
+ )
+
+ # pylint: disable=no-self-use
+ @responses.activate
+ @patch("opentelemetry.exporter.otlp.proto.common._internal.backoff")
+ @patch("opentelemetry.exporter.otlp.proto.http.trace_exporter.sleep")
+ def test_handles_backoff_v2_api(self, mock_sleep, mock_backoff):
+ # In backoff ~= 2.0.0 the first value yielded from expo is None.
+ def generate_delays(*args, **kwargs):
+ if _is_backoff_v2:
+ yield None
+ yield 1
+
+ mock_backoff.expo.configure_mock(**{"side_effect": generate_delays})
+
+ # return a retryable error
+ responses.add(
+ responses.POST,
+ "http://traces.example.com/export",
+ json={"error": "something exploded"},
+ status=500,
+ )
+
+ exporter = OTLPSpanExporter(
+ endpoint="http://traces.example.com/export"
+ )
+ span = _Span(
+ "abc",
+ context=Mock(
+ **{
+ "trace_state": OrderedDict([("a", "b"), ("c", "d")]),
+ "span_id": 10217189687419569865,
+ "trace_id": 67545097771067222548457157018666467027,
+ }
+ ),
+ )
+
+ exporter.export([span])
+ mock_sleep.assert_called_once_with(1)
+
+ @patch.object(OTLPSpanExporter, "_export", return_value=Mock(ok=True))
+    def test_2xx_status_code(self, mock_otlp_span_exporter):
+ """
+ Test that any HTTP 2XX code returns a successful result
+ """
+
+ self.assertEqual(
+ OTLPSpanExporter().export(MagicMock()), SpanExportResult.SUCCESS
+ )
diff --git a/exporter/opentelemetry-exporter-otlp/LICENSE b/exporter/opentelemetry-exporter-otlp/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-otlp/README.rst b/exporter/opentelemetry-exporter-otlp/README.rst
new file mode 100644
index 0000000000..7d6d15ad20
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp/README.rst
@@ -0,0 +1,34 @@
+OpenTelemetry Collector Exporters
+=================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-otlp.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-otlp/
+
+This library is provided as a convenience to install all supported OpenTelemetry Collector Exporters. Currently it installs:
+
+* opentelemetry-exporter-otlp-proto-grpc
+* opentelemetry-exporter-otlp-proto-http
+
+In the future, additional packages will be available:
+
+* opentelemetry-exporter-otlp-json-http
+
+To avoid unnecessary dependencies, users should install the specific package once they've determined their
+preferred serialization and protocol method.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-otlp
+
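+For example, once the HTTP variant is installed, its span exporter can be
+constructed directly. A minimal sketch follows; the endpoint is only
+illustrative (4318 is the default OTLP/HTTP port):
+
+.. code:: python
+
+    from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+        OTLPSpanExporter,
+    )
+
+    # Point the exporter at an OTLP/HTTP-capable collector.
+    exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
+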
+
+References
+----------
+
+* `OpenTelemetry Collector Exporter <https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-otlp>`_
+* `OpenTelemetry Collector <https://github.com/open-telemetry/opentelemetry-collector>`_
+* `OpenTelemetry <https://opentelemetry.io/>`_
+* `OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto>`_
diff --git a/exporter/opentelemetry-exporter-otlp/pyproject.toml b/exporter/opentelemetry-exporter-otlp/pyproject.toml
new file mode 100644
index 0000000000..0d909c0e12
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp/pyproject.toml
@@ -0,0 +1,58 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-otlp"
+dynamic = ["version"]
+description = "OpenTelemetry Collector Exporters"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-exporter-otlp-proto-grpc == 1.23.0.dev",
+ "opentelemetry-exporter-otlp-proto-http == 1.23.0.dev",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_logs_exporter]
+otlp = "opentelemetry.exporter.otlp.proto.grpc._log_exporter:OTLPLogExporter"
+
+[project.entry-points.opentelemetry_metrics_exporter]
+otlp = "opentelemetry.exporter.otlp.proto.grpc.metric_exporter:OTLPMetricExporter"
+
+[project.entry-points.opentelemetry_traces_exporter]
+otlp = "opentelemetry.exporter.otlp.proto.grpc.trace_exporter:OTLPSpanExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-otlp"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/otlp/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/py.typed b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/version.py b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp/src/opentelemetry/exporter/otlp/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-otlp/tests/__init__.py b/exporter/opentelemetry-exporter-otlp/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-otlp/tests/test_otlp.py b/exporter/opentelemetry-exporter-otlp/tests/test_otlp.py
new file mode 100644
index 0000000000..7e18002289
--- /dev/null
+++ b/exporter/opentelemetry-exporter-otlp/tests/test_otlp.py
@@ -0,0 +1,40 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
+ OTLPLogExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
+ OTLPMetricExporter,
+)
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+ OTLPSpanExporter as HTTPSpanExporter,
+)
+from opentelemetry.test import TestCase
+
+
+class TestOTLPExporters(TestCase):
+ def test_constructors(self):
+ for exporter in [
+ OTLPSpanExporter,
+ HTTPSpanExporter,
+ OTLPLogExporter,
+ OTLPMetricExporter,
+ ]:
+ with self.assertNotRaises(Exception):
+ exporter()
diff --git a/exporter/opentelemetry-exporter-prometheus/LICENSE b/exporter/opentelemetry-exporter-prometheus/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-prometheus/README.rst b/exporter/opentelemetry-exporter-prometheus/README.rst
new file mode 100644
index 0000000000..a3eb920000
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/README.rst
@@ -0,0 +1,23 @@
+OpenTelemetry Prometheus Exporter
+=================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-prometheus.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-prometheus/
+
+This library allows exporting metrics data to `Prometheus <https://prometheus.io/>`_.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-prometheus
+
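+A minimal wiring sketch (class and module names as in this package; the
+port and address are illustrative):
+
+.. code:: python
+
+    from prometheus_client import start_http_server
+
+    from opentelemetry.exporter.prometheus import PrometheusMetricReader
+    from opentelemetry.sdk.metrics import MeterProvider
+
+    # Serve the Prometheus scrape endpoint, then register the reader
+    # with the SDK so collected metrics are exposed on it.
+    start_http_server(port=8000, addr="localhost")
+    provider = MeterProvider(metric_readers=[PrometheusMetricReader()])
+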
+References
+----------
+
+* `OpenTelemetry Prometheus Exporter <https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-prometheus>`_
+* `Prometheus <https://prometheus.io/>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/exporter/opentelemetry-exporter-prometheus/pyproject.toml b/exporter/opentelemetry-exporter-prometheus/pyproject.toml
new file mode 100644
index 0000000000..b634c3df88
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-prometheus"
+dynamic = ["version"]
+description = "Prometheus Metric Exporter for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "opentelemetry-api ~= 1.12",
+    # DONOTMERGE: confirm that this will become ~= 1.21 in the next release
+ "opentelemetry-sdk ~= 1.23.0.dev",
+ "prometheus_client >= 0.5.0, < 1.0.0",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_metrics_exporter]
+prometheus = "opentelemetry.exporter.prometheus:_AutoPrometheusMetricReader"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-prometheus"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/prometheus/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
new file mode 100644
index 0000000000..4c90329778
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
@@ -0,0 +1,405 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This library allows export of metrics data to `Prometheus`_.
+
+Usage
+-----
+
+The **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_
+metrics to `Prometheus`_.
+
+
+.. _Prometheus: https://prometheus.io/
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+
+.. code:: python
+
+ from prometheus_client import start_http_server
+
+ from opentelemetry.exporter.prometheus import PrometheusMetricReader
+ from opentelemetry.metrics import get_meter_provider, set_meter_provider
+ from opentelemetry.sdk.metrics import MeterProvider
+
+ # Start Prometheus client
+ start_http_server(port=8000, addr="localhost")
+
+ # Exporter to export metrics to Prometheus
+    reader = PrometheusMetricReader()
+
+ # Meter is responsible for creating and recording metrics
+ set_meter_provider(MeterProvider(metric_readers=[reader]))
+ meter = get_meter_provider().get_meter("myapp", "0.1.2")
+
+ counter = meter.create_counter(
+ "requests",
+ "requests",
+ "number of requests",
+ )
+
+ # Labels are used to identify key-values that are associated with a specific
+ # metric that you want to record. These are useful for pre-aggregation and can
+ # be used to store custom dimensions pertaining to a metric
+ labels = {"environment": "staging"}
+
+ counter.add(25, labels)
+    input("Press Enter to exit...")
+
+API
+---
+"""
+
+from collections import deque
+from itertools import chain
+from json import dumps
+from logging import getLogger
+from os import environ
+from re import IGNORECASE, UNICODE, compile
+from typing import Dict, Sequence, Tuple, Union
+
+from prometheus_client import start_http_server
+from prometheus_client.core import (
+ REGISTRY,
+ CounterMetricFamily,
+ GaugeMetricFamily,
+ HistogramMetricFamily,
+ InfoMetricFamily,
+)
+from prometheus_client.core import Metric as PrometheusMetric
+
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_PROMETHEUS_HOST,
+ OTEL_EXPORTER_PROMETHEUS_PORT,
+)
+from opentelemetry.sdk.metrics import Counter
+from opentelemetry.sdk.metrics import Histogram as HistogramInstrument
+from opentelemetry.sdk.metrics import (
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Gauge,
+ Histogram,
+ HistogramDataPoint,
+ MetricReader,
+ MetricsData,
+ Sum,
+)
+
+_logger = getLogger(__name__)
+
+_TARGET_INFO_NAME = "target"
+_TARGET_INFO_DESCRIPTION = "Target metadata"
+
+
+def _convert_buckets(
+ bucket_counts: Sequence[int], explicit_bounds: Sequence[float]
+) -> Sequence[Tuple[str, int]]:
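+    # Prometheus expects cumulative bucket counts keyed by upper bound, so
+    # keep a running total: e.g. counts [1, 2, 3] with bounds [5.0, 10.0]
+    # become [("5.0", 1), ("10.0", 3), ("+Inf", 6)].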
+ buckets = []
+ total_count = 0
+ for upper_bound, count in zip(
+ chain(explicit_bounds, ["+Inf"]),
+ bucket_counts,
+ ):
+ total_count += count
+ buckets.append((f"{upper_bound}", total_count))
+
+ return buckets
+
+
+class PrometheusMetricReader(MetricReader):
+ """Prometheus metric exporter for OpenTelemetry."""
+
+ def __init__(self, disable_target_info: bool = False) -> None:
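+        # Prometheus only supports cumulative aggregation, so every
+        # instrument type is read with CUMULATIVE temporality.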
+ super().__init__(
+ preferred_temporality={
+ Counter: AggregationTemporality.CUMULATIVE,
+ UpDownCounter: AggregationTemporality.CUMULATIVE,
+ HistogramInstrument: AggregationTemporality.CUMULATIVE,
+ ObservableCounter: AggregationTemporality.CUMULATIVE,
+ ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
+ ObservableGauge: AggregationTemporality.CUMULATIVE,
+ }
+ )
+ self._collector = _CustomCollector(disable_target_info)
+ REGISTRY.register(self._collector)
+ self._collector._callback = self.collect
+
+ def _receive_metrics(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ if metrics_data is None:
+ return
+ self._collector.add_metrics_data(metrics_data)
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ REGISTRY.unregister(self._collector)
+
+
+class _CustomCollector:
+ """_CustomCollector represents the Prometheus Collector object
+
+ See more:
+ https://github.com/prometheus/client_python#custom-collectors
+ """
+
+ def __init__(self, disable_target_info: bool = False):
+ self._callback = None
+ self._metrics_datas = deque()
+ self._non_letters_digits_underscore_re = compile(
+ r"[^\w]", UNICODE | IGNORECASE
+ )
+ self._disable_target_info = disable_target_info
+ self._target_info = None
+
+ def add_metrics_data(self, metrics_data: MetricsData) -> None:
+ """Add metrics to Prometheus data"""
+ self._metrics_datas.append(metrics_data)
+
+ def collect(self) -> None:
+ """Collect fetches the metrics from OpenTelemetry
+ and delivers them as Prometheus Metrics.
+ Collect is invoked every time a ``prometheus.Gatherer`` is run
+ for example when the HTTP endpoint is invoked by Prometheus.
+ """
+ if self._callback is not None:
+ self._callback()
+
+ metric_family_id_metric_family = {}
+
+ if len(self._metrics_datas):
+ if not self._disable_target_info:
+ if self._target_info is None:
+ attributes = {}
+ for res in self._metrics_datas[0].resource_metrics:
+ attributes = {**attributes, **res.resource.attributes}
+
+ self._target_info = self._create_info_metric(
+ _TARGET_INFO_NAME, _TARGET_INFO_DESCRIPTION, attributes
+ )
+ metric_family_id_metric_family[
+ _TARGET_INFO_NAME
+ ] = self._target_info
+
+ while self._metrics_datas:
+ self._translate_to_prometheus(
+ self._metrics_datas.popleft(), metric_family_id_metric_family
+ )
+
+ if metric_family_id_metric_family:
+ for metric_family in metric_family_id_metric_family.values():
+ yield metric_family
+
+ # pylint: disable=too-many-locals,too-many-branches
+ def _translate_to_prometheus(
+ self,
+ metrics_data: MetricsData,
+ metric_family_id_metric_family: Dict[str, PrometheusMetric],
+ ):
+ metrics = []
+
+ for resource_metrics in metrics_data.resource_metrics:
+ for scope_metrics in resource_metrics.scope_metrics:
+ for metric in scope_metrics.metrics:
+ metrics.append(metric)
+
+ for metric in metrics:
+ label_valuess = []
+ values = []
+
+ pre_metric_family_ids = []
+
+ metric_name = ""
+ metric_name += self._sanitize(metric.name)
+
+ metric_description = metric.description or ""
+
+ for number_data_point in metric.data.data_points:
+ label_keys = []
+ label_values = []
+
+ for key, value in number_data_point.attributes.items():
+ label_keys.append(self._sanitize(key))
+ label_values.append(self._check_value(value))
+
+ pre_metric_family_ids.append(
+ "|".join(
+ [
+ metric_name,
+ metric_description,
+ "%".join(label_keys),
+ metric.unit,
+ ]
+ )
+ )
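+ # Data points sharing name, description, label keys and unit
+ # collapse into one metric family; the Prometheus family class
+ # name is appended below to complete the family id.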
+
+ label_valuess.append(label_values)
+ if isinstance(number_data_point, HistogramDataPoint):
+ values.append(
+ {
+ "bucket_counts": number_data_point.bucket_counts,
+ "explicit_bounds": (
+ number_data_point.explicit_bounds
+ ),
+ "sum": number_data_point.sum,
+ }
+ )
+ else:
+ values.append(number_data_point.value)
+
+ for pre_metric_family_id, label_values, value in zip(
+ pre_metric_family_ids, label_valuess, values
+ ):
+ is_non_monotonic_sum = (
+ isinstance(metric.data, Sum)
+ and metric.data.is_monotonic is False
+ )
+ is_cumulative = (
+ isinstance(metric.data, Sum)
+ and metric.data.aggregation_temporality
+ == AggregationTemporality.CUMULATIVE
+ )
+
+ # The Prometheus compatibility spec for sums says: if the
+ # aggregation temporality is cumulative and the sum is
+ # non-monotonic, it MUST be converted to a Prometheus gauge.
+ should_convert_sum_to_gauge = (
+ is_non_monotonic_sum and is_cumulative
+ )
+
+ if (
+ isinstance(metric.data, Sum)
+ and not should_convert_sum_to_gauge
+ ):
+
+ metric_family_id = "|".join(
+ [pre_metric_family_id, CounterMetricFamily.__name__]
+ )
+
+ if metric_family_id not in metric_family_id_metric_family:
+ metric_family_id_metric_family[
+ metric_family_id
+ ] = CounterMetricFamily(
+ name=metric_name,
+ documentation=metric_description,
+ labels=label_keys,
+ unit=metric.unit,
+ )
+ metric_family_id_metric_family[
+ metric_family_id
+ ].add_metric(labels=label_values, value=value)
+ elif (
+ isinstance(metric.data, Gauge)
+ or should_convert_sum_to_gauge
+ ):
+
+ metric_family_id = "|".join(
+ [pre_metric_family_id, GaugeMetricFamily.__name__]
+ )
+
+ if metric_family_id not in metric_family_id_metric_family:
+ metric_family_id_metric_family[
+ metric_family_id
+ ] = GaugeMetricFamily(
+ name=metric_name,
+ documentation=metric_description,
+ labels=label_keys,
+ unit=metric.unit,
+ )
+ metric_family_id_metric_family[
+ metric_family_id
+ ].add_metric(labels=label_values, value=value)
+ elif isinstance(metric.data, Histogram):
+
+ metric_family_id = "|".join(
+ [pre_metric_family_id, HistogramMetricFamily.__name__]
+ )
+
+ if metric_family_id not in metric_family_id_metric_family:
+ metric_family_id_metric_family[
+ metric_family_id
+ ] = HistogramMetricFamily(
+ name=metric_name,
+ documentation=metric_description,
+ labels=label_keys,
+ unit=metric.unit,
+ )
+ metric_family_id_metric_family[
+ metric_family_id
+ ].add_metric(
+ labels=label_values,
+ buckets=_convert_buckets(
+ value["bucket_counts"], value["explicit_bounds"]
+ ),
+ sum_value=value["sum"],
+ )
+ else:
+ _logger.warning(
+ "Unsupported metric data. %s", type(metric.data)
+ )
+
+ def _sanitize(self, key: str) -> str:
+ """sanitize the given metric name or label according to Prometheus rule.
+ Replace all characters other than [A-Za-z0-9_] with '_'.
+ """
+ return self._non_letters_digits_underscore_re.sub("_", key)
+
+ # pylint: disable=no-self-use
+ def _check_value(self, value: Union[int, float, str, Sequence]) -> str:
+ """Check the label value and return is appropriate representation"""
+ if not isinstance(value, str):
+ return dumps(value, default=str)
+ return str(value)
+
+ def _create_info_metric(
+ self, name: str, description: str, attributes: Dict[str, str]
+ ) -> InfoMetricFamily:
+ """Create an Info Metric Family with list of attributes"""
+ # sanitize the attribute names according to Prometheus rule
+ attributes = {
+ self._sanitize(key): value for key, value in attributes.items()
+ }
+ info = InfoMetricFamily(name, description, labels=attributes)
+ info.add_metric(labels=list(attributes.keys()), value=attributes)
+ return info
+
+
+class _AutoPrometheusMetricReader(PrometheusMetricReader):
+ """Thin wrapper around PrometheusMetricReader used for the opentelemetry_metrics_exporter entry point.
+
+ This allows users to use the Prometheus exporter with opentelemetry-instrument. It handles
+ starting the Prometheus HTTP server on the correct host and port.
+ """
+
+ def __init__(self) -> None:
+ super().__init__()
+
+ # Default values are specified in
+ # https://github.com/open-telemetry/opentelemetry-specification/blob/v1.24.0/specification/configuration/sdk-environment-variables.md#prometheus-exporter
+ start_http_server(
+ port=int(environ.get(OTEL_EXPORTER_PROMETHEUS_PORT, "9464")),
+ addr=environ.get(OTEL_EXPORTER_PROMETHEUS_HOST, "localhost"),
+ )
diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/version.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/version.py
new file mode 100644
index 0000000000..ff896307c3
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.44b0.dev"
diff --git a/exporter/opentelemetry-exporter-prometheus/tests/__init__.py b/exporter/opentelemetry-exporter-prometheus/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/exporter/opentelemetry-exporter-prometheus/tests/test_entrypoints.py b/exporter/opentelemetry-exporter-prometheus/tests/test_entrypoints.py
new file mode 100644
index 0000000000..96846e0759
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/tests/test_entrypoints.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=no-self-use
+
+import os
+from unittest import TestCase
+from unittest.mock import ANY, Mock, patch
+
+from opentelemetry.exporter.prometheus import _AutoPrometheusMetricReader
+from opentelemetry.sdk._configuration import _import_exporters
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_PROMETHEUS_HOST,
+ OTEL_EXPORTER_PROMETHEUS_PORT,
+)
+
+
+class TestEntrypoints(TestCase):
+ def test_import_exporters(self) -> None:
+ """
+ Tests that the entrypoint can be loaded and doesn't have a typo in the name
+ """
+ (
+ _trace_exporters,
+ metric_exporters,
+ _logs_exporters,
+ ) = _import_exporters(
+ trace_exporter_names=[],
+ metric_exporter_names=["prometheus"],
+ log_exporter_names=[],
+ )
+
+ self.assertIs(
+ metric_exporters["prometheus"],
+ _AutoPrometheusMetricReader,
+ )
+
+ @patch("opentelemetry.exporter.prometheus.start_http_server")
+ @patch.dict(os.environ)
+ def test_starts_http_server_defaults(
+ self, mock_start_http_server: Mock
+ ) -> None:
+ _AutoPrometheusMetricReader()
+ mock_start_http_server.assert_called_once_with(
+ port=9464, addr="localhost"
+ )
+
+ @patch("opentelemetry.exporter.prometheus.start_http_server")
+ @patch.dict(os.environ, {OTEL_EXPORTER_PROMETHEUS_HOST: "1.2.3.4"})
+ def test_starts_http_server_host_envvar(
+ self, mock_start_http_server: Mock
+ ) -> None:
+ _AutoPrometheusMetricReader()
+ mock_start_http_server.assert_called_once_with(
+ port=ANY, addr="1.2.3.4"
+ )
+
+ @patch("opentelemetry.exporter.prometheus.start_http_server")
+ @patch.dict(os.environ, {OTEL_EXPORTER_PROMETHEUS_PORT: "9999"})
+ def test_starts_http_server_port_envvar(
+ self, mock_start_http_server: Mock
+ ) -> None:
+ _AutoPrometheusMetricReader()
+ mock_start_http_server.assert_called_once_with(port=9999, addr=ANY)
diff --git a/exporter/opentelemetry-exporter-prometheus/tests/test_prometheus_exporter.py b/exporter/opentelemetry-exporter-prometheus/tests/test_prometheus_exporter.py
new file mode 100644
index 0000000000..db920a5c73
--- /dev/null
+++ b/exporter/opentelemetry-exporter-prometheus/tests/test_prometheus_exporter.py
@@ -0,0 +1,423 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from textwrap import dedent
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from prometheus_client import generate_latest
+from prometheus_client.core import (
+ CounterMetricFamily,
+ GaugeMetricFamily,
+ InfoMetricFamily,
+)
+
+from opentelemetry.exporter.prometheus import (
+ PrometheusMetricReader,
+ _CustomCollector,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Histogram,
+ HistogramDataPoint,
+ Metric,
+ MetricsData,
+ ResourceMetrics,
+ ScopeMetrics,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.test.metrictestutil import (
+ _generate_gauge,
+ _generate_sum,
+ _generate_unsupported_metric,
+)
+
+
+class TestPrometheusMetricReader(TestCase):
+ def setUp(self):
+ self._mock_registry_register = Mock()
+ self._registry_register_patch = patch(
+ "prometheus_client.core.REGISTRY.register",
+ side_effect=self._mock_registry_register,
+ )
+
+ # pylint: disable=protected-access
+ def test_constructor(self):
+ """Test the constructor."""
+ with self._registry_register_patch:
+ _ = PrometheusMetricReader()
+ self.assertTrue(self._mock_registry_register.called)
+
+ def test_shutdown(self):
+ with patch(
+ "prometheus_client.core.REGISTRY.unregister"
+ ) as registry_unregister_patch:
+ exporter = PrometheusMetricReader()
+ exporter.shutdown()
+ self.assertTrue(registry_unregister_patch.called)
+
+ def test_histogram_to_prometheus(self):
+ metric = Metric(
+ name="test@name",
+ description="foo",
+ unit="s",
+ data=Histogram(
+ data_points=[
+ HistogramDataPoint(
+ attributes={"histo": 1},
+ start_time_unix_nano=1641946016139533244,
+ time_unix_nano=1641946016139533244,
+ count=6,
+ sum=579.0,
+ bucket_counts=[1, 3, 2],
+ explicit_bounds=[123.0, 456.0],
+ min=1,
+ max=457,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ ),
+ )
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=Mock(),
+ metrics=[metric],
+ schema_url="schema_url",
+ )
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+ result_bytes = generate_latest(collector)
+ result = result_bytes.decode("utf-8")
+ self.assertEqual(
+ result,
+ dedent(
+ """\
+ # HELP test_name_s foo
+ # TYPE test_name_s histogram
+ test_name_s_bucket{histo="1",le="123.0"} 1.0
+ test_name_s_bucket{histo="1",le="456.0"} 4.0
+ test_name_s_bucket{histo="1",le="+Inf"} 6.0
+ test_name_s_count{histo="1"} 6.0
+ test_name_s_sum{histo="1"} 579.0
+ """
+ ),
+ )
+
+ def test_monotonic_sum_to_prometheus(self):
+ labels = {"environment@": "staging", "os": "Windows"}
+ metric = _generate_sum(
+ "test@sum_monotonic",
+ 123,
+ attributes=labels,
+ description="testdesc",
+ unit="testunit",
+ )
+
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=Mock(),
+ metrics=[metric],
+ schema_url="schema_url",
+ )
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+
+ for prometheus_metric in collector.collect():
+ self.assertEqual(type(prometheus_metric), CounterMetricFamily)
+ self.assertEqual(
+ prometheus_metric.name, "test_sum_monotonic_testunit"
+ )
+ self.assertEqual(prometheus_metric.documentation, "testdesc")
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 123)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["environment_"], "staging"
+ )
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["os"], "Windows"
+ )
+
+ def test_non_monotonic_sum_to_prometheus(self):
+ labels = {"environment@": "staging", "os": "Windows"}
+ metric = _generate_sum(
+ "test@sum_nonmonotonic",
+ 123,
+ attributes=labels,
+ description="testdesc",
+ unit="testunit",
+ is_monotonic=False,
+ )
+
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=Mock(),
+ metrics=[metric],
+ schema_url="schema_url",
+ )
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+
+ for prometheus_metric in collector.collect():
+ self.assertEqual(type(prometheus_metric), GaugeMetricFamily)
+ self.assertEqual(
+ prometheus_metric.name, "test_sum_nonmonotonic_testunit"
+ )
+ self.assertEqual(prometheus_metric.documentation, "testdesc")
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 123)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["environment_"], "staging"
+ )
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["os"], "Windows"
+ )
+
+ def test_gauge_to_prometheus(self):
+ labels = {"environment@": "dev", "os": "Unix"}
+ metric = _generate_gauge(
+ "test@gauge",
+ 123,
+ attributes=labels,
+ description="testdesc",
+ unit="testunit",
+ )
+
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=Mock(),
+ metrics=[metric],
+ schema_url="schema_url",
+ )
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+
+ for prometheus_metric in collector.collect():
+ self.assertEqual(type(prometheus_metric), GaugeMetricFamily)
+ self.assertEqual(prometheus_metric.name, "test_gauge_testunit")
+ self.assertEqual(prometheus_metric.documentation, "testdesc")
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 123)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["environment_"], "dev"
+ )
+ self.assertEqual(prometheus_metric.samples[0].labels["os"], "Unix")
+
+ def test_invalid_metric(self):
+ labels = {"environment": "staging"}
+ record = _generate_unsupported_metric(
+ "tesname",
+ attributes=labels,
+ description="testdesc",
+ unit="testunit",
+ )
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(scope=Mock(), metrics=[record], schema_url="schema_url")
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+ with self.assertLogs("opentelemetry.exporter.prometheus", level="WARNING"):
+ list(collector.collect())
+
+ def test_sanitize(self):
+ collector = _CustomCollector()
+ self.assertEqual(
+ collector._sanitize("1!2@3#4$5%6^7&8*9(0)_-"),
+ "1_2_3_4_5_6_7_8_9_0___",
+ )
+ self.assertEqual(collector._sanitize(",./?;:[]{}"), "__________")
+ self.assertEqual(collector._sanitize("TestString"), "TestString")
+ self.assertEqual(collector._sanitize("aAbBcC_12_oi"), "aAbBcC_12_oi")
+
+ def test_list_labels(self):
+ labels = {"environment@": ["1", "2", "3"], "os": "Unix"}
+ metric = _generate_gauge(
+ "test@gauge",
+ 123,
+ attributes=labels,
+ description="testdesc",
+ unit="testunit",
+ )
+ metrics_data = MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=Mock(),
+ scope_metrics=[
+ ScopeMetrics(
+ scope=Mock(),
+ metrics=[metric],
+ schema_url="schema_url",
+ )
+ ],
+ schema_url="schema_url",
+ )
+ ]
+ )
+ collector = _CustomCollector(disable_target_info=True)
+ collector.add_metrics_data(metrics_data)
+
+ for prometheus_metric in collector.collect():
+ self.assertEqual(type(prometheus_metric), GaugeMetricFamily)
+ self.assertEqual(prometheus_metric.name, "test_gauge_testunit")
+ self.assertEqual(prometheus_metric.documentation, "testdesc")
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 123)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["environment_"],
+ '["1", "2", "3"]',
+ )
+ self.assertEqual(prometheus_metric.samples[0].labels["os"], "Unix")
+
+ def test_check_value(self):
+
+ collector = _CustomCollector()
+
+ self.assertEqual(collector._check_value(1), "1")
+ self.assertEqual(collector._check_value(1.0), "1.0")
+ self.assertEqual(collector._check_value("a"), "a")
+ self.assertEqual(collector._check_value([1, 2]), "[1, 2]")
+ self.assertEqual(collector._check_value((1, 2)), "[1, 2]")
+ self.assertEqual(collector._check_value(["a", 2]), '["a", 2]')
+ self.assertEqual(collector._check_value(True), "true")
+ self.assertEqual(collector._check_value(False), "false")
+ self.assertEqual(collector._check_value(None), "null")
+
+ def test_multiple_collection_calls(self):
+
+ metric_reader = PrometheusMetricReader()
+ provider = MeterProvider(metric_readers=[metric_reader])
+ meter = provider.get_meter("getting-started", "0.1.2")
+ counter = meter.create_counter("counter")
+ counter.add(1)
+ result_0 = list(metric_reader._collector.collect())
+ result_1 = list(metric_reader._collector.collect())
+ result_2 = list(metric_reader._collector.collect())
+ self.assertEqual(result_0, result_1)
+ self.assertEqual(result_1, result_2)
+
+ def test_target_info_enabled_by_default(self):
+ metric_reader = PrometheusMetricReader()
+ provider = MeterProvider(
+ metric_readers=[metric_reader],
+ resource=Resource({"os": "Unix", "histo": 1}),
+ )
+ meter = provider.get_meter("getting-started", "0.1.2")
+ counter = meter.create_counter("counter")
+ counter.add(1)
+ result = list(metric_reader._collector.collect())
+
+ for prometheus_metric in result[:1]:
+ self.assertEqual(type(prometheus_metric), InfoMetricFamily)
+ self.assertEqual(prometheus_metric.name, "target")
+ self.assertEqual(
+ prometheus_metric.documentation, "Target metadata"
+ )
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 1)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertEqual(prometheus_metric.samples[0].labels["os"], "Unix")
+ self.assertEqual(prometheus_metric.samples[0].labels["histo"], "1")
+
+ def test_target_info_disabled(self):
+ metric_reader = PrometheusMetricReader(disable_target_info=True)
+ provider = MeterProvider(
+ metric_readers=[metric_reader],
+ resource=Resource({"os": "Unix", "histo": 1}),
+ )
+ meter = provider.get_meter("getting-started", "0.1.2")
+ counter = meter.create_counter("counter")
+ counter.add(1)
+ result = list(metric_reader._collector.collect())
+
+ for prometheus_metric in result:
+ self.assertNotEqual(type(prometheus_metric), InfoMetricFamily)
+ self.assertNotEqual(prometheus_metric.name, "target")
+ self.assertNotEqual(
+ prometheus_metric.documentation, "Target metadata"
+ )
+ self.assertNotIn("os", prometheus_metric.samples[0].labels)
+ self.assertNotIn("histo", prometheus_metric.samples[0].labels)
+
+ def test_target_info_sanitize(self):
+ metric_reader = PrometheusMetricReader()
+ provider = MeterProvider(
+ metric_readers=[metric_reader],
+ resource=Resource(
+ {
+ "system.os": "Unix",
+ "system.name": "Prometheus Target Sanitize",
+ }
+ ),
+ )
+ meter = provider.get_meter("getting-started", "0.1.2")
+ counter = meter.create_counter("counter")
+ counter.add(1)
+ prometheus_metric = list(metric_reader._collector.collect())[0]
+
+ self.assertEqual(type(prometheus_metric), InfoMetricFamily)
+ self.assertEqual(prometheus_metric.name, "target")
+ self.assertEqual(prometheus_metric.documentation, "Target metadata")
+ self.assertTrue(len(prometheus_metric.samples) == 1)
+ self.assertEqual(prometheus_metric.samples[0].value, 1)
+ self.assertTrue(len(prometheus_metric.samples[0].labels) == 2)
+ self.assertTrue("system_os" in prometheus_metric.samples[0].labels)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["system_os"], "Unix"
+ )
+ self.assertTrue("system_name" in prometheus_metric.samples[0].labels)
+ self.assertEqual(
+ prometheus_metric.samples[0].labels["system_name"],
+ "Prometheus Target Sanitize",
+ )
diff --git a/exporter/opentelemetry-exporter-zipkin-json/CHANGELOG.md b/exporter/opentelemetry-exporter-zipkin-json/CHANGELOG.md
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-json/LICENSE b/exporter/opentelemetry-exporter-zipkin-json/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-zipkin-json/README.rst b/exporter/opentelemetry-exporter-zipkin-json/README.rst
new file mode 100644
index 0000000000..cfb7b1fa53
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/README.rst
@@ -0,0 +1,25 @@
+OpenTelemetry Zipkin JSON Exporter
+==================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-zipkin-json.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-zipkin-json/
+
+This library allows export of tracing data to `Zipkin <https://zipkin.io/>`_ using JSON
+for serialization.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-zipkin-json
+
+
+References
+----------
+
+* `OpenTelemetry Zipkin Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/zipkin/zipkin.html>`_
+* `Zipkin <https://zipkin.io/>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/exporter/opentelemetry-exporter-zipkin-json/pyproject.toml b/exporter/opentelemetry-exporter-zipkin-json/pyproject.toml
new file mode 100644
index 0000000000..70292809f9
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-zipkin-json"
+dynamic = ["version"]
+description = "Zipkin Span JSON Exporter for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-api ~= 1.3",
+ "opentelemetry-sdk ~= 1.11",
+ "requests ~= 2.7",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_traces_exporter]
+zipkin_json = "opentelemetry.exporter.zipkin.json:ZipkinExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-zipkin-json"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/zipkin/json/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/encoder/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/encoder/__init__.py
new file mode 100644
index 0000000000..bb90daa37c
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/encoder/__init__.py
@@ -0,0 +1,299 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Zipkin Exporter Transport Encoder
+
+Base module and abstract class for concrete transport encoders to extend.
+"""
+
+import abc
+import json
+import logging
+from enum import Enum
+from typing import Any, Dict, List, Optional, Sequence, TypeVar
+
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk.trace import Event
+from opentelemetry.trace import (
+ Span,
+ SpanContext,
+ StatusCode,
+ format_span_id,
+ format_trace_id,
+)
+
+EncodedLocalEndpointT = TypeVar("EncodedLocalEndpointT")
+
+DEFAULT_MAX_TAG_VALUE_LENGTH = 128
+NAME_KEY = "otel.library.name"
+VERSION_KEY = "otel.library.version"
+_SCOPE_NAME_KEY = "otel.scope.name"
+_SCOPE_VERSION_KEY = "otel.scope.version"
+
+logger = logging.getLogger(__name__)
+
+
+class Protocol(Enum):
+ """Enum of supported protocol formats.
+
+ Values are human-readable strings so that they can be easily used by the
+ OS environ var OTEL_EXPORTER_ZIPKIN_PROTOCOL (reserved for future usage).
+ """
+
+ V1 = "v1"
+ V2 = "v2"
+
+
+# pylint: disable=W0223
+class Encoder(abc.ABC):
+ """Base class for encoders that are used by the exporter.
+
+ Args:
+ max_tag_value_length: maximum length of an exported tag value. Values
+ will be truncated to conform. Since values are serialized to a JSON
+ list string, max_tag_value_length is honored at the element boundary.
+ """
+
+ def __init__(
+ self, max_tag_value_length: int = DEFAULT_MAX_TAG_VALUE_LENGTH
+ ):
+ self.max_tag_value_length = max_tag_value_length
+
+ @staticmethod
+ @abc.abstractmethod
+ def content_type() -> str:
+ pass
+
+ @abc.abstractmethod
+ def serialize(
+ self, spans: Sequence[Span], local_endpoint: NodeEndpoint
+ ) -> str:
+ pass
+
+ @abc.abstractmethod
+ def _encode_span(
+ self, span: Span, encoded_local_endpoint: EncodedLocalEndpointT
+ ) -> Any:
+ """
+ Per spec Zipkin fields that can be absent SHOULD be omitted from the
+ payload when they are empty in the OpenTelemetry Span.
+
+ https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/sdk_exporters/zipkin.md#request-payload
+ """
+
+ @staticmethod
+ @abc.abstractmethod
+ def _encode_local_endpoint(
+ local_endpoint: NodeEndpoint,
+ ) -> EncodedLocalEndpointT:
+ pass
+
+ @staticmethod
+ def _encode_debug(span_context) -> Any:
+ return span_context.trace_flags.sampled
+
+ @staticmethod
+ @abc.abstractmethod
+ def _encode_span_id(span_id: int) -> Any:
+ pass
+
+ @staticmethod
+ @abc.abstractmethod
+ def _encode_trace_id(trace_id: int) -> Any:
+ pass
+
+ @staticmethod
+ def _get_parent_id(span_context) -> Optional[int]:
+ if isinstance(span_context, Span):
+ parent_id = span_context.parent.span_id
+ elif isinstance(span_context, SpanContext):
+ parent_id = span_context.span_id
+ else:
+ parent_id = None
+ return parent_id
+
+ def _extract_tags_from_dict(
+ self, tags_dict: Optional[Dict]
+ ) -> Dict[str, str]:
+ tags = {}
+ if not tags_dict:
+ return tags
+ for attribute_key, attribute_value in tags_dict.items():
+ if isinstance(attribute_value, bool):
+ value = str(attribute_value).lower()
+ elif isinstance(attribute_value, (int, float, str)):
+ value = str(attribute_value)
+ elif isinstance(attribute_value, Sequence):
+ value = self._extract_tag_value_string_from_sequence(
+ attribute_value
+ )
+ if not value:
+ logger.warning("Could not serialize tag %s", attribute_key)
+ continue
+ else:
+ logger.warning("Could not serialize tag %s", attribute_key)
+ continue
+
+ if (
+ self.max_tag_value_length is not None
+ and self.max_tag_value_length > 0
+ ):
+ value = value[: self.max_tag_value_length]
+ tags[attribute_key] = value
+ return tags
+
+ def _extract_tag_value_string_from_sequence(self, sequence: Sequence):
+ if self.max_tag_value_length == 1:
+ return None
+
+ tag_value_elements = []
+ running_string_length = (
+ 2 # accounts for array brackets in output string
+ )
+ defined_max_tag_value_length = (
+ self.max_tag_value_length is not None
+ and self.max_tag_value_length > 0
+ )
+
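+ # Worked example (illustrative): with max_tag_value_length=9 and
+ # sequence ["a", "bb"], the running length is 2 (brackets) + 3 for
+ # '"a"', then + 4 for '"bb"' + 1 (separator) = 10 > 9, so "bb" is
+ # dropped and the serialized result is '["a"]'.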
+ for element in sequence:
+ if isinstance(element, bool):
+ tag_value_element = str(element).lower()
+ elif isinstance(element, (int, float, str)):
+ tag_value_element = str(element)
+ elif element is None:
+ tag_value_element = None
+ else:
+ continue
+
+ if defined_max_tag_value_length:
+ if tag_value_element is None:
+ running_string_length += 4 # null with no quotes
+ else:
+ # + 2 accounts for string quotation marks
+ running_string_length += len(tag_value_element) + 2
+
+ if tag_value_elements:
+ # accounts for ',' item separator
+ running_string_length += 1
+
+ if running_string_length > self.max_tag_value_length:
+ break
+
+ tag_value_elements.append(tag_value_element)
+
+ return json.dumps(tag_value_elements, separators=(",", ":"))
+
+ def _extract_tags_from_span(self, span: Span) -> Dict[str, str]:
+ tags = self._extract_tags_from_dict(span.attributes)
+ if span.resource:
+ tags.update(self._extract_tags_from_dict(span.resource.attributes))
+ if span.instrumentation_scope is not None:
+ tags.update(
+ {
+ NAME_KEY: span.instrumentation_scope.name,
+ VERSION_KEY: span.instrumentation_scope.version,
+ _SCOPE_NAME_KEY: span.instrumentation_scope.name,
+ _SCOPE_VERSION_KEY: span.instrumentation_scope.version,
+ }
+ )
+ if span.status.status_code is not StatusCode.UNSET:
+ tags.update({"otel.status_code": span.status.status_code.name})
+ if span.status.status_code is StatusCode.ERROR:
+ tags.update({"error": span.status.description or ""})
+
+ if span.dropped_attributes:
+ tags.update(
+ {"otel.dropped_attributes_count": str(span.dropped_attributes)}
+ )
+
+ if span.dropped_events:
+ tags.update(
+ {"otel.dropped_events_count": str(span.dropped_events)}
+ )
+
+ if span.dropped_links:
+ tags.update({"otel.dropped_links_count": str(span.dropped_links)})
+
+ return tags
+
+ def _extract_annotations_from_events(
+ self, events: Optional[List[Event]]
+ ) -> Optional[List[Dict]]:
+ if not events:
+ return None
+
+ annotations = []
+ for event in events:
+ attrs = {}
+ for key, value in event.attributes.items():
+ if (
+ isinstance(value, str)
+ and self.max_tag_value_length is not None
+ and self.max_tag_value_length > 0
+ ):
+ value = value[: self.max_tag_value_length]
+ attrs[key] = value
+
+ annotations.append(
+ {
+ "timestamp": self._nsec_to_usec_round(event.timestamp),
+ "value": json.dumps({event.name: attrs}, sort_keys=True),
+ }
+ )
+ return annotations
+
+ @staticmethod
+ def _nsec_to_usec_round(nsec: int) -> int:
+ """Round nanoseconds to microseconds
+
+ Timestamps in Zipkin spans are integer microsecond values.
+ See: https://zipkin.io/pages/instrumenting.html
+ """
+ return (nsec + 500) // 10**3
+
+
+class JsonEncoder(Encoder):
+ @staticmethod
+ def content_type():
+ return "application/json"
+
+ def serialize(
+ self, spans: Sequence[Span], local_endpoint: NodeEndpoint
+ ) -> str:
+ encoded_local_endpoint = self._encode_local_endpoint(local_endpoint)
+ encoded_spans = []
+ for span in spans:
+ encoded_spans.append(
+ self._encode_span(span, encoded_local_endpoint)
+ )
+ return json.dumps(encoded_spans)
+
+ @staticmethod
+ def _encode_local_endpoint(local_endpoint: NodeEndpoint) -> Dict:
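+ # Produces e.g. {"serviceName": "my-service", "ipv4": "192.168.0.1",
+ # "port": 31313} (illustrative values); unset fields are omitted.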
+ encoded_local_endpoint = {"serviceName": local_endpoint.service_name}
+ if local_endpoint.ipv4 is not None:
+ encoded_local_endpoint["ipv4"] = str(local_endpoint.ipv4)
+ if local_endpoint.ipv6 is not None:
+ encoded_local_endpoint["ipv6"] = str(local_endpoint.ipv6)
+ if local_endpoint.port is not None:
+ encoded_local_endpoint["port"] = local_endpoint.port
+ return encoded_local_endpoint
+
+ @staticmethod
+ def _encode_span_id(span_id: int) -> str:
+ return format_span_id(span_id)
+
+ @staticmethod
+ def _encode_trace_id(trace_id: int) -> str:
+ return format_trace_id(trace_id)
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/__init__.py
new file mode 100644
index 0000000000..ba313db942
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/__init__.py
@@ -0,0 +1,191 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+OpenTelemetry Zipkin JSON Exporter
+----------------------------------
+
+This library allows exporting tracing data to `Zipkin <https://zipkin.io/>`_.
+
+Usage
+-----
+
+The **OpenTelemetry Zipkin JSON Exporter** allows exporting of `OpenTelemetry`_
+traces to `Zipkin`_. This exporter sends traces to the configured Zipkin
+collector endpoint using JSON over HTTP and supports multiple versions (v1, v2).
+
+.. _Zipkin: https://zipkin.io/
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+.. _Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#zipkin-exporter
+
+.. code:: python
+
+ import requests
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.zipkin.json import ZipkinExporter
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+ trace.set_tracer_provider(TracerProvider())
+ tracer = trace.get_tracer(__name__)
+
+ # create a ZipkinExporter
+ zipkin_exporter = ZipkinExporter(
+ # version=Protocol.V2
+ # optional:
+ # endpoint="http://localhost:9411/api/v2/spans",
+ # local_node_ipv4="192.168.0.1",
+ # local_node_ipv6="2001:db8::c001",
+ # local_node_port=31313,
+ # max_tag_value_length=256,
+ # timeout=5 (in seconds),
+ # session=requests.Session(),
+ )
+
+ # Create a BatchSpanProcessor and add the exporter to it
+ span_processor = BatchSpanProcessor(zipkin_exporter)
+
+ # add to the tracer
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ print("Hello world!")
+
+The exporter supports the following environment variables for configuration:
+
+- :envvar:`OTEL_EXPORTER_ZIPKIN_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_ZIPKIN_TIMEOUT`
+
+API
+---
+"""
+
+import logging
+from os import environ
+from typing import Optional, Sequence
+
+import requests
+
+from opentelemetry.exporter.zipkin.encoder import Protocol
+from opentelemetry.exporter.zipkin.json.v1 import JsonV1Encoder
+from opentelemetry.exporter.zipkin.json.v2 import JsonV2Encoder
+from opentelemetry.exporter.zipkin.node_endpoint import IpInput, NodeEndpoint
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_ZIPKIN_ENDPOINT,
+ OTEL_EXPORTER_ZIPKIN_TIMEOUT,
+)
+from opentelemetry.sdk.resources import SERVICE_NAME
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+from opentelemetry.trace import Span
+
+DEFAULT_ENDPOINT = "http://localhost:9411/api/v2/spans"
+REQUESTS_SUCCESS_STATUS_CODES = (200, 202)
+
+logger = logging.getLogger(__name__)
+
+
+class ZipkinExporter(SpanExporter):
+ def __init__(
+ self,
+ version: Protocol = Protocol.V2,
+ endpoint: Optional[str] = None,
+ local_node_ipv4: IpInput = None,
+ local_node_ipv6: IpInput = None,
+ local_node_port: Optional[int] = None,
+ max_tag_value_length: Optional[int] = None,
+ timeout: Optional[int] = None,
+ session: Optional[requests.Session] = None,
+ ):
+ """Zipkin exporter.
+
+ Args:
+ version: The protocol version to be used.
+ endpoint: The endpoint of the Zipkin collector.
+ local_node_ipv4: Primary IPv4 address associated with this connection.
+ local_node_ipv6: Primary IPv6 address associated with this connection.
+ local_node_port: Depending on context, this could be a listen port or the
+ client-side of a socket.
+ max_tag_value_length: Max length string attribute values can have.
+ timeout: Maximum time the Zipkin exporter will wait for each batch export.
+ The default value is 10s.
+ session: Connection session to the Zipkin collector endpoint.
+
+ The tuple (local_node_ipv4, local_node_ipv6, local_node_port) is used to represent
+ the network context of a node in the service graph.
+ """
+ self.local_node = NodeEndpoint(
+ local_node_ipv4, local_node_ipv6, local_node_port
+ )
+
+ if endpoint is None:
+ endpoint = (
+ environ.get(OTEL_EXPORTER_ZIPKIN_ENDPOINT) or DEFAULT_ENDPOINT
+ )
+ self.endpoint = endpoint
+
+ if version == Protocol.V1:
+ self.encoder = JsonV1Encoder(max_tag_value_length)
+ elif version == Protocol.V2:
+ self.encoder = JsonV2Encoder(max_tag_value_length)
+
+ self.session = session or requests.Session()
+ self.session.headers.update(
+ {"Content-Type": self.encoder.content_type()}
+ )
+ self._closed = False
+ self.timeout = timeout or int(
+ environ.get(OTEL_EXPORTER_ZIPKIN_TIMEOUT, 10)
+ )
+
+ def export(self, spans: Sequence[Span]) -> SpanExportResult:
+ # After shutdown() has been called, subsequent calls to export()
+ # are not allowed and should return a failure result.
+ if self._closed:
+ logger.warning("Exporter already shutdown, ignoring batch")
+ return SpanExportResult.FAILURE
+
+ # Populate service_name from first span
+ # We restrict any SpanProcessor to be only associated with a single
+ # TracerProvider, so it is safe to assume that all Spans in a single
+ # batch all originate from one TracerProvider (and in turn have all
+ # the same service.name)
+ if spans:
+ service_name = spans[0].resource.attributes.get(SERVICE_NAME)
+ if service_name:
+ self.local_node.service_name = service_name
+ result = self.session.post(
+ url=self.endpoint,
+ data=self.encoder.serialize(spans, self.local_node),
+ timeout=self.timeout,
+ )
+
+ if result.status_code not in REQUESTS_SUCCESS_STATUS_CODES:
+ logger.error(
+ "Traces cannot be uploaded; status code: %s, message %s",
+ result.status_code,
+ result.text,
+ )
+ return SpanExportResult.FAILURE
+ return SpanExportResult.SUCCESS
+
+ def shutdown(self) -> None:
+ if self._closed:
+ logger.warning("Exporter already shutdown, ignoring call")
+ return
+ self.session.close()
+ self._closed = True
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ return True
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v1/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v1/__init__.py
new file mode 100644
index 0000000000..5272173f31
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v1/__init__.py
@@ -0,0 +1,84 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Zipkin Export Encoders for JSON formats
+"""
+
+from typing import Dict, List
+
+from opentelemetry.exporter.zipkin.encoder import Encoder, JsonEncoder
+from opentelemetry.trace import Span
+
+
+# pylint: disable=W0223
+class V1Encoder(Encoder):
+ def _extract_binary_annotations(
+ self, span: Span, encoded_local_endpoint: Dict
+ ) -> List[Dict]:
+ binary_annotations = []
+ for tag_key, tag_value in self._extract_tags_from_span(span).items():
+ if isinstance(tag_value, str) and self.max_tag_value_length > 0:
+ tag_value = tag_value[: self.max_tag_value_length]
+ binary_annotations.append(
+ {
+ "key": tag_key,
+ "value": tag_value,
+ "endpoint": encoded_local_endpoint,
+ }
+ )
+ return binary_annotations
+
+
+class JsonV1Encoder(JsonEncoder, V1Encoder):
+ """Zipkin Export Encoder for JSON v1 API
+
+ API spec: https://github.com/openzipkin/zipkin-api/blob/master/zipkin-api.yaml
+ """
+
+ def _encode_span(self, span: Span, encoded_local_endpoint: Dict) -> Dict:
+ context = span.get_span_context()
+
+ encoded_span = {
+ "traceId": self._encode_trace_id(context.trace_id),
+ "id": self._encode_span_id(context.span_id),
+ "name": span.name,
+ "timestamp": self._nsec_to_usec_round(span.start_time),
+ "duration": self._nsec_to_usec_round(
+ span.end_time - span.start_time
+ ),
+ }
+
+ encoded_annotations = self._extract_annotations_from_events(
+ span.events
+ )
+ if encoded_annotations is not None:
+ for annotation in encoded_annotations:
+ annotation["endpoint"] = encoded_local_endpoint
+ encoded_span["annotations"] = encoded_annotations
+
+ binary_annotations = self._extract_binary_annotations(
+ span, encoded_local_endpoint
+ )
+ if binary_annotations:
+ encoded_span["binaryAnnotations"] = binary_annotations
+
+ debug = self._encode_debug(context)
+ if debug:
+ encoded_span["debug"] = debug
+
+ parent_id = self._get_parent_id(span.parent)
+ if parent_id is not None:
+ encoded_span["parentId"] = self._encode_span_id(parent_id)
+
+ return encoded_span
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v2/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v2/__init__.py
new file mode 100644
index 0000000000..ec6e53382b
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/v2/__init__.py
@@ -0,0 +1,67 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Zipkin Export Encoders for JSON formats
+"""
+from typing import Dict
+
+from opentelemetry.exporter.zipkin.encoder import JsonEncoder
+from opentelemetry.trace import Span, SpanKind
+
+
+class JsonV2Encoder(JsonEncoder):
+ """Zipkin Export Encoder for JSON v2 API
+
+ API spec: https://github.com/openzipkin/zipkin-api/blob/master/zipkin2-api.yaml
+ """
+
+ SPAN_KIND_MAP = {
+ SpanKind.INTERNAL: None,
+ SpanKind.SERVER: "SERVER",
+ SpanKind.CLIENT: "CLIENT",
+ SpanKind.PRODUCER: "PRODUCER",
+ SpanKind.CONSUMER: "CONSUMER",
+ }
+
+ def _encode_span(self, span: Span, encoded_local_endpoint: Dict) -> Dict:
+ context = span.get_span_context()
+ encoded_span = {
+ "traceId": self._encode_trace_id(context.trace_id),
+ "id": self._encode_span_id(context.span_id),
+ "name": span.name,
+ "timestamp": self._nsec_to_usec_round(span.start_time),
+ "duration": self._nsec_to_usec_round(
+ span.end_time - span.start_time
+ ),
+ "localEndpoint": encoded_local_endpoint,
+ "kind": self.SPAN_KIND_MAP[span.kind],
+ }
+
+ tags = self._extract_tags_from_span(span)
+ if tags:
+ encoded_span["tags"] = tags
+
+ annotations = self._extract_annotations_from_events(span.events)
+ if annotations:
+ encoded_span["annotations"] = annotations
+
+ debug = self._encode_debug(context)
+ if debug:
+ encoded_span["debug"] = debug
+
+ parent_id = self._get_parent_id(span.parent)
+ if parent_id is not None:
+ encoded_span["parentId"] = self._encode_span_id(parent_id)
+
+ return encoded_span
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/version.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/json/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/node_endpoint.py b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/node_endpoint.py
new file mode 100644
index 0000000000..67f5d0ad12
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/node_endpoint.py
@@ -0,0 +1,85 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Zipkin Exporter Endpoints"""
+
+import ipaddress
+from typing import Optional, Union
+
+from opentelemetry import trace
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+
+IpInput = Union[str, int, None]
+
+
+class NodeEndpoint:
+ """The network context of a node in the service graph.
+
+ Args:
+ ipv4: Primary IPv4 address associated with this connection.
+ ipv6: Primary IPv6 address associated with this connection.
+ port: Depending on context, this could be a listen port or the
+ client-side of a socket. None if unknown.
+ """
+
+ def __init__(
+ self,
+ ipv4: IpInput = None,
+ ipv6: IpInput = None,
+ port: Optional[int] = None,
+ ):
+ self.ipv4 = ipv4
+ self.ipv6 = ipv6
+ self.port = port
+
+ tracer_provider = trace.get_tracer_provider()
+
+ if hasattr(tracer_provider, "resource"):
+ resource = tracer_provider.resource
+ else:
+ resource = Resource.create()
+
+ self.service_name = resource.attributes[SERVICE_NAME]
+
+ @property
+ def ipv4(self) -> Optional[ipaddress.IPv4Address]:
+ return self._ipv4
+
+ @ipv4.setter
+ def ipv4(self, address: IpInput) -> None:
+ if address is None:
+ self._ipv4 = None
+ else:
+ ipv4_address = ipaddress.ip_address(address)
+ if not isinstance(ipv4_address, ipaddress.IPv4Address):
+ raise ValueError(
+ f"{address!r} does not appear to be an IPv4 address"
+ )
+ self._ipv4 = ipv4_address
+
+ @property
+ def ipv6(self) -> Optional[ipaddress.IPv6Address]:
+ return self._ipv6
+
+ @ipv6.setter
+ def ipv6(self, address: IpInput) -> None:
+ if address is None:
+ self._ipv6 = None
+ else:
+ ipv6_address = ipaddress.ip_address(address)
+ if not isinstance(ipv6_address, ipaddress.IPv6Address):
+ raise ValueError(
+ f"{address!r} does not appear to be an IPv6 address"
+ )
+ self._ipv6 = ipv6_address
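+
+
+# A usage sketch (addresses and port are illustrative):
+#
+# node = NodeEndpoint(ipv4="192.168.0.1", port=31313)
+# node.ipv6 = "2001:db8::c001"
+# node.ipv4 = "2001:db8::c001" # raises ValueError: not an IPv4 address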
diff --git a/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/py.typed b/exporter/opentelemetry-exporter-zipkin-json/src/opentelemetry/exporter/zipkin/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/__init__.py b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/common_tests.py b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/common_tests.py
new file mode 100644
index 0000000000..ada00c7c8e
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/common_tests.py
@@ -0,0 +1,479 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import abc
+import unittest
+from typing import Dict, List, Tuple
+
+from opentelemetry import trace as trace_api
+from opentelemetry.exporter.zipkin.encoder import (
+ DEFAULT_MAX_TAG_VALUE_LENGTH,
+ Encoder,
+)
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.trace import TraceFlags
+from opentelemetry.trace.status import Status, StatusCode
+
+TEST_SERVICE_NAME = "test_service"
+
+
+# pylint: disable=protected-access
+class CommonEncoderTestCases:
+ class CommonEncoderTest(unittest.TestCase):
+ @staticmethod
+ @abc.abstractmethod
+ def get_encoder(*args, **kwargs) -> Encoder:
+ pass
+
+ @classmethod
+ def get_encoder_default(cls) -> Encoder:
+ return cls.get_encoder()
+
+ @abc.abstractmethod
+ def test_encode_trace_id(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_span_id(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_local_endpoint_default(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_local_endpoint_explicits(self):
+ pass
+
+ @abc.abstractmethod
+ def _test_encode_max_tag_length(self, max_tag_value_length: int):
+ pass
+
+ def test_encode_max_tag_length_2(self):
+ self._test_encode_max_tag_length(2)
+
+ def test_encode_max_tag_length_5(self):
+ self._test_encode_max_tag_length(5)
+
+ def test_encode_max_tag_length_9(self):
+ self._test_encode_max_tag_length(9)
+
+ def test_encode_max_tag_length_10(self):
+ self._test_encode_max_tag_length(10)
+
+ def test_encode_max_tag_length_11(self):
+ self._test_encode_max_tag_length(11)
+
+ def test_encode_max_tag_length_128(self):
+ self._test_encode_max_tag_length(128)
+
+ def test_constructor_default(self):
+ encoder = self.get_encoder()
+
+ self.assertEqual(
+ DEFAULT_MAX_TAG_VALUE_LENGTH, encoder.max_tag_value_length
+ )
+
+ def test_constructor_max_tag_value_length(self):
+ max_tag_value_length = 123456
+ encoder = self.get_encoder(max_tag_value_length)
+ self.assertEqual(
+ max_tag_value_length, encoder.max_tag_value_length
+ )
+
+ def test_nsec_to_usec_round(self):
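+ # _nsec_to_usec_round adds 500 ns and floor-divides by 1000, i.e. it
+ # rounds a nanosecond timestamp to the nearest microsecond.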
+ base_time_nsec = 683647322 * 10**9
+ for nsec in (
+ base_time_nsec,
+ base_time_nsec + 150 * 10**6,
+ base_time_nsec + 300 * 10**6,
+ base_time_nsec + 400 * 10**6,
+ ):
+ self.assertEqual(
+ (nsec + 500) // 10**3,
+ self.get_encoder_default()._nsec_to_usec_round(nsec),
+ )
+
+ def test_encode_debug(self):
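+ # The encoder derives Zipkin's debug flag from the sampled trace flag:
+ # an unsampled context yields a falsy value, a sampled one yields True.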
+ self.assertFalse(
+ self.get_encoder_default()._encode_debug(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.DEFAULT),
+ )
+ )
+ )
+ self.assertTrue(
+ self.get_encoder_default()._encode_debug(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ )
+ )
+ )
+
+ def test_get_parent_id_from_span(self):
+ parent_id = 0x00000000DEADBEF0
+ self.assertEqual(
+ parent_id,
+ self.get_encoder_default()._get_parent_id(
+ trace._Span(
+ name="test-span",
+ context=trace_api.SpanContext(
+ 0x000000000000000000000000DEADBEEF,
+ 0x04BF92DEEFC58C92,
+ is_remote=False,
+ ),
+ parent=trace_api.SpanContext(
+ 0x0000000000000000000000AADEADBEEF,
+ parent_id,
+ is_remote=False,
+ ),
+ )
+ ),
+ )
+
+ def test_get_parent_id_from_span_context(self):
+ parent_id = 0x00000000DEADBEF0
+ self.assertEqual(
+ parent_id,
+ self.get_encoder_default()._get_parent_id(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=parent_id,
+ is_remote=False,
+ ),
+ ),
+ )
+
+ @staticmethod
+ def get_data_for_max_tag_length_test(
+ max_tag_length: int,
+ ) -> Tuple[trace._Span, Dict]:
+ start_time = 683647322 * 10**9 # in ns
+ duration = 50 * 10**6
+ end_time = start_time + duration
+
+ span = trace._Span(
+ name=TEST_SERVICE_NAME,
+ context=trace_api.SpanContext(
+ 0x0E0C63257DE34C926F9EFCD03927272E,
+ 0x04BF92DEEFC58C92,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ resource=trace.Resource({}),
+ )
+ span.start(start_time=start_time)
+ span.set_attribute("string1", "v" * 500)
+ span.set_attribute("string2", "v" * 50)
+ span.set_attribute("list1", ["a"] * 25)
+ span.set_attribute("list2", ["a"] * 10)
+ span.set_attribute("list3", [2] * 25)
+ span.set_attribute("list4", [2] * 10)
+ span.set_attribute("list5", [True] * 25)
+ span.set_attribute("list6", [True] * 10)
+ span.set_attribute("tuple1", ("a",) * 25)
+ span.set_attribute("tuple2", ("a",) * 10)
+ span.set_attribute("tuple3", (2,) * 25)
+ span.set_attribute("tuple4", (2,) * 10)
+ span.set_attribute("tuple5", (True,) * 25)
+ span.set_attribute("tuple6", (True,) * 10)
+ span.set_attribute("range1", range(0, 25))
+ span.set_attribute("range2", range(0, 10))
+ span.set_attribute("empty_list", [])
+ span.set_attribute("none_list", ["hello", None, "world"])
+ span.end(end_time=end_time)
+
+ expected_outputs = {
+ 2: {
+ "string1": "vv",
+ "string2": "vv",
+ "list1": "[]",
+ "list2": "[]",
+ "list3": "[]",
+ "list4": "[]",
+ "list5": "[]",
+ "list6": "[]",
+ "tuple1": "[]",
+ "tuple2": "[]",
+ "tuple3": "[]",
+ "tuple4": "[]",
+ "tuple5": "[]",
+ "tuple6": "[]",
+ "range1": "[]",
+ "range2": "[]",
+ "empty_list": "[]",
+ "none_list": "[]",
+ },
+ 5: {
+ "string1": "vvvvv",
+ "string2": "vvvvv",
+ "list1": '["a"]',
+ "list2": '["a"]',
+ "list3": '["2"]',
+ "list4": '["2"]',
+ "list5": "[]",
+ "list6": "[]",
+ "tuple1": '["a"]',
+ "tuple2": '["a"]',
+ "tuple3": '["2"]',
+ "tuple4": '["2"]',
+ "tuple5": "[]",
+ "tuple6": "[]",
+ "range1": '["0"]',
+ "range2": '["0"]',
+ "empty_list": "[]",
+ "none_list": "[]",
+ },
+ 9: {
+ "string1": "vvvvvvvvv",
+ "string2": "vvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 10: {
+ "string1": "vvvvvvvvvv",
+ "string2": "vvvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 11: {
+ "string1": "vvvvvvvvvvv",
+ "string2": "vvvvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 128: {
+ "string1": "v" * 128,
+ "string2": "v" * 50,
+ "list1": '["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]',
+ "list2": '["a","a","a","a","a","a","a","a","a","a"]',
+ "list3": '["2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2"]',
+ "list4": '["2","2","2","2","2","2","2","2","2","2"]',
+ "list5": '["true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true"]',
+ "list6": '["true","true","true","true","true","true","true","true","true","true"]',
+ "tuple1": '["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]',
+ "tuple2": '["a","a","a","a","a","a","a","a","a","a"]',
+ "tuple3": '["2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2"]',
+ "tuple4": '["2","2","2","2","2","2","2","2","2","2"]',
+ "tuple5": '["true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true"]',
+ "tuple6": '["true","true","true","true","true","true","true","true","true","true"]',
+ "range1": '["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24"]',
+ "range2": '["0","1","2","3","4","5","6","7","8","9"]',
+ "empty_list": "[]",
+ "none_list": '["hello",null,"world"]',
+ },
+ }
+
+ return span, expected_outputs[max_tag_length]
+
+ @staticmethod
+ def get_exhaustive_otel_span_list() -> List[trace._Span]:
+ trace_id = 0x6E0C63257DE34C926F9EFCD03927272E
+
+ base_time = 683647322 * 10**9 # in ns
+ start_times = (
+ base_time,
+ base_time + 150 * 10**6,
+ base_time + 300 * 10**6,
+ base_time + 400 * 10**6,
+ )
+ end_times = (
+ start_times[0] + (50 * 10**6),
+ start_times[1] + (100 * 10**6),
+ start_times[2] + (200 * 10**6),
+ start_times[3] + (300 * 10**6),
+ )
+
+ parent_span_context = trace_api.SpanContext(
+ trace_id, 0x1111111111111111, is_remote=False
+ )
+
+ other_context = trace_api.SpanContext(
+ trace_id, 0x2222222222222222, is_remote=False
+ )
+
+ span1 = trace._Span(
+ name="test-span-1",
+ context=trace_api.SpanContext(
+ trace_id,
+ 0x34BF92DEEFC58C92,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ parent=parent_span_context,
+ events=(
+ trace.Event(
+ name="event0",
+ timestamp=base_time + 50 * 10**6,
+ attributes={
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ },
+ ),
+ ),
+ links=(
+ trace_api.Link(
+ context=other_context, attributes={"key_bool": True}
+ ),
+ ),
+ resource=trace.Resource({}),
+ )
+ span1.start(start_time=start_times[0])
+ span1.set_attribute("key_bool", False)
+ span1.set_attribute("key_string", "hello_world")
+ span1.set_attribute("key_float", 111.22)
+ span1.set_status(Status(StatusCode.OK))
+ span1.end(end_time=end_times[0])
+
+ span2 = trace._Span(
+ name="test-span-2",
+ context=parent_span_context,
+ parent=None,
+ resource=trace.Resource(
+ attributes={"key_resource": "some_resource"}
+ ),
+ )
+ span2.start(start_time=start_times[1])
+ span2.set_status(Status(StatusCode.ERROR, "Example description"))
+ span2.end(end_time=end_times[1])
+
+ span3 = trace._Span(
+ name="test-span-3",
+ context=other_context,
+ parent=None,
+ resource=trace.Resource(
+ attributes={"key_resource": "some_resource"}
+ ),
+ )
+ span3.start(start_time=start_times[2])
+ span3.set_attribute("key_string", "hello_world")
+ span3.end(end_time=end_times[2])
+
+ span4 = trace._Span(
+ name="test-span-3",
+ context=other_context,
+ parent=None,
+ resource=trace.Resource({}),
+ instrumentation_scope=InstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+ span4.start(start_time=start_times[3])
+ span4.end(end_time=end_times[3])
+
+ return [span1, span2, span3, span4]
+
+ # pylint: disable=W0223
+ class CommonJsonEncoderTest(CommonEncoderTest, abc.ABC):
+ def test_encode_trace_id(self):
+ for trace_id in (1, 1024, 2**32, 2**64, 2**65):
+ self.assertEqual(
+ format(trace_id, "032x"),
+ self.get_encoder_default()._encode_trace_id(trace_id),
+ )
+
+ def test_encode_span_id(self):
+ for span_id in (1, 1024, 2**8, 2**16, 2**32, 2**64):
+ self.assertEqual(
+ format(span_id, "016x"),
+ self.get_encoder_default()._encode_span_id(span_id),
+ )
+
+ def test_encode_local_endpoint_default(self):
+ self.assertEqual(
+ self.get_encoder_default()._encode_local_endpoint(
+ NodeEndpoint()
+ ),
+ {"serviceName": TEST_SERVICE_NAME},
+ )
+
+ def test_encode_local_endpoint_explicits(self):
+ ipv4 = "192.168.0.1"
+ ipv6 = "2001:db8::c001"
+ port = 414120
+ self.assertEqual(
+ self.get_encoder_default()._encode_local_endpoint(
+ NodeEndpoint(ipv4, ipv6, port)
+ ),
+ {
+ "serviceName": TEST_SERVICE_NAME,
+ "ipv4": ipv4,
+ "ipv6": ipv6,
+ "port": port,
+ },
+ )
+
+ @staticmethod
+ def pop_and_sort(source_list, source_index, sort_key):
+ """
+ Convenience method that pops the given key from a mapping of encoded
+ output, sorts the popped list by ``sort_key``, and returns it
+ (or None if the key is absent).
+ """
+ popped_item = source_list.pop(source_index, None)
+ if popped_item is not None:
+ popped_item = sorted(popped_item, key=lambda x: x[sort_key])
+ return popped_item
+
+ def assert_equal_encoded_spans(self, expected_spans, actual_spans):
+ self.assertEqual(expected_spans, actual_spans)
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v1_json.py b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v1_json.py
new file mode 100644
index 0000000000..778ed74e8d
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v1_json.py
@@ -0,0 +1,285 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import json
+
+from opentelemetry import trace as trace_api
+from opentelemetry.exporter.zipkin.encoder import (
+ _SCOPE_NAME_KEY,
+ _SCOPE_VERSION_KEY,
+ NAME_KEY,
+ VERSION_KEY,
+)
+from opentelemetry.exporter.zipkin.json.v1 import JsonV1Encoder
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk import trace
+from opentelemetry.test.spantestutil import (
+ get_span_with_dropped_attributes_events_links,
+)
+from opentelemetry.trace import TraceFlags, format_span_id, format_trace_id
+
+from .common_tests import ( # pylint: disable=import-error
+ TEST_SERVICE_NAME,
+ CommonEncoderTestCases,
+)
+
+
+# pylint: disable=protected-access
+class TestV1JsonEncoder(CommonEncoderTestCases.CommonJsonEncoderTest):
+ @staticmethod
+ def get_encoder(*args, **kwargs) -> JsonV1Encoder:
+ return JsonV1Encoder(*args, **kwargs)
+
+ def test_encode(self):
+
+ local_endpoint = {"serviceName": TEST_SERVICE_NAME}
+
+ otel_spans = self.get_exhaustive_otel_span_list()
+ trace_id = JsonV1Encoder._encode_trace_id(
+ otel_spans[0].context.trace_id
+ )
+
+ expected_output = [
+ {
+ "traceId": trace_id,
+ "id": JsonV1Encoder._encode_span_id(
+ otel_spans[0].context.span_id
+ ),
+ "name": otel_spans[0].name,
+ "timestamp": otel_spans[0].start_time // 10**3,
+ "duration": (otel_spans[0].end_time // 10**3)
+ - (otel_spans[0].start_time // 10**3),
+ "annotations": [
+ {
+ "timestamp": otel_spans[0].events[0].timestamp
+ // 10**3,
+ "value": json.dumps(
+ {
+ "event0": {
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ }
+ },
+ sort_keys=True,
+ ),
+ "endpoint": local_endpoint,
+ }
+ ],
+ "binaryAnnotations": [
+ {
+ "key": "key_bool",
+ "value": "false",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "key_string",
+ "value": "hello_world",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "key_float",
+ "value": "111.22",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "otel.status_code",
+ "value": "OK",
+ "endpoint": local_endpoint,
+ },
+ ],
+ "debug": True,
+ "parentId": JsonV1Encoder._encode_span_id(
+ otel_spans[0].parent.span_id
+ ),
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV1Encoder._encode_span_id(
+ otel_spans[1].context.span_id
+ ),
+ "name": otel_spans[1].name,
+ "timestamp": otel_spans[1].start_time // 10**3,
+ "duration": (otel_spans[1].end_time // 10**3)
+ - (otel_spans[1].start_time // 10**3),
+ "binaryAnnotations": [
+ {
+ "key": "key_resource",
+ "value": "some_resource",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "otel.status_code",
+ "value": "ERROR",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "error",
+ "value": "Example description",
+ "endpoint": local_endpoint,
+ },
+ ],
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV1Encoder._encode_span_id(
+ otel_spans[2].context.span_id
+ ),
+ "name": otel_spans[2].name,
+ "timestamp": otel_spans[2].start_time // 10**3,
+ "duration": (otel_spans[2].end_time // 10**3)
+ - (otel_spans[2].start_time // 10**3),
+ "binaryAnnotations": [
+ {
+ "key": "key_string",
+ "value": "hello_world",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": "key_resource",
+ "value": "some_resource",
+ "endpoint": local_endpoint,
+ },
+ ],
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV1Encoder._encode_span_id(
+ otel_spans[3].context.span_id
+ ),
+ "name": otel_spans[3].name,
+ "timestamp": otel_spans[3].start_time // 10**3,
+ "duration": (otel_spans[3].end_time // 10**3)
+ - (otel_spans[3].start_time // 10**3),
+ "binaryAnnotations": [
+ {
+ "key": NAME_KEY,
+ "value": "name",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": VERSION_KEY,
+ "value": "version",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": _SCOPE_NAME_KEY,
+ "value": "name",
+ "endpoint": local_endpoint,
+ },
+ {
+ "key": _SCOPE_VERSION_KEY,
+ "value": "version",
+ "endpoint": local_endpoint,
+ },
+ ],
+ },
+ ]
+
+ self.assert_equal_encoded_spans(
+ json.dumps(expected_output),
+ JsonV1Encoder().serialize(otel_spans, NodeEndpoint()),
+ )
+
+ def test_encode_id_zero_padding(self):
+ trace_id = 0x0E0C63257DE34C926F9EFCD03927272E
+ span_id = 0x04BF92DEEFC58C92
+ parent_id = 0x0AAAAAAAAAAAAAAA
+ start_time = 683647322 * 10**9 # in ns
+ duration = 50 * 10**6
+ end_time = start_time + duration
+
+ otel_span = trace._Span(
+ name=TEST_SERVICE_NAME,
+ context=trace_api.SpanContext(
+ trace_id,
+ span_id,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ parent=trace_api.SpanContext(trace_id, parent_id, is_remote=False),
+ resource=trace.Resource({}),
+ )
+ otel_span.start(start_time=start_time)
+ otel_span.end(end_time=end_time)
+
+ expected_output = [
+ {
+ "traceId": format_trace_id(trace_id),
+ "id": format_span_id(span_id),
+ "name": TEST_SERVICE_NAME,
+ "timestamp": JsonV1Encoder._nsec_to_usec_round(start_time),
+ "duration": JsonV1Encoder._nsec_to_usec_round(duration),
+ "debug": True,
+ "parentId": format_span_id(parent_id),
+ }
+ ]
+
+ self.assertEqual(
+ json.dumps(expected_output),
+ JsonV1Encoder().serialize([otel_span], NodeEndpoint()),
+ )
+
+ def _test_encode_max_tag_length(self, max_tag_value_length: int):
+ otel_span, expected_tag_output = self.get_data_for_max_tag_length_test(
+ max_tag_value_length
+ )
+ service_name = otel_span.name
+
+ binary_annotations = []
+ for tag_key, tag_expected_value in expected_tag_output.items():
+ binary_annotations.append(
+ {
+ "key": tag_key,
+ "value": tag_expected_value,
+ "endpoint": {"serviceName": service_name},
+ }
+ )
+
+ expected_output = [
+ {
+ "traceId": JsonV1Encoder._encode_trace_id(
+ otel_span.context.trace_id
+ ),
+ "id": JsonV1Encoder._encode_span_id(otel_span.context.span_id),
+ "name": service_name,
+ "timestamp": JsonV1Encoder._nsec_to_usec_round(
+ otel_span.start_time
+ ),
+ "duration": JsonV1Encoder._nsec_to_usec_round(
+ otel_span.end_time - otel_span.start_time
+ ),
+ "binaryAnnotations": binary_annotations,
+ "debug": True,
+ }
+ ]
+
+ self.assert_equal_encoded_spans(
+ json.dumps(expected_output),
+ JsonV1Encoder(max_tag_value_length).serialize(
+ [otel_span], NodeEndpoint()
+ ),
+ )
+
+ def test_dropped_span_attributes(self):
+ otel_span = get_span_with_dropped_attributes_events_links()
+ annotations = JsonV1Encoder()._encode_span(otel_span, "test")[
+ "binaryAnnotations"
+ ]
+ annotations = {
+ annotation["key"]: annotation["value"]
+ for annotation in annotations
+ }
+ self.assertEqual("1", annotations["otel.dropped_links_count"])
+ self.assertEqual("2", annotations["otel.dropped_attributes_count"])
+ self.assertEqual("3", annotations["otel.dropped_events_count"])
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v2_json.py b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v2_json.py
new file mode 100644
index 0000000000..37a0414fca
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/tests/encoder/test_v2_json.py
@@ -0,0 +1,229 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import json
+
+from opentelemetry import trace as trace_api
+from opentelemetry.exporter.zipkin.encoder import (
+ _SCOPE_NAME_KEY,
+ _SCOPE_VERSION_KEY,
+ NAME_KEY,
+ VERSION_KEY,
+)
+from opentelemetry.exporter.zipkin.json.v2 import JsonV2Encoder
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk import trace
+from opentelemetry.test.spantestutil import (
+ get_span_with_dropped_attributes_events_links,
+)
+from opentelemetry.trace import SpanKind, TraceFlags
+
+from .common_tests import ( # pylint: disable=import-error
+ TEST_SERVICE_NAME,
+ CommonEncoderTestCases,
+)
+
+
+# pylint: disable=protected-access
+class TestV2JsonEncoder(CommonEncoderTestCases.CommonJsonEncoderTest):
+ @staticmethod
+ def get_encoder(*args, **kwargs) -> JsonV2Encoder:
+ return JsonV2Encoder(*args, **kwargs)
+
+ def test_encode(self):
+ local_endpoint = {"serviceName": TEST_SERVICE_NAME}
+ span_kind = JsonV2Encoder.SPAN_KIND_MAP[SpanKind.INTERNAL]
+
+ otel_spans = self.get_exhaustive_otel_span_list()
+ trace_id = JsonV2Encoder._encode_trace_id(
+ otel_spans[0].context.trace_id
+ )
+
+ expected_output = [
+ {
+ "traceId": trace_id,
+ "id": JsonV2Encoder._encode_span_id(
+ otel_spans[0].context.span_id
+ ),
+ "name": otel_spans[0].name,
+ "timestamp": otel_spans[0].start_time // 10**3,
+ "duration": (otel_spans[0].end_time // 10**3)
+ - (otel_spans[0].start_time // 10**3),
+ "localEndpoint": local_endpoint,
+ "kind": span_kind,
+ "tags": {
+ "key_bool": "false",
+ "key_string": "hello_world",
+ "key_float": "111.22",
+ "otel.status_code": "OK",
+ },
+ "annotations": [
+ {
+ "timestamp": otel_spans[0].events[0].timestamp
+ // 10**3,
+ "value": json.dumps(
+ {
+ "event0": {
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ }
+ },
+ sort_keys=True,
+ ),
+ }
+ ],
+ "debug": True,
+ "parentId": JsonV2Encoder._encode_span_id(
+ otel_spans[0].parent.span_id
+ ),
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV2Encoder._encode_span_id(
+ otel_spans[1].context.span_id
+ ),
+ "name": otel_spans[1].name,
+ "timestamp": otel_spans[1].start_time // 10**3,
+ "duration": (otel_spans[1].end_time // 10**3)
+ - (otel_spans[1].start_time // 10**3),
+ "localEndpoint": local_endpoint,
+ "kind": span_kind,
+ "tags": {
+ "key_resource": "some_resource",
+ "otel.status_code": "ERROR",
+ "error": "Example description",
+ },
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV2Encoder._encode_span_id(
+ otel_spans[2].context.span_id
+ ),
+ "name": otel_spans[2].name,
+ "timestamp": otel_spans[2].start_time // 10**3,
+ "duration": (otel_spans[2].end_time // 10**3)
+ - (otel_spans[2].start_time // 10**3),
+ "localEndpoint": local_endpoint,
+ "kind": span_kind,
+ "tags": {
+ "key_string": "hello_world",
+ "key_resource": "some_resource",
+ },
+ },
+ {
+ "traceId": trace_id,
+ "id": JsonV2Encoder._encode_span_id(
+ otel_spans[3].context.span_id
+ ),
+ "name": otel_spans[3].name,
+ "timestamp": otel_spans[3].start_time // 10**3,
+ "duration": (otel_spans[3].end_time // 10**3)
+ - (otel_spans[3].start_time // 10**3),
+ "localEndpoint": local_endpoint,
+ "kind": span_kind,
+ "tags": {
+ NAME_KEY: "name",
+ VERSION_KEY: "version",
+ _SCOPE_NAME_KEY: "name",
+ _SCOPE_VERSION_KEY: "version",
+ },
+ },
+ ]
+
+ self.assert_equal_encoded_spans(
+ json.dumps(expected_output),
+ JsonV2Encoder().serialize(otel_spans, NodeEndpoint()),
+ )
+
+ def test_encode_id_zero_padding(self):
+ trace_id = 0x0E0C63257DE34C926F9EFCD03927272E
+ span_id = 0x04BF92DEEFC58C92
+ parent_id = 0x0AAAAAAAAAAAAAAA
+ start_time = 683647322 * 10**9 # in ns
+ duration = 50 * 10**6
+ end_time = start_time + duration
+
+ otel_span = trace._Span(
+ name=TEST_SERVICE_NAME,
+ context=trace_api.SpanContext(
+ trace_id,
+ span_id,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ parent=trace_api.SpanContext(trace_id, parent_id, is_remote=False),
+ resource=trace.Resource({}),
+ )
+ otel_span.start(start_time=start_time)
+ otel_span.end(end_time=end_time)
+
+ expected_output = [
+ {
+ "traceId": format(trace_id, "032x"),
+ "id": format(span_id, "016x"),
+ "name": TEST_SERVICE_NAME,
+ "timestamp": JsonV2Encoder._nsec_to_usec_round(start_time),
+ "duration": JsonV2Encoder._nsec_to_usec_round(duration),
+ "localEndpoint": {"serviceName": TEST_SERVICE_NAME},
+ "kind": JsonV2Encoder.SPAN_KIND_MAP[SpanKind.INTERNAL],
+ "debug": True,
+ "parentId": format(parent_id, "016x"),
+ }
+ ]
+
+ self.assert_equal_encoded_spans(
+ json.dumps(expected_output),
+ JsonV2Encoder().serialize([otel_span], NodeEndpoint()),
+ )
+
+ def _test_encode_max_tag_length(self, max_tag_value_length: int):
+ otel_span, expected_tag_output = self.get_data_for_max_tag_length_test(
+ max_tag_value_length
+ )
+ service_name = otel_span.name
+
+ expected_output = [
+ {
+ "traceId": JsonV2Encoder._encode_trace_id(
+ otel_span.context.trace_id
+ ),
+ "id": JsonV2Encoder._encode_span_id(otel_span.context.span_id),
+ "name": service_name,
+ "timestamp": JsonV2Encoder._nsec_to_usec_round(
+ otel_span.start_time
+ ),
+ "duration": JsonV2Encoder._nsec_to_usec_round(
+ otel_span.end_time - otel_span.start_time
+ ),
+ "localEndpoint": {"serviceName": service_name},
+ "kind": JsonV2Encoder.SPAN_KIND_MAP[SpanKind.INTERNAL],
+ "tags": expected_tag_output,
+ "debug": True,
+ }
+ ]
+
+ self.assert_equal_encoded_spans(
+ json.dumps(expected_output),
+ JsonV2Encoder(max_tag_value_length).serialize(
+ [otel_span], NodeEndpoint()
+ ),
+ )
+
+ def test_dropped_span_attributes(self):
+ otel_span = get_span_with_dropped_attributes_events_links()
+ tags = JsonV2Encoder()._encode_span(otel_span, "test")["tags"]
+
+ self.assertEqual("1", tags["otel.dropped_links_count"])
+ self.assertEqual("2", tags["otel.dropped_attributes_count"])
+ self.assertEqual("3", tags["otel.dropped_events_count"])
diff --git a/exporter/opentelemetry-exporter-zipkin-json/tests/test_zipkin_exporter.py b/exporter/opentelemetry-exporter-zipkin-json/tests/test_zipkin_exporter.py
new file mode 100644
index 0000000000..77e3ef5375
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-json/tests/test_zipkin_exporter.py
@@ -0,0 +1,228 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import ipaddress
+import os
+import unittest
+from unittest.mock import patch
+
+import requests
+
+from opentelemetry import trace
+from opentelemetry.exporter.zipkin.encoder import Protocol
+from opentelemetry.exporter.zipkin.json import DEFAULT_ENDPOINT, ZipkinExporter
+from opentelemetry.exporter.zipkin.json.v2 import JsonV2Encoder
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_ZIPKIN_ENDPOINT,
+ OTEL_EXPORTER_ZIPKIN_TIMEOUT,
+)
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+from opentelemetry.sdk.trace import TracerProvider, _Span
+from opentelemetry.sdk.trace.export import SpanExportResult
+
+TEST_SERVICE_NAME = "test_service"
+
+
+class MockResponse:
+ def __init__(self, status_code):
+ self.status_code = status_code
+ self.text = status_code
+
+
+class TestZipkinExporter(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ trace.set_tracer_provider(
+ TracerProvider(
+ resource=Resource({SERVICE_NAME: TEST_SERVICE_NAME})
+ )
+ )
+
+ def tearDown(self):
+ os.environ.pop(OTEL_EXPORTER_ZIPKIN_ENDPOINT, None)
+ os.environ.pop(OTEL_EXPORTER_ZIPKIN_TIMEOUT, None)
+
+ def test_constructor_default(self):
+ exporter = ZipkinExporter()
+ self.assertIsInstance(exporter.encoder, JsonV2Encoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, DEFAULT_ENDPOINT)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+
+ def test_constructor_env_vars(self):
+ os_endpoint = "https://foo:9911/path"
+ os.environ[OTEL_EXPORTER_ZIPKIN_ENDPOINT] = os_endpoint
+ os.environ[OTEL_EXPORTER_ZIPKIN_TIMEOUT] = "15"
+
+ exporter = ZipkinExporter()
+
+ self.assertEqual(exporter.endpoint, os_endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+ self.assertEqual(exporter.timeout, 15)
+
+ def test_constructor_protocol_endpoint(self):
+ """Test the constructor for the common usage of providing the
+ protocol and endpoint arguments."""
+ endpoint = "https://opentelemetry.io:15875/myapi/traces?format=zipkin"
+
+ exporter = ZipkinExporter(endpoint=endpoint)
+
+ self.assertIsInstance(exporter.encoder, JsonV2Encoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+
+ def test_constructor_all_params_and_env_vars(self):
+ """Test the scenario where all params are provided and all OS env
+ vars are set. Explicit params should take precedence.
+ """
+ os_endpoint = "https://os.env.param:9911/path"
+ os.environ[OTEL_EXPORTER_ZIPKIN_ENDPOINT] = os_endpoint
+ os.environ[OTEL_EXPORTER_ZIPKIN_TIMEOUT] = "15"
+
+ constructor_param_version = Protocol.V2
+ constructor_param_endpoint = "https://constructor.param:9911/path"
+ local_node_ipv4 = "192.168.0.1"
+ local_node_ipv6 = "2001:db8::1000"
+ local_node_port = 30301
+ max_tag_value_length = 56
+ timeout_param = 20
+ session_param = requests.Session()
+
+ exporter = ZipkinExporter(
+ constructor_param_version,
+ constructor_param_endpoint,
+ local_node_ipv4,
+ local_node_ipv6,
+ local_node_port,
+ max_tag_value_length,
+ timeout_param,
+ session_param,
+ )
+
+ self.assertIsInstance(exporter.encoder, JsonV2Encoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, constructor_param_endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(
+ exporter.local_node.ipv4, ipaddress.IPv4Address(local_node_ipv4)
+ )
+ self.assertEqual(
+ exporter.local_node.ipv6, ipaddress.IPv6Address(local_node_ipv6)
+ )
+ self.assertEqual(exporter.local_node.port, local_node_port)
+ # Assert timeout passed in constructor is prioritized over env
+ # when both are set.
+ self.assertEqual(exporter.timeout, 20)
+
+ @patch("requests.Session.post")
+ def test_export_success(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+
+ @patch("requests.Session.post")
+ def test_export_invalid_response(self, mock_post):
+ mock_post.return_value = MockResponse(404)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.FAILURE, status)
+
+ @patch("requests.Session.post")
+ def test_export_span_service_name(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ resource = Resource.create({SERVICE_NAME: "test"})
+ context = trace.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ )
+ span = _Span("test_span", context=context, resource=resource)
+ span.start()
+ span.end()
+ exporter = ZipkinExporter()
+ exporter.export([span])
+ self.assertEqual(exporter.local_node.service_name, "test")
+
+ @patch("requests.Session.post")
+ def test_export_shutdown(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+
+ exporter.shutdown()
+ # Any call to .export() post shutdown should return failure
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.FAILURE, status)
+
+ @patch("requests.Session.post")
+ def test_export_timeout(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter(timeout=2)
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+ mock_post.assert_called_with(
+ url="http://localhost:9411/api/v2/spans", data="[]", timeout=2
+ )
+
+
+class TestZipkinNodeEndpoint(unittest.TestCase):
+ def test_constructor_default(self):
+ node_endpoint = NodeEndpoint()
+ self.assertEqual(node_endpoint.ipv4, None)
+ self.assertEqual(node_endpoint.ipv6, None)
+ self.assertEqual(node_endpoint.port, None)
+ self.assertEqual(node_endpoint.service_name, TEST_SERVICE_NAME)
+
+ def test_constructor_explicits(self):
+ ipv4 = "192.168.0.1"
+ ipv6 = "2001:db8::c001"
+ port = 414120
+ node_endpoint = NodeEndpoint(ipv4, ipv6, port)
+ self.assertEqual(node_endpoint.ipv4, ipaddress.IPv4Address(ipv4))
+ self.assertEqual(node_endpoint.ipv6, ipaddress.IPv6Address(ipv6))
+ self.assertEqual(node_endpoint.port, port)
+ self.assertEqual(node_endpoint.service_name, TEST_SERVICE_NAME)
+
+ def test_ipv4_invalid_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv4="invalid-ipv4-address")
+
+ def test_ipv4_passed_ipv6_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv4="2001:db8::c001")
+
+ def test_ipv6_invalid_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv6="invalid-ipv6-address")
+
+ def test_ipv6_passed_ipv4_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv6="192.168.0.1")
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/CHANGELOG.md b/exporter/opentelemetry-exporter-zipkin-proto-http/CHANGELOG.md
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/LICENSE b/exporter/opentelemetry-exporter-zipkin-proto-http/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/README.rst b/exporter/opentelemetry-exporter-zipkin-proto-http/README.rst
new file mode 100644
index 0000000000..12801dbf37
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/README.rst
@@ -0,0 +1,25 @@
+OpenTelemetry Zipkin Protobuf Exporter
+======================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-zipkin-proto-http.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-zipkin-proto-http/
+
+This library allows export of tracing data to `Zipkin <https://zipkin.io/>`_ using Protobuf
+for serialization.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-zipkin-proto-http
+
+
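+Usage
+-----
+
+A minimal sketch of wiring the exporter into the SDK (assumes a Zipkin
+collector listening on the default ``http://localhost:9411/api/v2/spans``
+endpoint; see the package docstring for the full set of options):
+
+.. code:: python
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.zipkin.proto.http import ZipkinExporter
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+ trace.set_tracer_provider(TracerProvider())
+ trace.get_tracer_provider().add_span_processor(
+ BatchSpanProcessor(ZipkinExporter())
+ )
+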
+References
+----------
+
+* `OpenTelemetry Zipkin Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/zipkin/zipkin.html>`_
+* `Zipkin <https://zipkin.io/>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/pyproject.toml b/exporter/opentelemetry-exporter-zipkin-proto-http/pyproject.toml
new file mode 100644
index 0000000000..02a480c3a3
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/pyproject.toml
@@ -0,0 +1,55 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-zipkin-proto-http"
+dynamic = ["version"]
+description = "Zipkin Span Protobuf Exporter for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-api ~= 1.3",
+ "opentelemetry-exporter-zipkin-json == 1.23.0.dev",
+ "opentelemetry-sdk ~= 1.11",
+ "protobuf ~= 3.12",
+ "requests ~= 2.7",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_traces_exporter]
+zipkin_proto = "opentelemetry.exporter.zipkin.proto.http:ZipkinExporter"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-zipkin-proto-http"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/zipkin/proto/http/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/__init__.py b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/__init__.py
new file mode 100644
index 0000000000..8177efc07b
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/__init__.py
@@ -0,0 +1,183 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+OpenTelemetry Zipkin Protobuf Exporter
+--------------------------------------
+
+This library allows exporting tracing data to `Zipkin`_.
+
+Usage
+-----
+
+The **OpenTelemetry Zipkin Exporter** exports `OpenTelemetry`_ traces to
+`Zipkin`_. It sends spans to the configured Zipkin collector endpoint over
+HTTP, serialized as v2 protobuf.
+
+.. _Zipkin: https://zipkin.io/
+.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+.. _Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#zipkin-exporter
+
+.. code:: python
+
+ import requests
+
+ from opentelemetry import trace
+ from opentelemetry.exporter.zipkin.proto.http import ZipkinExporter
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+ trace.set_tracer_provider(TracerProvider())
+ tracer = trace.get_tracer(__name__)
+
+ # create a ZipkinExporter
+ zipkin_exporter = ZipkinExporter(
+ # optional:
+ # endpoint="http://localhost:9411/api/v2/spans",
+ # local_node_ipv4="192.168.0.1",
+ # local_node_ipv6="2001:db8::c001",
+ # local_node_port=31313,
+ # max_tag_value_length=256,
+ # timeout=5,  # in seconds
+ # session=requests.Session()
+ )
+
+ # Create a BatchSpanProcessor and add the exporter to it
+ span_processor = BatchSpanProcessor(zipkin_exporter)
+
+ # add to the tracer
+ trace.get_tracer_provider().add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ print("Hello world!")
+
+The exporter supports the following environment variables for configuration,
+as defined in the `Specification`_:
+
+- :envvar:`OTEL_EXPORTER_ZIPKIN_ENDPOINT`
+- :envvar:`OTEL_EXPORTER_ZIPKIN_TIMEOUT`
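+
+For example, both can be set from the environment instead of code (the values
+shown here are the exporter's defaults):
+
+.. code:: console
+
+ $ export OTEL_EXPORTER_ZIPKIN_ENDPOINT="http://localhost:9411/api/v2/spans"
+ $ export OTEL_EXPORTER_ZIPKIN_TIMEOUT=10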
+
+API
+---
+"""
+
+import logging
+from os import environ
+from typing import Optional, Sequence
+
+import requests
+
+from opentelemetry.exporter.zipkin.proto.http.v2 import ProtobufEncoder
+from opentelemetry.exporter.zipkin.node_endpoint import IpInput, NodeEndpoint
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_ZIPKIN_ENDPOINT,
+ OTEL_EXPORTER_ZIPKIN_TIMEOUT,
+)
+from opentelemetry.sdk.resources import SERVICE_NAME
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+from opentelemetry.trace import Span
+
+DEFAULT_ENDPOINT = "http://localhost:9411/api/v2/spans"
+REQUESTS_SUCCESS_STATUS_CODES = (200, 202)
+
+logger = logging.getLogger(__name__)
+
+
+class ZipkinExporter(SpanExporter):
+ def __init__(
+ self,
+ endpoint: Optional[str] = None,
+ local_node_ipv4: IpInput = None,
+ local_node_ipv6: IpInput = None,
+ local_node_port: Optional[int] = None,
+ max_tag_value_length: Optional[int] = None,
+ timeout: Optional[int] = None,
+ session: Optional[requests.Session] = None,
+ ):
+ """Zipkin exporter.
+
+ Args:
+ endpoint: The endpoint of the Zipkin collector.
+ local_node_ipv4: Primary IPv4 address associated with this connection.
+ local_node_ipv6: Primary IPv6 address associated with this connection.
+ local_node_port: Depending on context, this could be a listen port or the
+ client-side of a socket.
+ max_tag_value_length: Max length string attribute values can have.
+ timeout: Maximum time the Zipkin exporter will wait for each batch export.
+ The default value is 10s.
+ session: Connection session to the Zipkin collector endpoint.
+
+ The tuple (local_node_ipv4, local_node_ipv6, local_node_port) is used to represent
+ the network context of a node in the service graph.
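+
+ For example (illustrative values, matching the module usage example)::
+
+     ZipkinExporter(
+         local_node_ipv4="192.168.0.1",
+         local_node_ipv6="2001:db8::c001",
+         local_node_port=31313,
+     )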
+ """
+ self.local_node = NodeEndpoint(
+ local_node_ipv4, local_node_ipv6, local_node_port
+ )
+
+ if endpoint is None:
+ endpoint = (
+ environ.get(OTEL_EXPORTER_ZIPKIN_ENDPOINT) or DEFAULT_ENDPOINT
+ )
+ self.endpoint = endpoint
+
+ self.encoder = ProtobufEncoder(max_tag_value_length)
+
+ self.session = session or requests.Session()
+ self.session.headers.update(
+ {"Content-Type": self.encoder.content_type()}
+ )
+ self._closed = False
+ self.timeout = timeout or int(
+ environ.get(OTEL_EXPORTER_ZIPKIN_TIMEOUT, 10)
+ )
+
+ def export(self, spans: Sequence[Span]) -> SpanExportResult:
+ # After the call to Shutdown subsequent calls to Export are
+ # not allowed and should return a Failure result
+ if self._closed:
+ logger.warning("Exporter already shutdown, ignoring batch")
+ return SpanExportResult.FAILURE
+ # Populate service_name from first span
+ # We restrict any SpanProcessor to be only associated with a single
+ # TracerProvider, so it is safe to assume that all Spans in a single
+ # batch all originate from one TracerProvider (and in turn have all
+ # the same service.name)
+ if spans:
+ service_name = spans[0].resource.attributes.get(SERVICE_NAME)
+ if service_name:
+ self.local_node.service_name = service_name
+ result = self.session.post(
+ url=self.endpoint,
+ data=self.encoder.serialize(spans, self.local_node),
+ timeout=self.timeout,
+ )
+
+ if result.status_code not in REQUESTS_SUCCESS_STATUS_CODES:
+ logger.error(
+ "Traces cannot be uploaded; status code: %s, message %s",
+ result.status_code,
+ result.text,
+ )
+ return SpanExportResult.FAILURE
+ return SpanExportResult.SUCCESS
+
+ def shutdown(self) -> None:
+ if self._closed:
+ logger.warning("Exporter already shutdown, ignoring call")
+ return
+ self.session.close()
+ self._closed = True
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ return True
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/py.typed b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/__init__.py b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/__init__.py
new file mode 100644
index 0000000000..676c2496f7
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/__init__.py
@@ -0,0 +1,129 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Zipkin Export Encoder for Protobuf
+
+API spec: https://github.com/openzipkin/zipkin-api/blob/master/zipkin.proto
+"""
+from typing import List, Optional, Sequence
+
+from opentelemetry.exporter.zipkin.encoder import Encoder
+from opentelemetry.exporter.zipkin.proto.http.v2.gen import zipkin_pb2
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk.trace import Event
+from opentelemetry.trace import Span, SpanKind
+
+
+class ProtobufEncoder(Encoder):
+ """Zipkin Export Encoder for Protobuf
+
+ API spec: https://github.com/openzipkin/zipkin-api/blob/master/zipkin.proto
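+
+ A minimal sketch of intended use (``spans`` is assumed to be a sequence of
+ SDK spans)::
+
+     encoder = ProtobufEncoder()
+     payload = encoder.serialize(spans, NodeEndpoint())  # serialized bytes
+     assert encoder.content_type() == "application/x-protobuf"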
+ """
+
+ SPAN_KIND_MAP = {
+ SpanKind.INTERNAL: zipkin_pb2.Span.Kind.SPAN_KIND_UNSPECIFIED,
+ SpanKind.SERVER: zipkin_pb2.Span.Kind.SERVER,
+ SpanKind.CLIENT: zipkin_pb2.Span.Kind.CLIENT,
+ SpanKind.PRODUCER: zipkin_pb2.Span.Kind.PRODUCER,
+ SpanKind.CONSUMER: zipkin_pb2.Span.Kind.CONSUMER,
+ }
+
+ @staticmethod
+ def content_type():
+ return "application/x-protobuf"
+
+ def serialize(
+ self, spans: Sequence[Span], local_endpoint: NodeEndpoint
+ ) -> bytes:
+ encoded_local_endpoint = self._encode_local_endpoint(local_endpoint)
+ # pylint: disable=no-member
+ encoded_spans = zipkin_pb2.ListOfSpans()
+ for span in spans:
+ encoded_spans.spans.append(
+ self._encode_span(span, encoded_local_endpoint)
+ )
+ return encoded_spans.SerializeToString()
+
+ def _encode_span(
+ self, span: Span, encoded_local_endpoint: zipkin_pb2.Endpoint
+ ) -> zipkin_pb2.Span:
+ context = span.get_span_context()
+ # pylint: disable=no-member
+ encoded_span = zipkin_pb2.Span(
+ trace_id=self._encode_trace_id(context.trace_id),
+ id=self._encode_span_id(context.span_id),
+ name=span.name,
+ timestamp=self._nsec_to_usec_round(span.start_time),
+ duration=self._nsec_to_usec_round(span.end_time - span.start_time),
+ local_endpoint=encoded_local_endpoint,
+ kind=self.SPAN_KIND_MAP[span.kind],
+ )
+
+ tags = self._extract_tags_from_span(span)
+ if tags:
+ encoded_span.tags.update(tags)
+
+ annotations = self._encode_annotations(span.events)
+ if annotations:
+ encoded_span.annotations.extend(annotations)
+
+ debug = self._encode_debug(context)
+ if debug:
+ encoded_span.debug = debug
+
+ parent_id = self._get_parent_id(span.parent)
+ if parent_id is not None:
+ encoded_span.parent_id = self._encode_span_id(parent_id)
+
+ return encoded_span
+
+ def _encode_annotations(
+ self, span_events: Optional[List[Event]]
+ ) -> Optional[List]:
+ annotations = self._extract_annotations_from_events(span_events)
+ if annotations is None:
+ encoded_annotations = None
+ else:
+ encoded_annotations = []
+ for annotation in annotations:
+ encoded_annotations.append(
+ zipkin_pb2.Annotation(
+ timestamp=annotation["timestamp"],
+ value=annotation["value"],
+ )
+ )
+ return encoded_annotations
+
+ @staticmethod
+ def _encode_local_endpoint(
+ local_endpoint: NodeEndpoint,
+ ) -> zipkin_pb2.Endpoint:
+ encoded_local_endpoint = zipkin_pb2.Endpoint(
+ service_name=local_endpoint.service_name,
+ )
+ if local_endpoint.ipv4 is not None:
+ encoded_local_endpoint.ipv4 = local_endpoint.ipv4.packed
+ if local_endpoint.ipv6 is not None:
+ encoded_local_endpoint.ipv6 = local_endpoint.ipv6.packed
+ if local_endpoint.port is not None:
+ encoded_local_endpoint.port = local_endpoint.port
+ return encoded_local_endpoint
+
+ @staticmethod
+ def _encode_span_id(span_id: int) -> bytes:
+ return span_id.to_bytes(length=8, byteorder="big", signed=False)
+
+ @staticmethod
+ def _encode_trace_id(trace_id: int) -> bytes:
+ return trace_id.to_bytes(length=16, byteorder="big", signed=False)
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/__init__.py b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.py b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.py
new file mode 100644
index 0000000000..7b578febc1
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.py
@@ -0,0 +1,458 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: zipkin.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+
+
+DESCRIPTOR = _descriptor.FileDescriptor(
+ name='zipkin.proto',
+ package='zipkin.proto3',
+ syntax='proto3',
+ serialized_options=b'\n\016zipkin2.proto3P\001',
+ create_key=_descriptor._internal_create_key,
+ serialized_pb=b'\n\x0czipkin.proto\x12\rzipkin.proto3\"\xf5\x03\n\x04Span\x12\x10\n\x08trace_id\x18\x01 \x01(\x0c\x12\x11\n\tparent_id\x18\x02 \x01(\x0c\x12\n\n\x02id\x18\x03 \x01(\x0c\x12&\n\x04kind\x18\x04 \x01(\x0e\x32\x18.zipkin.proto3.Span.Kind\x12\x0c\n\x04name\x18\x05 \x01(\t\x12\x11\n\ttimestamp\x18\x06 \x01(\x06\x12\x10\n\x08\x64uration\x18\x07 \x01(\x04\x12/\n\x0elocal_endpoint\x18\x08 \x01(\x0b\x32\x17.zipkin.proto3.Endpoint\x12\x30\n\x0fremote_endpoint\x18\t \x01(\x0b\x32\x17.zipkin.proto3.Endpoint\x12.\n\x0b\x61nnotations\x18\n \x03(\x0b\x32\x19.zipkin.proto3.Annotation\x12+\n\x04tags\x18\x0b \x03(\x0b\x32\x1d.zipkin.proto3.Span.TagsEntry\x12\r\n\x05\x64\x65\x62ug\x18\x0c \x01(\x08\x12\x0e\n\x06shared\x18\r \x01(\x08\x1a+\n\tTagsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"U\n\x04Kind\x12\x19\n\x15SPAN_KIND_UNSPECIFIED\x10\x00\x12\n\n\x06\x43LIENT\x10\x01\x12\n\n\x06SERVER\x10\x02\x12\x0c\n\x08PRODUCER\x10\x03\x12\x0c\n\x08\x43ONSUMER\x10\x04\"J\n\x08\x45ndpoint\x12\x14\n\x0cservice_name\x18\x01 \x01(\t\x12\x0c\n\x04ipv4\x18\x02 \x01(\x0c\x12\x0c\n\x04ipv6\x18\x03 \x01(\x0c\x12\x0c\n\x04port\x18\x04 \x01(\x05\".\n\nAnnotation\x12\x11\n\ttimestamp\x18\x01 \x01(\x06\x12\r\n\x05value\x18\x02 \x01(\t\"1\n\x0bListOfSpans\x12\"\n\x05spans\x18\x01 \x03(\x0b\x32\x13.zipkin.proto3.Span\"\x10\n\x0eReportResponse2T\n\x0bSpanService\x12\x45\n\x06Report\x12\x1a.zipkin.proto3.ListOfSpans\x1a\x1d.zipkin.proto3.ReportResponse\"\x00\x42\x12\n\x0ezipkin2.proto3P\x01\x62\x06proto3'
+)
+
+
+
+_SPAN_KIND = _descriptor.EnumDescriptor(
+ name='Kind',
+ full_name='zipkin.proto3.Span.Kind',
+ filename=None,
+ file=DESCRIPTOR,
+ create_key=_descriptor._internal_create_key,
+ values=[
+ _descriptor.EnumValueDescriptor(
+ name='SPAN_KIND_UNSPECIFIED', index=0, number=0,
+ serialized_options=None,
+ type=None,
+ create_key=_descriptor._internal_create_key),
+ _descriptor.EnumValueDescriptor(
+ name='CLIENT', index=1, number=1,
+ serialized_options=None,
+ type=None,
+ create_key=_descriptor._internal_create_key),
+ _descriptor.EnumValueDescriptor(
+ name='SERVER', index=2, number=2,
+ serialized_options=None,
+ type=None,
+ create_key=_descriptor._internal_create_key),
+ _descriptor.EnumValueDescriptor(
+ name='PRODUCER', index=3, number=3,
+ serialized_options=None,
+ type=None,
+ create_key=_descriptor._internal_create_key),
+ _descriptor.EnumValueDescriptor(
+ name='CONSUMER', index=4, number=4,
+ serialized_options=None,
+ type=None,
+ create_key=_descriptor._internal_create_key),
+ ],
+ containing_type=None,
+ serialized_options=None,
+ serialized_start=448,
+ serialized_end=533,
+)
+_sym_db.RegisterEnumDescriptor(_SPAN_KIND)
+
+
+_SPAN_TAGSENTRY = _descriptor.Descriptor(
+ name='TagsEntry',
+ full_name='zipkin.proto3.Span.TagsEntry',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='key', full_name='zipkin.proto3.Span.TagsEntry.key', index=0,
+ number=1, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"".decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='value', full_name='zipkin.proto3.Span.TagsEntry.value', index=1,
+ number=2, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"".decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=b'8\001',
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=403,
+ serialized_end=446,
+)
+
+_SPAN = _descriptor.Descriptor(
+ name='Span',
+ full_name='zipkin.proto3.Span',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='trace_id', full_name='zipkin.proto3.Span.trace_id', index=0,
+ number=1, type=12, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='parent_id', full_name='zipkin.proto3.Span.parent_id', index=1,
+ number=2, type=12, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='id', full_name='zipkin.proto3.Span.id', index=2,
+ number=3, type=12, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='kind', full_name='zipkin.proto3.Span.kind', index=3,
+ number=4, type=14, cpp_type=8, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='name', full_name='zipkin.proto3.Span.name', index=4,
+ number=5, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"".decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='timestamp', full_name='zipkin.proto3.Span.timestamp', index=5,
+ number=6, type=6, cpp_type=4, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='duration', full_name='zipkin.proto3.Span.duration', index=6,
+ number=7, type=4, cpp_type=4, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='local_endpoint', full_name='zipkin.proto3.Span.local_endpoint', index=7,
+ number=8, type=11, cpp_type=10, label=1,
+ has_default_value=False, default_value=None,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='remote_endpoint', full_name='zipkin.proto3.Span.remote_endpoint', index=8,
+ number=9, type=11, cpp_type=10, label=1,
+ has_default_value=False, default_value=None,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='annotations', full_name='zipkin.proto3.Span.annotations', index=9,
+ number=10, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='tags', full_name='zipkin.proto3.Span.tags', index=10,
+ number=11, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='debug', full_name='zipkin.proto3.Span.debug', index=11,
+ number=12, type=8, cpp_type=7, label=1,
+ has_default_value=False, default_value=False,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='shared', full_name='zipkin.proto3.Span.shared', index=12,
+ number=13, type=8, cpp_type=7, label=1,
+ has_default_value=False, default_value=False,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ ],
+ extensions=[
+ ],
+ nested_types=[_SPAN_TAGSENTRY, ],
+ enum_types=[
+ _SPAN_KIND,
+ ],
+ serialized_options=None,
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=32,
+ serialized_end=533,
+)
+
+
+_ENDPOINT = _descriptor.Descriptor(
+ name='Endpoint',
+ full_name='zipkin.proto3.Endpoint',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='service_name', full_name='zipkin.proto3.Endpoint.service_name', index=0,
+ number=1, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"".decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='ipv4', full_name='zipkin.proto3.Endpoint.ipv4', index=1,
+ number=2, type=12, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='ipv6', full_name='zipkin.proto3.Endpoint.ipv6', index=2,
+ number=3, type=12, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"",
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='port', full_name='zipkin.proto3.Endpoint.port', index=3,
+ number=4, type=5, cpp_type=1, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=None,
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=535,
+ serialized_end=609,
+)
+
+
+_ANNOTATION = _descriptor.Descriptor(
+ name='Annotation',
+ full_name='zipkin.proto3.Annotation',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='timestamp', full_name='zipkin.proto3.Annotation.timestamp', index=0,
+ number=1, type=6, cpp_type=4, label=1,
+ has_default_value=False, default_value=0,
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ _descriptor.FieldDescriptor(
+ name='value', full_name='zipkin.proto3.Annotation.value', index=1,
+ number=2, type=9, cpp_type=9, label=1,
+ has_default_value=False, default_value=b"".decode('utf-8'),
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=None,
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=611,
+ serialized_end=657,
+)
+
+
+_LISTOFSPANS = _descriptor.Descriptor(
+ name='ListOfSpans',
+ full_name='zipkin.proto3.ListOfSpans',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ _descriptor.FieldDescriptor(
+ name='spans', full_name='zipkin.proto3.ListOfSpans.spans', index=0,
+ number=1, type=11, cpp_type=10, label=3,
+ has_default_value=False, default_value=[],
+ message_type=None, enum_type=None, containing_type=None,
+ is_extension=False, extension_scope=None,
+ serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=None,
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=659,
+ serialized_end=708,
+)
+
+
+_REPORTRESPONSE = _descriptor.Descriptor(
+ name='ReportResponse',
+ full_name='zipkin.proto3.ReportResponse',
+ filename=None,
+ file=DESCRIPTOR,
+ containing_type=None,
+ create_key=_descriptor._internal_create_key,
+ fields=[
+ ],
+ extensions=[
+ ],
+ nested_types=[],
+ enum_types=[
+ ],
+ serialized_options=None,
+ is_extendable=False,
+ syntax='proto3',
+ extension_ranges=[],
+ oneofs=[
+ ],
+ serialized_start=710,
+ serialized_end=726,
+)
+
+_SPAN_TAGSENTRY.containing_type = _SPAN
+_SPAN.fields_by_name['kind'].enum_type = _SPAN_KIND
+_SPAN.fields_by_name['local_endpoint'].message_type = _ENDPOINT
+_SPAN.fields_by_name['remote_endpoint'].message_type = _ENDPOINT
+_SPAN.fields_by_name['annotations'].message_type = _ANNOTATION
+_SPAN.fields_by_name['tags'].message_type = _SPAN_TAGSENTRY
+_SPAN_KIND.containing_type = _SPAN
+_LISTOFSPANS.fields_by_name['spans'].message_type = _SPAN
+DESCRIPTOR.message_types_by_name['Span'] = _SPAN
+DESCRIPTOR.message_types_by_name['Endpoint'] = _ENDPOINT
+DESCRIPTOR.message_types_by_name['Annotation'] = _ANNOTATION
+DESCRIPTOR.message_types_by_name['ListOfSpans'] = _LISTOFSPANS
+DESCRIPTOR.message_types_by_name['ReportResponse'] = _REPORTRESPONSE
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)
+
+Span = _reflection.GeneratedProtocolMessageType('Span', (_message.Message,), {
+
+ 'TagsEntry' : _reflection.GeneratedProtocolMessageType('TagsEntry', (_message.Message,), {
+ 'DESCRIPTOR' : _SPAN_TAGSENTRY,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.Span.TagsEntry)
+ })
+ ,
+ 'DESCRIPTOR' : _SPAN,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.Span)
+ })
+_sym_db.RegisterMessage(Span)
+_sym_db.RegisterMessage(Span.TagsEntry)
+
+Endpoint = _reflection.GeneratedProtocolMessageType('Endpoint', (_message.Message,), {
+ 'DESCRIPTOR' : _ENDPOINT,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.Endpoint)
+ })
+_sym_db.RegisterMessage(Endpoint)
+
+Annotation = _reflection.GeneratedProtocolMessageType('Annotation', (_message.Message,), {
+ 'DESCRIPTOR' : _ANNOTATION,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.Annotation)
+ })
+_sym_db.RegisterMessage(Annotation)
+
+ListOfSpans = _reflection.GeneratedProtocolMessageType('ListOfSpans', (_message.Message,), {
+ 'DESCRIPTOR' : _LISTOFSPANS,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.ListOfSpans)
+ })
+_sym_db.RegisterMessage(ListOfSpans)
+
+ReportResponse = _reflection.GeneratedProtocolMessageType('ReportResponse', (_message.Message,), {
+ 'DESCRIPTOR' : _REPORTRESPONSE,
+ '__module__' : 'zipkin_pb2'
+ # @@protoc_insertion_point(class_scope:zipkin.proto3.ReportResponse)
+ })
+_sym_db.RegisterMessage(ReportResponse)
+
+
+DESCRIPTOR._options = None
+_SPAN_TAGSENTRY._options = None
+
+_SPANSERVICE = _descriptor.ServiceDescriptor(
+ name='SpanService',
+ full_name='zipkin.proto3.SpanService',
+ file=DESCRIPTOR,
+ index=0,
+ serialized_options=None,
+ create_key=_descriptor._internal_create_key,
+ serialized_start=728,
+ serialized_end=812,
+ methods=[
+ _descriptor.MethodDescriptor(
+ name='Report',
+ full_name='zipkin.proto3.SpanService.Report',
+ index=0,
+ containing_service=None,
+ input_type=_LISTOFSPANS,
+ output_type=_REPORTRESPONSE,
+ serialized_options=None,
+ create_key=_descriptor._internal_create_key,
+ ),
+])
+_sym_db.RegisterServiceDescriptor(_SPANSERVICE)
+
+DESCRIPTOR.services_by_name['SpanService'] = _SPANSERVICE
+
+# @@protoc_insertion_point(module_scope)
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.pyi b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.pyi
new file mode 100644
index 0000000000..1624d7d595
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen/zipkin_pb2.pyi
@@ -0,0 +1,214 @@
+# @generated by generate_proto_mypy_stubs.py. Do not edit!
+import sys
+from google.protobuf.descriptor import (
+ Descriptor as google___protobuf___descriptor___Descriptor,
+ EnumDescriptor as google___protobuf___descriptor___EnumDescriptor,
+ FileDescriptor as google___protobuf___descriptor___FileDescriptor,
+)
+
+from google.protobuf.internal.containers import (
+ RepeatedCompositeFieldContainer as google___protobuf___internal___containers___RepeatedCompositeFieldContainer,
+)
+
+from google.protobuf.message import (
+ Message as google___protobuf___message___Message,
+)
+
+from typing import (
+ Iterable as typing___Iterable,
+ List as typing___List,
+ Mapping as typing___Mapping,
+ MutableMapping as typing___MutableMapping,
+ NewType as typing___NewType,
+ Optional as typing___Optional,
+ Text as typing___Text,
+ Tuple as typing___Tuple,
+ Union as typing___Union,
+ cast as typing___cast,
+)
+
+from typing_extensions import (
+ Literal as typing_extensions___Literal,
+)
+
+
+builtin___bool = bool
+builtin___bytes = bytes
+builtin___float = float
+builtin___int = int
+builtin___str = str
+if sys.version_info < (3,):
+ builtin___buffer = buffer
+ builtin___unicode = unicode
+
+
+DESCRIPTOR: google___protobuf___descriptor___FileDescriptor = ...
+
+class Span(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+ KindValue = typing___NewType('KindValue', builtin___int)
+ type___KindValue = KindValue
+ class Kind(object):
+ DESCRIPTOR: google___protobuf___descriptor___EnumDescriptor = ...
+ @classmethod
+ def Name(cls, number: builtin___int) -> builtin___str: ...
+ @classmethod
+ def Value(cls, name: builtin___str) -> Span.KindValue: ...
+ @classmethod
+ def keys(cls) -> typing___List[builtin___str]: ...
+ @classmethod
+ def values(cls) -> typing___List[Span.KindValue]: ...
+ @classmethod
+ def items(cls) -> typing___List[typing___Tuple[builtin___str, Span.KindValue]]: ...
+ SPAN_KIND_UNSPECIFIED = typing___cast(Span.KindValue, 0)
+ CLIENT = typing___cast(Span.KindValue, 1)
+ SERVER = typing___cast(Span.KindValue, 2)
+ PRODUCER = typing___cast(Span.KindValue, 3)
+ CONSUMER = typing___cast(Span.KindValue, 4)
+ SPAN_KIND_UNSPECIFIED = typing___cast(Span.KindValue, 0)
+ CLIENT = typing___cast(Span.KindValue, 1)
+ SERVER = typing___cast(Span.KindValue, 2)
+ PRODUCER = typing___cast(Span.KindValue, 3)
+ CONSUMER = typing___cast(Span.KindValue, 4)
+ type___Kind = Kind
+
+ class TagsEntry(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+ key: typing___Text = ...
+ value: typing___Text = ...
+
+ def __init__(self,
+ *,
+ key : typing___Optional[typing___Text] = None,
+ value : typing___Optional[typing___Text] = None,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> Span.TagsEntry: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> Span.TagsEntry: ...
+ def ClearField(self, field_name: typing_extensions___Literal[u"key",b"key",u"value",b"value"]) -> None: ...
+ type___TagsEntry = TagsEntry
+
+ trace_id: builtin___bytes = ...
+ parent_id: builtin___bytes = ...
+ id: builtin___bytes = ...
+ kind: type___Span.KindValue = ...
+ name: typing___Text = ...
+ timestamp: builtin___int = ...
+ duration: builtin___int = ...
+ debug: builtin___bool = ...
+ shared: builtin___bool = ...
+
+ @property
+ def local_endpoint(self) -> type___Endpoint: ...
+
+ @property
+ def remote_endpoint(self) -> type___Endpoint: ...
+
+ @property
+ def annotations(self) -> google___protobuf___internal___containers___RepeatedCompositeFieldContainer[type___Annotation]: ...
+
+ @property
+ def tags(self) -> typing___MutableMapping[typing___Text, typing___Text]: ...
+
+ def __init__(self,
+ *,
+ trace_id : typing___Optional[builtin___bytes] = None,
+ parent_id : typing___Optional[builtin___bytes] = None,
+ id : typing___Optional[builtin___bytes] = None,
+ kind : typing___Optional[type___Span.KindValue] = None,
+ name : typing___Optional[typing___Text] = None,
+ timestamp : typing___Optional[builtin___int] = None,
+ duration : typing___Optional[builtin___int] = None,
+ local_endpoint : typing___Optional[type___Endpoint] = None,
+ remote_endpoint : typing___Optional[type___Endpoint] = None,
+ annotations : typing___Optional[typing___Iterable[type___Annotation]] = None,
+ tags : typing___Optional[typing___Mapping[typing___Text, typing___Text]] = None,
+ debug : typing___Optional[builtin___bool] = None,
+ shared : typing___Optional[builtin___bool] = None,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> Span: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> Span: ...
+ def HasField(self, field_name: typing_extensions___Literal[u"local_endpoint",b"local_endpoint",u"remote_endpoint",b"remote_endpoint"]) -> builtin___bool: ...
+ def ClearField(self, field_name: typing_extensions___Literal[u"annotations",b"annotations",u"debug",b"debug",u"duration",b"duration",u"id",b"id",u"kind",b"kind",u"local_endpoint",b"local_endpoint",u"name",b"name",u"parent_id",b"parent_id",u"remote_endpoint",b"remote_endpoint",u"shared",b"shared",u"tags",b"tags",u"timestamp",b"timestamp",u"trace_id",b"trace_id"]) -> None: ...
+type___Span = Span
+
+class Endpoint(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+ service_name: typing___Text = ...
+ ipv4: builtin___bytes = ...
+ ipv6: builtin___bytes = ...
+ port: builtin___int = ...
+
+ def __init__(self,
+ *,
+ service_name : typing___Optional[typing___Text] = None,
+ ipv4 : typing___Optional[builtin___bytes] = None,
+ ipv6 : typing___Optional[builtin___bytes] = None,
+ port : typing___Optional[builtin___int] = None,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> Endpoint: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> Endpoint: ...
+ def ClearField(self, field_name: typing_extensions___Literal[u"ipv4",b"ipv4",u"ipv6",b"ipv6",u"port",b"port",u"service_name",b"service_name"]) -> None: ...
+type___Endpoint = Endpoint
+
+class Annotation(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+ timestamp: builtin___int = ...
+ value: typing___Text = ...
+
+ def __init__(self,
+ *,
+ timestamp : typing___Optional[builtin___int] = None,
+ value : typing___Optional[typing___Text] = None,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> Annotation: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> Annotation: ...
+ def ClearField(self, field_name: typing_extensions___Literal[u"timestamp",b"timestamp",u"value",b"value"]) -> None: ...
+type___Annotation = Annotation
+
+class ListOfSpans(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+
+ @property
+ def spans(self) -> google___protobuf___internal___containers___RepeatedCompositeFieldContainer[type___Span]: ...
+
+ def __init__(self,
+ *,
+ spans : typing___Optional[typing___Iterable[type___Span]] = None,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> ListOfSpans: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> ListOfSpans: ...
+ def ClearField(self, field_name: typing_extensions___Literal[u"spans",b"spans"]) -> None: ...
+type___ListOfSpans = ListOfSpans
+
+class ReportResponse(google___protobuf___message___Message):
+ DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
+
+ def __init__(self,
+ ) -> None: ...
+ if sys.version_info >= (3,):
+ @classmethod
+ def FromString(cls, s: builtin___bytes) -> ReportResponse: ...
+ else:
+ @classmethod
+ def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> ReportResponse: ...
+type___ReportResponse = ReportResponse
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/version.py b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/tests/__init__.py b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/__init__.py b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/common_tests.py b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/common_tests.py
new file mode 100644
index 0000000000..ada00c7c8e
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/common_tests.py
@@ -0,0 +1,479 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import abc
+import unittest
+from typing import Dict, List, Tuple
+
+from opentelemetry import trace as trace_api
+from opentelemetry.exporter.zipkin.encoder import (
+ DEFAULT_MAX_TAG_VALUE_LENGTH,
+ Encoder,
+)
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.trace import TraceFlags
+from opentelemetry.trace.status import Status, StatusCode
+
+TEST_SERVICE_NAME = "test_service"
+
+
+# pylint: disable=protected-access
+class CommonEncoderTestCases:
+ class CommonEncoderTest(unittest.TestCase):
+ @staticmethod
+ @abc.abstractmethod
+ def get_encoder(*args, **kwargs) -> Encoder:
+ pass
+
+ @classmethod
+ def get_encoder_default(cls) -> Encoder:
+ return cls.get_encoder()
+
+ @abc.abstractmethod
+ def test_encode_trace_id(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_span_id(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_local_endpoint_default(self):
+ pass
+
+ @abc.abstractmethod
+ def test_encode_local_endpoint_explicits(self):
+ pass
+
+ @abc.abstractmethod
+ def _test_encode_max_tag_length(self, max_tag_value_length: int):
+ pass
+
+ def test_encode_max_tag_length_2(self):
+ self._test_encode_max_tag_length(2)
+
+ def test_encode_max_tag_length_5(self):
+ self._test_encode_max_tag_length(5)
+
+ def test_encode_max_tag_length_9(self):
+ self._test_encode_max_tag_length(9)
+
+ def test_encode_max_tag_length_10(self):
+ self._test_encode_max_tag_length(10)
+
+ def test_encode_max_tag_length_11(self):
+ self._test_encode_max_tag_length(11)
+
+ def test_encode_max_tag_length_128(self):
+ self._test_encode_max_tag_length(128)
+
+ def test_constructor_default(self):
+ encoder = self.get_encoder()
+
+ self.assertEqual(
+ DEFAULT_MAX_TAG_VALUE_LENGTH, encoder.max_tag_value_length
+ )
+
+ def test_constructor_max_tag_value_length(self):
+ max_tag_value_length = 123456
+ encoder = self.get_encoder(max_tag_value_length)
+ self.assertEqual(
+ max_tag_value_length, encoder.max_tag_value_length
+ )
+
+ def test_nsec_to_usec_round(self):
+ base_time_nsec = 683647322 * 10**9
+ for nsec in (
+ base_time_nsec,
+ base_time_nsec + 150 * 10**6,
+ base_time_nsec + 300 * 10**6,
+ base_time_nsec + 400 * 10**6,
+ ):
+ self.assertEqual(
+ (nsec + 500) // 10**3,
+ self.get_encoder_default()._nsec_to_usec_round(nsec),
+ )
+
+ def test_encode_debug(self):
+ self.assertFalse(
+ self.get_encoder_default()._encode_debug(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.DEFAULT),
+ )
+ )
+ )
+ self.assertTrue(
+ self.get_encoder_default()._encode_debug(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ )
+ )
+ )
+
+ def test_get_parent_id_from_span(self):
+ parent_id = 0x00000000DEADBEF0
+ self.assertEqual(
+ parent_id,
+ self.get_encoder_default()._get_parent_id(
+ trace._Span(
+ name="test-span",
+ context=trace_api.SpanContext(
+ 0x000000000000000000000000DEADBEEF,
+ 0x04BF92DEEFC58C92,
+ is_remote=False,
+ ),
+ parent=trace_api.SpanContext(
+ 0x0000000000000000000000AADEADBEEF,
+ parent_id,
+ is_remote=False,
+ ),
+ )
+ ),
+ )
+
+ def test_get_parent_id_from_span_context(self):
+ parent_id = 0x00000000DEADBEF0
+ self.assertEqual(
+ parent_id,
+ self.get_encoder_default()._get_parent_id(
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=parent_id,
+ is_remote=False,
+ ),
+ ),
+ )
+
+ @staticmethod
+ def get_data_for_max_tag_length_test(
+ max_tag_length: int,
+ ) -> Tuple[trace._Span, Dict]:
+ start_time = 683647322 * 10**9 # in ns
+ duration = 50 * 10**6
+ end_time = start_time + duration
+
+ span = trace._Span(
+ name=TEST_SERVICE_NAME,
+ context=trace_api.SpanContext(
+ 0x0E0C63257DE34C926F9EFCD03927272E,
+ 0x04BF92DEEFC58C92,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ resource=trace.Resource({}),
+ )
+ span.start(start_time=start_time)
+ span.set_attribute("string1", "v" * 500)
+ span.set_attribute("string2", "v" * 50)
+ span.set_attribute("list1", ["a"] * 25)
+ span.set_attribute("list2", ["a"] * 10)
+ span.set_attribute("list3", [2] * 25)
+ span.set_attribute("list4", [2] * 10)
+ span.set_attribute("list5", [True] * 25)
+ span.set_attribute("list6", [True] * 10)
+ span.set_attribute("tuple1", ("a",) * 25)
+ span.set_attribute("tuple2", ("a",) * 10)
+ span.set_attribute("tuple3", (2,) * 25)
+ span.set_attribute("tuple4", (2,) * 10)
+ span.set_attribute("tuple5", (True,) * 25)
+ span.set_attribute("tuple6", (True,) * 10)
+ span.set_attribute("range1", range(0, 25))
+ span.set_attribute("range2", range(0, 10))
+ span.set_attribute("empty_list", [])
+ span.set_attribute("none_list", ["hello", None, "world"])
+ span.end(end_time=end_time)
+
+ expected_outputs = {
+ 2: {
+ "string1": "vv",
+ "string2": "vv",
+ "list1": "[]",
+ "list2": "[]",
+ "list3": "[]",
+ "list4": "[]",
+ "list5": "[]",
+ "list6": "[]",
+ "tuple1": "[]",
+ "tuple2": "[]",
+ "tuple3": "[]",
+ "tuple4": "[]",
+ "tuple5": "[]",
+ "tuple6": "[]",
+ "range1": "[]",
+ "range2": "[]",
+ "empty_list": "[]",
+ "none_list": "[]",
+ },
+ 5: {
+ "string1": "vvvvv",
+ "string2": "vvvvv",
+ "list1": '["a"]',
+ "list2": '["a"]',
+ "list3": '["2"]',
+ "list4": '["2"]',
+ "list5": "[]",
+ "list6": "[]",
+ "tuple1": '["a"]',
+ "tuple2": '["a"]',
+ "tuple3": '["2"]',
+ "tuple4": '["2"]',
+ "tuple5": "[]",
+ "tuple6": "[]",
+ "range1": '["0"]',
+ "range2": '["0"]',
+ "empty_list": "[]",
+ "none_list": "[]",
+ },
+ 9: {
+ "string1": "vvvvvvvvv",
+ "string2": "vvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 10: {
+ "string1": "vvvvvvvvvv",
+ "string2": "vvvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 11: {
+ "string1": "vvvvvvvvvvv",
+ "string2": "vvvvvvvvvvv",
+ "list1": '["a","a"]',
+ "list2": '["a","a"]',
+ "list3": '["2","2"]',
+ "list4": '["2","2"]',
+ "list5": '["true"]',
+ "list6": '["true"]',
+ "tuple1": '["a","a"]',
+ "tuple2": '["a","a"]',
+ "tuple3": '["2","2"]',
+ "tuple4": '["2","2"]',
+ "tuple5": '["true"]',
+ "tuple6": '["true"]',
+ "range1": '["0","1"]',
+ "range2": '["0","1"]',
+ "empty_list": "[]",
+ "none_list": '["hello"]',
+ },
+ 128: {
+ "string1": "v" * 128,
+ "string2": "v" * 50,
+ "list1": '["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]',
+ "list2": '["a","a","a","a","a","a","a","a","a","a"]',
+ "list3": '["2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2"]',
+ "list4": '["2","2","2","2","2","2","2","2","2","2"]',
+ "list5": '["true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true"]',
+ "list6": '["true","true","true","true","true","true","true","true","true","true"]',
+ "tuple1": '["a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a","a"]',
+ "tuple2": '["a","a","a","a","a","a","a","a","a","a"]',
+ "tuple3": '["2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2"]',
+ "tuple4": '["2","2","2","2","2","2","2","2","2","2"]',
+ "tuple5": '["true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true","true"]',
+ "tuple6": '["true","true","true","true","true","true","true","true","true","true"]',
+ "range1": '["0","1","2","3","4","5","6","7","8","9","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24"]',
+ "range2": '["0","1","2","3","4","5","6","7","8","9"]',
+ "empty_list": "[]",
+ "none_list": '["hello",null,"world"]',
+ },
+ }
+
+ return span, expected_outputs[max_tag_length]
+
+ @staticmethod
+ def get_exhaustive_otel_span_list() -> List[trace._Span]:
+ trace_id = 0x6E0C63257DE34C926F9EFCD03927272E
+
+ base_time = 683647322 * 10**9 # in ns
+ start_times = (
+ base_time,
+ base_time + 150 * 10**6,
+ base_time + 300 * 10**6,
+ base_time + 400 * 10**6,
+ )
+ end_times = (
+ start_times[0] + (50 * 10**6),
+ start_times[1] + (100 * 10**6),
+ start_times[2] + (200 * 10**6),
+ start_times[3] + (300 * 10**6),
+ )
+
+ parent_span_context = trace_api.SpanContext(
+ trace_id, 0x1111111111111111, is_remote=False
+ )
+
+ other_context = trace_api.SpanContext(
+ trace_id, 0x2222222222222222, is_remote=False
+ )
+
+ span1 = trace._Span(
+ name="test-span-1",
+ context=trace_api.SpanContext(
+ trace_id,
+ 0x34BF92DEEFC58C92,
+ is_remote=False,
+ trace_flags=TraceFlags(TraceFlags.SAMPLED),
+ ),
+ parent=parent_span_context,
+ events=(
+ trace.Event(
+ name="event0",
+ timestamp=base_time + 50 * 10**6,
+ attributes={
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ },
+ ),
+ ),
+ links=(
+ trace_api.Link(
+ context=other_context, attributes={"key_bool": True}
+ ),
+ ),
+ resource=trace.Resource({}),
+ )
+ span1.start(start_time=start_times[0])
+ span1.set_attribute("key_bool", False)
+ span1.set_attribute("key_string", "hello_world")
+ span1.set_attribute("key_float", 111.22)
+ span1.set_status(Status(StatusCode.OK))
+ span1.end(end_time=end_times[0])
+
+ span2 = trace._Span(
+ name="test-span-2",
+ context=parent_span_context,
+ parent=None,
+ resource=trace.Resource(
+ attributes={"key_resource": "some_resource"}
+ ),
+ )
+ span2.start(start_time=start_times[1])
+ span2.set_status(Status(StatusCode.ERROR, "Example description"))
+ span2.end(end_time=end_times[1])
+
+ span3 = trace._Span(
+ name="test-span-3",
+ context=other_context,
+ parent=None,
+ resource=trace.Resource(
+ attributes={"key_resource": "some_resource"}
+ ),
+ )
+ span3.start(start_time=start_times[2])
+ span3.set_attribute("key_string", "hello_world")
+ span3.end(end_time=end_times[2])
+
+ span4 = trace._Span(
+ name="test-span-3",
+ context=other_context,
+ parent=None,
+ resource=trace.Resource({}),
+ instrumentation_scope=InstrumentationScope(
+ name="name", version="version"
+ ),
+ )
+ span4.start(start_time=start_times[3])
+ span4.end(end_time=end_times[3])
+
+ return [span1, span2, span3, span4]
+
+ # pylint: disable=W0223
+ class CommonJsonEncoderTest(CommonEncoderTest, abc.ABC):
+ def test_encode_trace_id(self):
+ for trace_id in (1, 1024, 2**32, 2**64, 2**65):
+ self.assertEqual(
+ format(trace_id, "032x"),
+ self.get_encoder_default()._encode_trace_id(trace_id),
+ )
+
+ def test_encode_span_id(self):
+ for span_id in (1, 1024, 2**8, 2**16, 2**32, 2**64):
+ self.assertEqual(
+ format(span_id, "016x"),
+ self.get_encoder_default()._encode_span_id(span_id),
+ )
+
+ def test_encode_local_endpoint_default(self):
+ self.assertEqual(
+ self.get_encoder_default()._encode_local_endpoint(
+ NodeEndpoint()
+ ),
+ {"serviceName": TEST_SERVICE_NAME},
+ )
+
+ def test_encode_local_endpoint_explicits(self):
+ ipv4 = "192.168.0.1"
+ ipv6 = "2001:db8::c001"
+ port = 414120
+ self.assertEqual(
+ self.get_encoder_default()._encode_local_endpoint(
+ NodeEndpoint(ipv4, ipv6, port)
+ ),
+ {
+ "serviceName": TEST_SERVICE_NAME,
+ "ipv4": ipv4,
+ "ipv6": ipv6,
+ "port": port,
+ },
+ )
+
+ @staticmethod
+ def pop_and_sort(source_list, source_index, sort_key):
+ """
+ Convenience method that pops ``source_index`` from ``source_list`` (a
+ mapping, so a missing key yields None), sorts the popped sequence by
+ ``sort_key``, and returns it.
+ """
+ popped_item = source_list.pop(source_index, None)
+ if popped_item is not None:
+ popped_item = sorted(popped_item, key=lambda x: x[sort_key])
+ return popped_item
+
+ def assert_equal_encoded_spans(self, expected_spans, actual_spans):
+ self.assertEqual(expected_spans, actual_spans)
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/test_v2_protobuf.py b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/test_v2_protobuf.py
new file mode 100644
index 0000000000..2f2c894e4a
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/encoder/test_v2_protobuf.py
@@ -0,0 +1,263 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import ipaddress
+import json
+
+from opentelemetry.exporter.zipkin.encoder import (
+ _SCOPE_NAME_KEY,
+ _SCOPE_VERSION_KEY,
+ NAME_KEY,
+ VERSION_KEY,
+)
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.exporter.zipkin.proto.http.v2 import ProtobufEncoder
+from opentelemetry.exporter.zipkin.proto.http.v2.gen import zipkin_pb2
+from opentelemetry.test.spantestutil import (
+ get_span_with_dropped_attributes_events_links,
+)
+from opentelemetry.trace import SpanKind
+
+from .common_tests import ( # pylint: disable=import-error
+ TEST_SERVICE_NAME,
+ CommonEncoderTestCases,
+)
+
+
+# pylint: disable=protected-access
+class TestProtobufEncoder(CommonEncoderTestCases.CommonEncoderTest):
+ @staticmethod
+ def get_encoder(*args, **kwargs) -> ProtobufEncoder:
+ return ProtobufEncoder(*args, **kwargs)
+
+ def test_encode_trace_id(self):
+ for trace_id in (1, 1024, 2**32, 2**64, 2**127):
+ self.assertEqual(
+ self.get_encoder_default()._encode_trace_id(trace_id),
+ trace_id.to_bytes(length=16, byteorder="big", signed=False),
+ )
+
+ def test_encode_span_id(self):
+ for span_id in (1, 1024, 2**8, 2**16, 2**32, 2**63):
+ self.assertEqual(
+ self.get_encoder_default()._encode_span_id(span_id),
+ span_id.to_bytes(length=8, byteorder="big", signed=False),
+ )
+
+ def test_encode_local_endpoint_default(self):
+ self.assertEqual(
+ ProtobufEncoder()._encode_local_endpoint(NodeEndpoint()),
+ zipkin_pb2.Endpoint(service_name=TEST_SERVICE_NAME),
+ )
+
+ def test_encode_local_endpoint_explicits(self):
+ ipv4 = "192.168.0.1"
+ ipv6 = "2001:db8::c001"
+ port = 414120
+ self.assertEqual(
+ ProtobufEncoder()._encode_local_endpoint(
+ NodeEndpoint(ipv4, ipv6, port)
+ ),
+ zipkin_pb2.Endpoint(
+ service_name=TEST_SERVICE_NAME,
+ ipv4=ipaddress.ip_address(ipv4).packed,
+ ipv6=ipaddress.ip_address(ipv6).packed,
+ port=port,
+ ),
+ )
+
+ def test_encode(self):
+ local_endpoint = zipkin_pb2.Endpoint(service_name=TEST_SERVICE_NAME)
+ span_kind = ProtobufEncoder.SPAN_KIND_MAP[SpanKind.INTERNAL]
+
+ otel_spans = self.get_exhaustive_otel_span_list()
+ trace_id = ProtobufEncoder._encode_trace_id(
+ otel_spans[0].context.trace_id
+ )
+ expected_output = zipkin_pb2.ListOfSpans(
+ spans=[
+ zipkin_pb2.Span(
+ trace_id=trace_id,
+ id=ProtobufEncoder._encode_span_id(
+ otel_spans[0].context.span_id
+ ),
+ name=otel_spans[0].name,
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[0].start_time
+ ),
+ duration=(
+ ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[0].end_time - otel_spans[0].start_time
+ )
+ ),
+ local_endpoint=local_endpoint,
+ kind=span_kind,
+ tags={
+ "key_bool": "false",
+ "key_string": "hello_world",
+ "key_float": "111.22",
+ "otel.status_code": "OK",
+ },
+ debug=True,
+ parent_id=ProtobufEncoder._encode_span_id(
+ otel_spans[0].parent.span_id
+ ),
+ annotations=[
+ zipkin_pb2.Annotation(
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[0].events[0].timestamp
+ ),
+ value=json.dumps(
+ {
+ "event0": {
+ "annotation_bool": True,
+ "annotation_string": "annotation_test",
+ "key_float": 0.3,
+ }
+ },
+ sort_keys=True,
+ ),
+ ),
+ ],
+ ),
+ zipkin_pb2.Span(
+ trace_id=trace_id,
+ id=ProtobufEncoder._encode_span_id(
+ otel_spans[1].context.span_id
+ ),
+ name=otel_spans[1].name,
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[1].start_time
+ ),
+ duration=(
+ ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[1].end_time - otel_spans[1].start_time
+ )
+ ),
+ local_endpoint=local_endpoint,
+ kind=span_kind,
+ tags={
+ "key_resource": "some_resource",
+ "otel.status_code": "ERROR",
+ "error": "Example description",
+ },
+ debug=False,
+ ),
+ zipkin_pb2.Span(
+ trace_id=trace_id,
+ id=ProtobufEncoder._encode_span_id(
+ otel_spans[2].context.span_id
+ ),
+ name=otel_spans[2].name,
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[2].start_time
+ ),
+ duration=(
+ ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[2].end_time - otel_spans[2].start_time
+ )
+ ),
+ local_endpoint=local_endpoint,
+ kind=span_kind,
+ tags={
+ "key_string": "hello_world",
+ "key_resource": "some_resource",
+ },
+ debug=False,
+ ),
+ zipkin_pb2.Span(
+ trace_id=trace_id,
+ id=ProtobufEncoder._encode_span_id(
+ otel_spans[3].context.span_id
+ ),
+ name=otel_spans[3].name,
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[3].start_time
+ ),
+ duration=(
+ ProtobufEncoder._nsec_to_usec_round(
+ otel_spans[3].end_time - otel_spans[3].start_time
+ )
+ ),
+ local_endpoint=local_endpoint,
+ kind=span_kind,
+ tags={
+ NAME_KEY: "name",
+ VERSION_KEY: "version",
+ _SCOPE_NAME_KEY: "name",
+ _SCOPE_VERSION_KEY: "version",
+ },
+ debug=False,
+ ),
+ ],
+ )
+
+ actual_output = zipkin_pb2.ListOfSpans.FromString(
+ ProtobufEncoder().serialize(otel_spans, NodeEndpoint())
+ )
+
+ self.assertEqual(actual_output, expected_output)
+
+ def _test_encode_max_tag_length(self, max_tag_value_length: int):
+ otel_span, expected_tag_output = self.get_data_for_max_tag_length_test(
+ max_tag_value_length
+ )
+ service_name = otel_span.name
+
+ expected_output = zipkin_pb2.ListOfSpans(
+ spans=[
+ zipkin_pb2.Span(
+ trace_id=ProtobufEncoder._encode_trace_id(
+ otel_span.context.trace_id
+ ),
+ id=ProtobufEncoder._encode_span_id(
+ otel_span.context.span_id
+ ),
+ name=service_name,
+ timestamp=ProtobufEncoder._nsec_to_usec_round(
+ otel_span.start_time
+ ),
+ duration=ProtobufEncoder._nsec_to_usec_round(
+ otel_span.end_time - otel_span.start_time
+ ),
+ local_endpoint=zipkin_pb2.Endpoint(
+ service_name=service_name
+ ),
+ kind=ProtobufEncoder.SPAN_KIND_MAP[SpanKind.INTERNAL],
+ tags=expected_tag_output,
+ annotations=None,
+ debug=True,
+ )
+ ]
+ )
+
+ actual_output = zipkin_pb2.ListOfSpans.FromString(
+ ProtobufEncoder(max_tag_value_length).serialize(
+ [otel_span], NodeEndpoint()
+ )
+ )
+
+ self.assertEqual(actual_output, expected_output)
+
+ def test_dropped_span_attributes(self):
+ otel_span = get_span_with_dropped_attributes_events_links()
+ # pylint: disable=no-member
+ tags = (
+ ProtobufEncoder()
+ ._encode_span(otel_span, zipkin_pb2.Endpoint())
+ .tags
+ )
+
+ self.assertEqual("1", tags["otel.dropped_links_count"])
+ self.assertEqual("2", tags["otel.dropped_attributes_count"])
+ self.assertEqual("3", tags["otel.dropped_events_count"])
diff --git a/exporter/opentelemetry-exporter-zipkin-proto-http/tests/test_zipkin_exporter.py b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/test_zipkin_exporter.py
new file mode 100644
index 0000000000..8a3c055437
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin-proto-http/tests/test_zipkin_exporter.py
@@ -0,0 +1,228 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import ipaddress
+import os
+import unittest
+from unittest.mock import patch
+
+import requests
+
+from opentelemetry import trace
+from opentelemetry.exporter.zipkin.node_endpoint import NodeEndpoint
+from opentelemetry.exporter.zipkin.proto.http import (
+ DEFAULT_ENDPOINT,
+ ZipkinExporter,
+)
+from opentelemetry.exporter.zipkin.proto.http.v2 import ProtobufEncoder
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPORTER_ZIPKIN_ENDPOINT,
+ OTEL_EXPORTER_ZIPKIN_TIMEOUT,
+)
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+from opentelemetry.sdk.trace import TracerProvider, _Span
+from opentelemetry.sdk.trace.export import SpanExportResult
+
+TEST_SERVICE_NAME = "test_service"
+
+
+class MockResponse:
+ def __init__(self, status_code):
+ self.status_code = status_code
+ self.text = status_code
+
+
+class TestZipkinExporter(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ trace.set_tracer_provider(
+ TracerProvider(
+ resource=Resource({SERVICE_NAME: TEST_SERVICE_NAME})
+ )
+ )
+
+ def tearDown(self):
+ os.environ.pop(OTEL_EXPORTER_ZIPKIN_ENDPOINT, None)
+ os.environ.pop(OTEL_EXPORTER_ZIPKIN_TIMEOUT, None)
+
+ def test_constructor_default(self):
+ exporter = ZipkinExporter()
+ self.assertIsInstance(exporter.encoder, ProtobufEncoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, DEFAULT_ENDPOINT)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+
+ def test_constructor_env_vars(self):
+ os_endpoint = "https://foo:9911/path"
+ os.environ[OTEL_EXPORTER_ZIPKIN_ENDPOINT] = os_endpoint
+ os.environ[OTEL_EXPORTER_ZIPKIN_TIMEOUT] = "15"
+
+ exporter = ZipkinExporter()
+
+ self.assertEqual(exporter.endpoint, os_endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+ self.assertEqual(exporter.timeout, 15)
+
+ def test_constructor_protocol_endpoint(self):
+ """Test the constructor for the common usage of providing the
+ protocol and endpoint arguments."""
+ endpoint = "https://opentelemetry.io:15875/myapi/traces?format=zipkin"
+
+ exporter = ZipkinExporter(endpoint)
+
+ self.assertIsInstance(exporter.encoder, ProtobufEncoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(exporter.local_node.ipv4, None)
+ self.assertEqual(exporter.local_node.ipv6, None)
+ self.assertEqual(exporter.local_node.port, None)
+
+ def test_constructor_all_params_and_env_vars(self):
+ """Test the scenario where all params are provided and all OS env
+ vars are set. Explicit params should take precedence.
+ """
+ os_endpoint = "https://os.env.param:9911/path"
+ os.environ[OTEL_EXPORTER_ZIPKIN_ENDPOINT] = os_endpoint
+ os.environ[OTEL_EXPORTER_ZIPKIN_TIMEOUT] = "15"
+
+ constructor_param_endpoint = "https://constructor.param:9911/path"
+ local_node_ipv4 = "192.168.0.1"
+ local_node_ipv6 = "2001:db8::1000"
+ local_node_port = 30301
+ max_tag_value_length = 56
+ timeout_param = 20
+ session_param = requests.Session()
+
+ exporter = ZipkinExporter(
+ constructor_param_endpoint,
+ local_node_ipv4,
+ local_node_ipv6,
+ local_node_port,
+ max_tag_value_length,
+ timeout_param,
+ session_param,
+ )
+
+ self.assertIsInstance(exporter.encoder, ProtobufEncoder)
+ self.assertIsInstance(exporter.session, requests.Session)
+ self.assertEqual(exporter.endpoint, constructor_param_endpoint)
+ self.assertEqual(exporter.local_node.service_name, TEST_SERVICE_NAME)
+ self.assertEqual(
+ exporter.local_node.ipv4, ipaddress.IPv4Address(local_node_ipv4)
+ )
+ self.assertEqual(
+ exporter.local_node.ipv6, ipaddress.IPv6Address(local_node_ipv6)
+ )
+ self.assertEqual(exporter.local_node.port, local_node_port)
+ # Assert timeout passed in constructor is prioritized over env
+ # when both are set.
+ self.assertEqual(exporter.timeout, 20)
+
+ @patch("requests.Session.post")
+ def test_export_success(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+
+ @patch("requests.Session.post")
+ def test_export_invalid_response(self, mock_post):
+ mock_post.return_value = MockResponse(404)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.FAILURE, status)
+
+ @patch("requests.Session.post")
+ def test_export_span_service_name(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ resource = Resource.create({SERVICE_NAME: "test"})
+ context = trace.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ )
+ span = _Span("test_span", context=context, resource=resource)
+ span.start()
+ span.end()
+ exporter = ZipkinExporter()
+ exporter.export([span])
+ self.assertEqual(exporter.local_node.service_name, "test")
+
+ @patch("requests.Session.post")
+ def test_export_shutdown(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter()
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+
+ exporter.shutdown()
+ # Any call to .export() post shutdown should return failure
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.FAILURE, status)
+
+ @patch("requests.Session.post")
+ def test_export_timeout(self, mock_post):
+ mock_post.return_value = MockResponse(200)
+ spans = []
+ exporter = ZipkinExporter(timeout=2)
+ status = exporter.export(spans)
+ self.assertEqual(SpanExportResult.SUCCESS, status)
+ mock_post.assert_called_with(
+ url="http://localhost:9411/api/v2/spans", data=b"", timeout=2
+ )
+
+
+class TestZipkinNodeEndpoint(unittest.TestCase):
+ def test_constructor_default(self):
+ node_endpoint = NodeEndpoint()
+ self.assertEqual(node_endpoint.ipv4, None)
+ self.assertEqual(node_endpoint.ipv6, None)
+ self.assertEqual(node_endpoint.port, None)
+ self.assertEqual(node_endpoint.service_name, TEST_SERVICE_NAME)
+
+ def test_constructor_explicits(self):
+ ipv4 = "192.168.0.1"
+ ipv6 = "2001:db8::c001"
+ port = 414120
+ node_endpoint = NodeEndpoint(ipv4, ipv6, port)
+ self.assertEqual(node_endpoint.ipv4, ipaddress.IPv4Address(ipv4))
+ self.assertEqual(node_endpoint.ipv6, ipaddress.IPv6Address(ipv6))
+ self.assertEqual(node_endpoint.port, port)
+ self.assertEqual(node_endpoint.service_name, TEST_SERVICE_NAME)
+
+ def test_ipv4_invalid_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv4="invalid-ipv4-address")
+
+ def test_ipv4_passed_ipv6_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv4="2001:db8::c001")
+
+ def test_ipv6_invalid_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv6="invalid-ipv6-address")
+
+ def test_ipv6_passed_ipv4_raises_error(self):
+ with self.assertRaises(ValueError):
+ NodeEndpoint(ipv6="192.168.0.1")
diff --git a/exporter/opentelemetry-exporter-zipkin/LICENSE b/exporter/opentelemetry-exporter-zipkin/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/exporter/opentelemetry-exporter-zipkin/README.rst b/exporter/opentelemetry-exporter-zipkin/README.rst
new file mode 100644
index 0000000000..2445ca879b
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin/README.rst
@@ -0,0 +1,32 @@
+OpenTelemetry Zipkin Exporter
+=============================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-zipkin.svg
+ :target: https://pypi.org/project/opentelemetry-exporter-zipkin/
+
+This library is provided as a convenience to install all supported OpenTelemetry Zipkin Exporters. Currently it installs:
+
+* opentelemetry-exporter-zipkin-json
+* opentelemetry-exporter-zipkin-proto-http
+
+In the future, additional packages may be available:
+
+* opentelemetry-exporter-zipkin-thrift
+
+To avoid unnecessary dependencies, users should install the specific package once they've determined their
+preferred serialization method.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-exporter-zipkin
+
+
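+Usage
+-----
+
+A minimal configuration sketch (assuming a Zipkin collector reachable at the
+default ``http://localhost:9411/api/v2/spans`` endpoint; the variable names
+are illustrative)::
+
+    from opentelemetry import trace
+    from opentelemetry.exporter.zipkin.proto.http import ZipkinExporter
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    provider = TracerProvider()
+    provider.add_span_processor(BatchSpanProcessor(ZipkinExporter()))
+    trace.set_tracer_provider(provider)
+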
+References
+----------
+
+* `OpenTelemetry Zipkin Exporter <https://opentelemetry-python.readthedocs.io/en/latest/exporter/zipkin/zipkin.html>`_
+* `Zipkin <https://zipkin.io/>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/exporter/opentelemetry-exporter-zipkin/pyproject.toml b/exporter/opentelemetry-exporter-zipkin/pyproject.toml
new file mode 100644
index 0000000000..e2340769d5
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin/pyproject.toml
@@ -0,0 +1,52 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-exporter-zipkin"
+dynamic = ["version"]
+description = "Zipkin Span Exporters for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-exporter-zipkin-json == 1.23.0.dev",
+ "opentelemetry-exporter-zipkin-proto-http == 1.23.0.dev",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_traces_exporter]
+zipkin = "opentelemetry.exporter.zipkin.proto.http:ZipkinExporter"
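+# This entry point lets SDK auto-configuration select the exporter, e.g. via
+# OTEL_TRACES_EXPORTER=zipkin.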
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/exporter/opentelemetry-exporter-zipkin"
+
+[tool.hatch.version]
+path = "src/opentelemetry/exporter/zipkin/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/py.typed b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/version.py b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/exporter/opentelemetry-exporter-zipkin/tests/__init__.py b/exporter/opentelemetry-exporter-zipkin/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/exporter/opentelemetry-exporter-zipkin/tests/test_zipkin.py b/exporter/opentelemetry-exporter-zipkin/tests/test_zipkin.py
new file mode 100644
index 0000000000..fa9c3ecf48
--- /dev/null
+++ b/exporter/opentelemetry-exporter-zipkin/tests/test_zipkin.py
@@ -0,0 +1,27 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry.exporter.zipkin import json
+from opentelemetry.exporter.zipkin.proto import http
+
+
+class TestZipkinExporter(unittest.TestCase):
+ def test_constructors(self):
+        try:
+            json.ZipkinExporter()
+            http.ZipkinExporter()
+        except Exception as exc:  # pylint: disable=broad-except
+            self.fail(f"ZipkinExporter constructor raised {exc!r}")
diff --git a/gen-requirements.txt b/gen-requirements.txt
index 0f96f12a56..7bc889e68f 100644
--- a/gen-requirements.txt
+++ b/gen-requirements.txt
@@ -1,3 +1,4 @@
+<<<<<<< HEAD
-c dev-requirements.txt
astor==0.8.1
jinja2~=2.7
@@ -8,3 +9,11 @@ requests
tomli
tomli_w
hatch
+=======
+# This version of grpcio-tools ships with protoc 3.19.4 which appears to be compatible with
+# both protobuf 3.19.x and 4.x (see https://github.com/protocolbuffers/protobuf/issues/11123).
+# Bump this version with caution to preserve compatibility with protobuf 3.
+# https://github.com/open-telemetry/opentelemetry-python/blob/main/opentelemetry-proto/pyproject.toml#L28
+grpcio-tools==1.48.1
+mypy-protobuf~=3.0.0
+>>>>>>> upstream/main
diff --git a/git b/git
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/mypy-relaxed.ini b/mypy-relaxed.ini
new file mode 100644
index 0000000000..34538bdd79
--- /dev/null
+++ b/mypy-relaxed.ini
@@ -0,0 +1,22 @@
+; This configuration is mainly intended for unit tests and the like, so going
+; forward we will probably disable even more warnings here.
+[mypy]
+ disallow_any_unimported = True
+; disallow_any_expr = True
+ disallow_any_decorated = True
+; disallow_any_explicit = True
+ disallow_any_generics = True
+ disallow_subclassing_any = True
+ disallow_untyped_calls = True
+; disallow_untyped_defs = True
+ disallow_incomplete_defs = True
+ check_untyped_defs = True
+ disallow_untyped_decorators = True
+ allow_untyped_globals = True
+; Due to disabling some other warnings, unused ignores may occur.
+; warn_unused_ignores = True
+ warn_return_any = True
+ strict_equality = True
+
+[mypy-setuptools]
+ ignore_missing_imports = True
diff --git a/mypy.ini b/mypy.ini
new file mode 100644
index 0000000000..dca41f8c6b
--- /dev/null
+++ b/mypy.ini
@@ -0,0 +1,20 @@
+[mypy]
+ disallow_any_unimported = True
+ disallow_any_expr = True
+ disallow_any_decorated = True
+; disallow_any_explicit = True
+ disallow_any_generics = True
+ disallow_subclassing_any = True
+ disallow_untyped_calls = True
+ disallow_untyped_defs = True
+ disallow_incomplete_defs = True
+ check_untyped_defs = True
+ disallow_untyped_decorators = True
+ warn_unused_configs = True
+ warn_unused_ignores = True
+ warn_return_any = True
+ warn_redundant_casts = True
+ strict_equality = True
+ strict_optional = True
+ no_implicit_optional = True
+ no_implicit_reexport = True
\ No newline at end of file
diff --git a/opentelemetry-api/LICENSE b/opentelemetry-api/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/opentelemetry-api/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/opentelemetry-api/README.rst b/opentelemetry-api/README.rst
new file mode 100644
index 0000000000..130fbbf39d
--- /dev/null
+++ b/opentelemetry-api/README.rst
@@ -0,0 +1,19 @@
+OpenTelemetry Python API
+============================================================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-api.svg
+ :target: https://pypi.org/project/opentelemetry-api/
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-api
+
+References
+----------
+
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/opentelemetry-api/pyproject.toml b/opentelemetry-api/pyproject.toml
new file mode 100644
index 0000000000..adf9512cf0
--- /dev/null
+++ b/opentelemetry-api/pyproject.toml
@@ -0,0 +1,69 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-api"
+description = "OpenTelemetry Python API"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "Deprecated >= 1.2.6",
+ # FIXME This should be able to be removed after 3.12 is released if there is a reliable API
+ # in importlib.metadata.
+ "importlib-metadata >= 6.0, < 7.0",
+]
+dynamic = [
+ "version",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_context]
+contextvars_context = "opentelemetry.context.contextvars_context:ContextVarsRuntimeContext"
+
+[project.entry-points.opentelemetry_environment_variables]
+api = "opentelemetry.environment_variables"
+
+[project.entry-points.opentelemetry_meter_provider]
+default_meter_provider = "opentelemetry.metrics:NoOpMeterProvider"
+
+[project.entry-points.opentelemetry_propagator]
+baggage = "opentelemetry.baggage.propagation:W3CBaggagePropagator"
+tracecontext = "opentelemetry.trace.propagation.tracecontext:TraceContextTextMapPropagator"
+
+[project.entry-points.opentelemetry_tracer_provider]
+default_tracer_provider = "opentelemetry.trace:NoOpTracerProvider"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-api"
+
+[tool.hatch.version]
+path = "src/opentelemetry/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/opentelemetry-api/src/opentelemetry/_logs/__init__.py b/opentelemetry-api/src/opentelemetry/_logs/__init__.py
new file mode 100644
index 0000000000..aaf29e5fe6
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/_logs/__init__.py
@@ -0,0 +1,59 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+The OpenTelemetry logging API describes the classes used to generate logs and events.
+
+The :class:`.LoggerProvider` provides users access to the :class:`.Logger`.
+
+This module provides abstract (i.e. unimplemented) classes required for
+logging, and a concrete no-op implementation :class:`.NoOpLogger` that allows applications
+to use the API package alone without a supporting implementation.
+
+To get a logger, call `LoggerProvider.get_logger` with the name of the
+calling module and, optionally, the version of your package.
+
+The following code shows how to obtain a logger using the global :class:`.LoggerProvider`::
+
+ from opentelemetry._logs import get_logger
+
+ logger = get_logger("example-logger")
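+
+A concrete provider (for example the SDK's ``LoggerProvider``) can be installed
+as the global provider before the first ``get_logger`` call; ``sdk_provider``
+below is an illustrative, assumed instance::
+
+    from opentelemetry._logs import set_logger_provider
+
+    set_logger_provider(sdk_provider)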
+
+.. versionadded:: 1.15.0
+"""
+
+from opentelemetry._logs._internal import (
+ Logger,
+ LoggerProvider,
+ LogRecord,
+ NoOpLogger,
+ NoOpLoggerProvider,
+ get_logger,
+ get_logger_provider,
+ set_logger_provider,
+)
+from opentelemetry._logs.severity import SeverityNumber, std_to_otel
+
+__all__ = [
+ "Logger",
+ "LoggerProvider",
+ "LogRecord",
+ "NoOpLogger",
+ "NoOpLoggerProvider",
+ "get_logger",
+ "get_logger_provider",
+ "set_logger_provider",
+ "SeverityNumber",
+ "std_to_otel",
+]
diff --git a/opentelemetry-api/src/opentelemetry/_logs/_internal/__init__.py b/opentelemetry-api/src/opentelemetry/_logs/_internal/__init__.py
new file mode 100644
index 0000000000..e67f28439b
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/_logs/_internal/__init__.py
@@ -0,0 +1,231 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+The OpenTelemetry logging API describes the classes used to generate logs and events.
+
+The :class:`.LoggerProvider` provides users access to the :class:`.Logger` which in
+turn is used to create :class:`.Event` and :class:`.Log` objects.
+
+This module provides abstract (i.e. unimplemented) classes required for
+logging, and a concrete no-op implementation :class:`.NoOpLogger` that allows applications
+to use the API package alone without a supporting implementation.
+
+To get a logger, call `LoggerProvider.get_logger` with the name of the
+calling module and, optionally, the version of your package.
+
+The following code shows how to obtain a logger using the global :class:`.LoggerProvider`::
+
+ from opentelemetry._logs import get_logger
+
+ logger = get_logger("example-logger")
+
+.. versionadded:: 1.15.0
+"""
+
+from abc import ABC, abstractmethod
+from logging import getLogger
+from os import environ
+from time import time_ns
+from typing import Any, Optional, cast
+
+from opentelemetry._logs.severity import SeverityNumber
+from opentelemetry.environment_variables import _OTEL_PYTHON_LOGGER_PROVIDER
+from opentelemetry.trace.span import TraceFlags
+from opentelemetry.util._once import Once
+from opentelemetry.util._providers import _load_provider
+from opentelemetry.util.types import Attributes
+
+_logger = getLogger(__name__)
+
+
+class LogRecord(ABC):
+ """A LogRecord instance represents an event being logged.
+
+ LogRecord instances are created and emitted via `Logger`
+ every time something is logged. They contain all the information
+ pertinent to the event being logged.
+ """
+
+ def __init__(
+ self,
+ timestamp: Optional[int] = None,
+ observed_timestamp: Optional[int] = None,
+ trace_id: Optional[int] = None,
+ span_id: Optional[int] = None,
+ trace_flags: Optional["TraceFlags"] = None,
+ severity_text: Optional[str] = None,
+ severity_number: Optional[SeverityNumber] = None,
+ body: Optional[Any] = None,
+ attributes: Optional["Attributes"] = None,
+ ):
+ self.timestamp = timestamp
+ if observed_timestamp is None:
+ observed_timestamp = time_ns()
+ self.observed_timestamp = observed_timestamp
+ self.trace_id = trace_id
+ self.span_id = span_id
+ self.trace_flags = trace_flags
+ self.severity_text = severity_text
+ self.severity_number = severity_number
+ self.body = body # type: ignore
+ self.attributes = attributes
+
+
+class Logger(ABC):
+ """Handles emitting events and logs via `LogRecord`."""
+
+ def __init__(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> None:
+ super().__init__()
+ self._name = name
+ self._version = version
+ self._schema_url = schema_url
+
+ @abstractmethod
+ def emit(self, record: "LogRecord") -> None:
+ """Emits a :class:`LogRecord` representing a log to the processing pipeline."""
+
+
+class NoOpLogger(Logger):
+ """The default Logger used when no Logger implementation is available.
+
+ All operations are no-op.
+ """
+
+ def emit(self, record: "LogRecord") -> None:
+ pass
+
+
+class LoggerProvider(ABC):
+ """
+ LoggerProvider is the entry point of the API. It provides access to Logger instances.
+ """
+
+ @abstractmethod
+ def get_logger(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> Logger:
+ """Returns a `Logger` for use by the given instrumentation library.
+
+ For any two calls it is undefined whether the same or different
+ `Logger` instances are returned, even for different library names.
+
+ This function may return different `Logger` types (e.g. a no-op logger
+ vs. a functional logger).
+
+ Args:
+ name: The name of the instrumenting module.
+ ``__name__`` may not be used as this can result in
+ different logger names if the loggers are in different files.
+ It is better to use a fixed string that can be imported where
+ needed and used consistently as the name of the logger.
+
+ This should *not* be the name of the module that is
+ instrumented but the name of the module doing the instrumentation.
+ E.g., instead of ``"requests"``, use
+ ``"opentelemetry.instrumentation.requests"``.
+
+ version: Optional. The version string of the
+ instrumenting library. Usually this should be the same as
+ ``importlib.metadata.version(instrumenting_library_name)``.
+
+ schema_url: Optional. Specifies the Schema URL of the emitted telemetry.
+ """
+
+
+class NoOpLoggerProvider(LoggerProvider):
+ """The default LoggerProvider used when no LoggerProvider implementation is available."""
+
+ def get_logger(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> Logger:
+ """Returns a NoOpLogger."""
+ super().get_logger(name, version=version, schema_url=schema_url)
+ return NoOpLogger(name, version=version, schema_url=schema_url)
+
+
+# TODO: ProxyLoggerProvider
+
+
+_LOGGER_PROVIDER_SET_ONCE = Once()
+_LOGGER_PROVIDER = None
+
+
+def get_logger_provider() -> LoggerProvider:
+ """Gets the current global :class:`~.LoggerProvider` object."""
+ global _LOGGER_PROVIDER # pylint: disable=global-statement
+ if _LOGGER_PROVIDER is None:
+ if _OTEL_PYTHON_LOGGER_PROVIDER not in environ:
+ # TODO: return proxy
+ _LOGGER_PROVIDER = NoOpLoggerProvider()
+ return _LOGGER_PROVIDER
+
+ logger_provider: LoggerProvider = _load_provider( # type: ignore
+ _OTEL_PYTHON_LOGGER_PROVIDER, "logger_provider"
+ )
+ _set_logger_provider(logger_provider, log=False)
+
+ # _LOGGER_PROVIDER will have been set by one thread
+ return cast("LoggerProvider", _LOGGER_PROVIDER)
+
+
+def _set_logger_provider(logger_provider: LoggerProvider, log: bool) -> None:
+ def set_lp() -> None:
+ global _LOGGER_PROVIDER # pylint: disable=global-statement
+ _LOGGER_PROVIDER = logger_provider # type: ignore
+
+ did_set = _LOGGER_PROVIDER_SET_ONCE.do_once(set_lp)
+
+ if log and not did_set:
+ _logger.warning("Overriding of current LoggerProvider is not allowed")
+
+
+def set_logger_provider(logger_provider: LoggerProvider) -> None:
+ """Sets the current global :class:`~.LoggerProvider` object.
+
+    This can only be done once; a warning will be logged if any further
+    attempt is made.
+ """
+ _set_logger_provider(logger_provider, log=True)
+
+
+def get_logger(
+ instrumenting_module_name: str,
+ instrumenting_library_version: str = "",
+ logger_provider: Optional[LoggerProvider] = None,
+ schema_url: Optional[str] = None,
+) -> "Logger":
+ """Returns a `Logger` for use within a python process.
+
+ This function is a convenience wrapper for
+ opentelemetry.sdk._logs.LoggerProvider.get_logger.
+
+    If the logger_provider param is omitted, the currently configured one is used.
+ """
+ if logger_provider is None:
+ logger_provider = get_logger_provider()
+ return logger_provider.get_logger(
+ instrumenting_module_name, instrumenting_library_version, schema_url
+ )
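+
+
+# A minimal usage sketch (module name and version are illustrative; records
+# only go somewhere once a concrete LoggerProvider, e.g. the SDK's, has been
+# set as the global provider):
+#
+#     logger = get_logger("opentelemetry.instrumentation.example", "0.1.0")
+#     logger.emit(record)  # record: a concrete LogRecord instance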
diff --git a/opentelemetry-api/src/opentelemetry/_logs/severity/__init__.py b/opentelemetry-api/src/opentelemetry/_logs/severity/__init__.py
new file mode 100644
index 0000000000..1daaa19f44
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/_logs/severity/__init__.py
@@ -0,0 +1,115 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+
+
+class SeverityNumber(enum.Enum):
+ """Numerical value of severity.
+
+ Smaller numerical values correspond to less severe events
+ (such as debug events), larger numerical values correspond
+ to more severe events (such as errors and critical events).
+
+ See the `Log Data Model`_ spec for more info and how to map the
+ severity from source format to OTLP Model.
+
+ .. _Log Data Model: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-severitynumber
+ """
+
+ UNSPECIFIED = 0
+ TRACE = 1
+ TRACE2 = 2
+ TRACE3 = 3
+ TRACE4 = 4
+ DEBUG = 5
+ DEBUG2 = 6
+ DEBUG3 = 7
+ DEBUG4 = 8
+ INFO = 9
+ INFO2 = 10
+ INFO3 = 11
+ INFO4 = 12
+ WARN = 13
+ WARN2 = 14
+ WARN3 = 15
+ WARN4 = 16
+ ERROR = 17
+ ERROR2 = 18
+ ERROR3 = 19
+ ERROR4 = 20
+ FATAL = 21
+ FATAL2 = 22
+ FATAL3 = 23
+ FATAL4 = 24
+
+
+_STD_TO_OTEL = {
+ 10: SeverityNumber.DEBUG,
+ 11: SeverityNumber.DEBUG2,
+ 12: SeverityNumber.DEBUG3,
+ 13: SeverityNumber.DEBUG4,
+ 14: SeverityNumber.DEBUG4,
+ 15: SeverityNumber.DEBUG4,
+ 16: SeverityNumber.DEBUG4,
+ 17: SeverityNumber.DEBUG4,
+ 18: SeverityNumber.DEBUG4,
+ 19: SeverityNumber.DEBUG4,
+ 20: SeverityNumber.INFO,
+ 21: SeverityNumber.INFO2,
+ 22: SeverityNumber.INFO3,
+ 23: SeverityNumber.INFO4,
+ 24: SeverityNumber.INFO4,
+ 25: SeverityNumber.INFO4,
+ 26: SeverityNumber.INFO4,
+ 27: SeverityNumber.INFO4,
+ 28: SeverityNumber.INFO4,
+ 29: SeverityNumber.INFO4,
+ 30: SeverityNumber.WARN,
+ 31: SeverityNumber.WARN2,
+ 32: SeverityNumber.WARN3,
+ 33: SeverityNumber.WARN4,
+ 34: SeverityNumber.WARN4,
+ 35: SeverityNumber.WARN4,
+ 36: SeverityNumber.WARN4,
+ 37: SeverityNumber.WARN4,
+ 38: SeverityNumber.WARN4,
+ 39: SeverityNumber.WARN4,
+ 40: SeverityNumber.ERROR,
+ 41: SeverityNumber.ERROR2,
+ 42: SeverityNumber.ERROR3,
+ 43: SeverityNumber.ERROR4,
+ 44: SeverityNumber.ERROR4,
+ 45: SeverityNumber.ERROR4,
+ 46: SeverityNumber.ERROR4,
+ 47: SeverityNumber.ERROR4,
+ 48: SeverityNumber.ERROR4,
+ 49: SeverityNumber.ERROR4,
+ 50: SeverityNumber.FATAL,
+ 51: SeverityNumber.FATAL2,
+ 52: SeverityNumber.FATAL3,
+ 53: SeverityNumber.FATAL4,
+}
+
+
+def std_to_otel(levelno: int) -> SeverityNumber:
+ """
+ Map Python log levelno as defined in https://docs.python.org/3/library/logging.html#logging-levels
+ to OTel log severity number as defined here: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-severitynumber
+ """
+ if levelno < 10:
+ return SeverityNumber.UNSPECIFIED
+ if levelno > 53:
+ return SeverityNumber.FATAL4
+ return _STD_TO_OTEL[levelno]
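
The mapping above buckets the standard library's numeric levels into OTel severity numbers, clamping anything below 10 to UNSPECIFIED and anything above 53 to FATAL4. A quick sanity check (not part of the diff)::

    import logging

    from opentelemetry._logs.severity import SeverityNumber, std_to_otel

    assert std_to_otel(logging.WARNING) is SeverityNumber.WARN  # 30 -> WARN
    assert std_to_otel(5) is SeverityNumber.UNSPECIFIED         # below 10
    assert std_to_otel(99) is SeverityNumber.FATAL4             # above 53
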
diff --git a/opentelemetry-api/src/opentelemetry/attributes/__init__.py b/opentelemetry-api/src/opentelemetry/attributes/__init__.py
new file mode 100644
index 0000000000..724c931c82
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/attributes/__init__.py
@@ -0,0 +1,199 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+import logging
+import threading
+from collections import OrderedDict
+from collections.abc import MutableMapping
+from typing import Optional, Sequence, Union
+
+from opentelemetry.util import types
+
+# bytes are accepted as a user-supplied value for attributes but
+# decoded to strings internally.
+_VALID_ATTR_VALUE_TYPES = (bool, str, bytes, int, float)
+
+
+_logger = logging.getLogger(__name__)
+
+
+def _clean_attribute(
+ key: str, value: types.AttributeValue, max_len: Optional[int]
+) -> Optional[types.AttributeValue]:
+ """Checks if attribute value is valid and cleans it if required.
+
+ The function returns the cleaned value or None if the value is not valid.
+
+ An attribute value is valid if it is either:
+ - A primitive type: string, boolean, double precision floating
+ point (IEEE 754-1985) or integer.
+ - An array of primitive type values. The array MUST be homogeneous,
+ i.e. it MUST NOT contain values of different types.
+
+ An attribute needs cleansing if:
+ - Its length is greater than the maximum allowed length.
+ - It needs to be encoded/decoded, e.g., bytes to strings.
+ """
+
+ if not (key and isinstance(key, str)):
+ _logger.warning("invalid key `%s`. must be non-empty string.", key)
+ return None
+
+ if isinstance(value, _VALID_ATTR_VALUE_TYPES):
+ return _clean_attribute_value(value, max_len)
+
+ if isinstance(value, Sequence):
+ sequence_first_valid_type = None
+ cleaned_seq = []
+
+ for element in value:
+ element = _clean_attribute_value(element, max_len)
+ if element is None:
+ cleaned_seq.append(element)
+ continue
+
+ element_type = type(element)
+ # Reject attribute value if sequence contains a value with an incompatible type.
+ if element_type not in _VALID_ATTR_VALUE_TYPES:
+ _logger.warning(
+ "Invalid type %s in attribute value sequence. Expected one of "
+ "%s or None",
+ element_type.__name__,
+ [
+ valid_type.__name__
+ for valid_type in _VALID_ATTR_VALUE_TYPES
+ ],
+ )
+ return None
+
+ # The elements of the sequence must be of a homogeneous type. The
+ # first non-None element determines the type of the sequence.
+ if sequence_first_valid_type is None:
+ sequence_first_valid_type = element_type
+ # use equality instead of isinstance as isinstance(True, int) evaluates to True
+ elif element_type != sequence_first_valid_type:
+ _logger.warning(
+ "Attribute %r mixes types %s and %s in attribute value sequence",
+ key,
+ sequence_first_valid_type.__name__,
+ type(element).__name__,
+ )
+ return None
+
+ cleaned_seq.append(element)
+
+ # Freeze mutable sequences defensively
+ return tuple(cleaned_seq)
+
+ _logger.warning(
+ "Invalid type %s for attribute '%s' value. Expected one of %s or a "
+ "sequence of those types",
+ type(value).__name__,
+ key,
+ [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],
+ )
+ return None
+
+
+def _clean_attribute_value(
+ value: types.AttributeValue, limit: Optional[int]
+) -> Union[types.AttributeValue, None]:
+ if value is None:
+ return None
+
+ if isinstance(value, bytes):
+ try:
+ value = value.decode()
+ except UnicodeDecodeError:
+ _logger.warning("Byte attribute could not be decoded.")
+ return None
+
+ if limit is not None and isinstance(value, str):
+ value = value[:limit]
+ return value
+
+
+class BoundedAttributes(MutableMapping):
+ """An ordered dict with a fixed max capacity.
+
+ Oldest elements are dropped when the dict is full and a new element is
+ added.
+ """
+
+ def __init__(
+ self,
+ maxlen: Optional[int] = None,
+ attributes: types.Attributes = None,
+ immutable: bool = True,
+ max_value_len: Optional[int] = None,
+ ):
+ if maxlen is not None:
+ if not isinstance(maxlen, int) or maxlen < 0:
+ raise ValueError(
+ "maxlen must be valid int greater or equal to 0"
+ )
+ self.maxlen = maxlen
+ self.dropped = 0
+ self.max_value_len = max_value_len
+ self._dict = OrderedDict() # type: OrderedDict
+ self._lock = threading.Lock() # type: threading.Lock
+ if attributes:
+ for key, value in attributes.items():
+ self[key] = value
+ self._immutable = immutable
+
+ def __repr__(self):
+ return (
+ f"{type(self).__name__}({dict(self._dict)}, maxlen={self.maxlen})"
+ )
+
+ def __getitem__(self, key):
+ return self._dict[key]
+
+ def __setitem__(self, key, value):
+ if getattr(self, "_immutable", False):
+ raise TypeError
+ with self._lock:
+ if self.maxlen is not None and self.maxlen == 0:
+ self.dropped += 1
+ return
+
+ value = _clean_attribute(key, value, self.max_value_len)
+ if value is not None:
+ if key in self._dict:
+ del self._dict[key]
+ elif (
+ self.maxlen is not None and len(self._dict) == self.maxlen
+ ):
+ self._dict.popitem(last=False)
+ self.dropped += 1
+
+ self._dict[key] = value
+
+ def __delitem__(self, key):
+ if getattr(self, "_immutable", False):
+ raise TypeError
+ with self._lock:
+ del self._dict[key]
+
+ def __iter__(self):
+ with self._lock:
+ return iter(self._dict.copy())
+
+ def __len__(self):
+ return len(self._dict)
+
+ def copy(self):
+ return self._dict.copy()
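
BoundedAttributes evicts its oldest entry once `maxlen` is reached, counting evictions in `dropped`, and discards values that `_clean_attribute` rejects with a warning. A minimal sketch of that behavior (not part of the diff)::

    from opentelemetry.attributes import BoundedAttributes

    # immutable=False keeps the mapping writable after construction.
    attrs = BoundedAttributes(maxlen=2, immutable=False)
    attrs["a"] = 1
    attrs["b"] = "two"
    attrs["c"] = 3.0          # evicts "a", the oldest entry
    assert list(attrs) == ["b", "c"] and attrs.dropped == 1

    attrs["bad"] = object()   # invalid type: logged and discarded
    assert "bad" not in attrs
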
diff --git a/opentelemetry-api/src/opentelemetry/baggage/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/__init__.py
new file mode 100644
index 0000000000..9a740200a6
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/baggage/__init__.py
@@ -0,0 +1,132 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+from re import compile
+from types import MappingProxyType
+from typing import Mapping, Optional
+
+from opentelemetry.context import create_key, get_value, set_value
+from opentelemetry.context.context import Context
+from opentelemetry.util.re import (
+ _BAGGAGE_PROPERTY_FORMAT,
+ _KEY_FORMAT,
+ _VALUE_FORMAT,
+)
+
+_BAGGAGE_KEY = create_key("baggage")
+_logger = getLogger(__name__)
+
+_KEY_PATTERN = compile(_KEY_FORMAT)
+_VALUE_PATTERN = compile(_VALUE_FORMAT)
+_PROPERTY_PATTERN = compile(_BAGGAGE_PROPERTY_FORMAT)
+
+
+def get_all(
+ context: Optional[Context] = None,
+) -> Mapping[str, object]:
+ """Returns the name/value pairs in the Baggage
+
+ Args:
+ context: The Context to use. If not set, uses current Context
+
+ Returns:
+ The name/value pairs in the Baggage
+ """
+ baggage = get_value(_BAGGAGE_KEY, context=context)
+ if isinstance(baggage, dict):
+ return MappingProxyType(baggage)
+ return MappingProxyType({})
+
+
+def get_baggage(
+ name: str, context: Optional[Context] = None
+) -> Optional[object]:
+ """Provides access to the value for a name/value pair in the
+ Baggage
+
+ Args:
+ name: The name of the value to retrieve
+ context: The Context to use. If not set, uses current Context
+
+ Returns:
+ The value associated with the given name, or None if the given name is
+ not present.
+ """
+ return get_all(context=context).get(name)
+
+
+def set_baggage(
+ name: str, value: object, context: Optional[Context] = None
+) -> Context:
+ """Sets a value in the Baggage
+
+ Args:
+ name: The name of the value to set
+ value: The value to set
+ context: The Context to use. If not set, uses current Context
+
+ Returns:
+ A Context with the value updated
+ """
+ baggage = dict(get_all(context=context))
+ baggage[name] = value
+ return set_value(_BAGGAGE_KEY, baggage, context=context)
+
+
+def remove_baggage(name: str, context: Optional[Context] = None) -> Context:
+ """Removes a value from the Baggage
+
+ Args:
+ name: The name of the value to remove
+ context: The Context to use. If not set, uses current Context
+
+ Returns:
+ A Context with the name/value removed
+ """
+ baggage = dict(get_all(context=context))
+ baggage.pop(name, None)
+
+ return set_value(_BAGGAGE_KEY, baggage, context=context)
+
+
+def clear(context: Optional[Context] = None) -> Context:
+ """Removes all values from the Baggage
+
+ Args:
+ context: The Context to use. If not set, uses current Context
+
+ Returns:
+ A Context with all baggage entries removed
+ """
+ return set_value(_BAGGAGE_KEY, {}, context=context)
+
+
+def _is_valid_key(name: str) -> bool:
+ return _KEY_PATTERN.fullmatch(str(name)) is not None
+
+
+def _is_valid_value(value: object) -> bool:
+ parts = str(value).split(";")
+ is_valid_value = _VALUE_PATTERN.fullmatch(parts[0]) is not None
+ if len(parts) > 1: # one or more metadata properties
+ for prop in parts[1:]:
+ if _PROPERTY_PATTERN.fullmatch(prop) is None:
+ is_valid_value = False
+ break
+ return is_valid_value
+
+
+def _is_valid_pair(key: str, value: str) -> bool:
+ return _is_valid_key(key) and _is_valid_value(value)
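
The baggage API above is purely functional: every setter returns a new `Context` rather than mutating one in place. A round-trip sketch (not part of the diff)::

    from opentelemetry import baggage

    ctx = baggage.set_baggage("user.id", "42")
    ctx = baggage.set_baggage("tier", "gold", context=ctx)
    assert baggage.get_baggage("user.id", context=ctx) == "42"

    ctx = baggage.remove_baggage("tier", context=ctx)
    assert dict(baggage.get_all(context=ctx)) == {"user.id": "42"}
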
diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
new file mode 100644
index 0000000000..91898d53ae
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -0,0 +1,146 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from logging import getLogger
+from re import split
+from typing import Iterable, List, Mapping, Optional, Set
+from urllib.parse import quote_plus, unquote_plus
+
+from opentelemetry.baggage import _is_valid_pair, get_all, set_baggage
+from opentelemetry.context import get_current
+from opentelemetry.context.context import Context
+from opentelemetry.propagators import textmap
+from opentelemetry.util.re import _DELIMITER_PATTERN
+
+_logger = getLogger(__name__)
+
+
+class W3CBaggagePropagator(textmap.TextMapPropagator):
+ """Extracts and injects Baggage which is used to annotate telemetry."""
+
+ _MAX_HEADER_LENGTH = 8192
+ _MAX_PAIR_LENGTH = 4096
+ _MAX_PAIRS = 180
+ _BAGGAGE_HEADER_NAME = "baggage"
+
+ def extract(
+ self,
+ carrier: textmap.CarrierT,
+ context: Optional[Context] = None,
+ getter: textmap.Getter[textmap.CarrierT] = textmap.default_getter,
+ ) -> Context:
+ """Extract Baggage from the carrier.
+
+ See
+ `opentelemetry.propagators.textmap.TextMapPropagator.extract`
+ """
+
+ if context is None:
+ context = get_current()
+
+ header = _extract_first_element(
+ getter.get(carrier, self._BAGGAGE_HEADER_NAME)
+ )
+
+ if not header:
+ return context
+
+ if len(header) > self._MAX_HEADER_LENGTH:
+ _logger.warning(
+ "Baggage header `%s` exceeded the maximum number of bytes per baggage-string",
+ header,
+ )
+ return context
+
+ baggage_entries: List[str] = split(_DELIMITER_PATTERN, header)
+ total_baggage_entries = self._MAX_PAIRS
+
+ if len(baggage_entries) > self._MAX_PAIRS:
+ _logger.warning(
+ "Baggage header `%s` exceeded the maximum number of list-members",
+ header,
+ )
+
+ for entry in baggage_entries:
+ if len(entry) > self._MAX_PAIR_LENGTH:
+ _logger.warning(
+ "Baggage entry `%s` exceeded the maximum number of bytes per list-member",
+ entry,
+ )
+ continue
+ if not entry: # empty string
+ continue
+ try:
+ name, value = entry.split("=", 1)
+ except Exception: # pylint: disable=broad-except
+ _logger.warning(
+ "Baggage list-member `%s` doesn't match the format", entry
+ )
+ continue
+
+ if not _is_valid_pair(name, value):
+ _logger.warning("Invalid baggage entry: `%s`", entry)
+ continue
+
+ name = unquote_plus(name).strip()
+ value = unquote_plus(value).strip()
+
+ context = set_baggage(
+ name,
+ value,
+ context=context,
+ )
+ total_baggage_entries -= 1
+ if total_baggage_entries == 0:
+ break
+
+ return context
+
+ def inject(
+ self,
+ carrier: textmap.CarrierT,
+ context: Optional[Context] = None,
+ setter: textmap.Setter[textmap.CarrierT] = textmap.default_setter,
+ ) -> None:
+ """Injects Baggage into the carrier.
+
+ See
+ `opentelemetry.propagators.textmap.TextMapPropagator.inject`
+ """
+ baggage_entries = get_all(context=context)
+ if not baggage_entries:
+ return
+
+ baggage_string = _format_baggage(baggage_entries)
+ setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
+
+ @property
+ def fields(self) -> Set[str]:
+ """Returns a set with the fields set in `inject`."""
+ return {self._BAGGAGE_HEADER_NAME}
+
+
+def _format_baggage(baggage_entries: Mapping[str, object]) -> str:
+ return ",".join(
+ quote_plus(str(key)) + "=" + quote_plus(str(value))
+ for key, value in baggage_entries.items()
+ )
+
+
+def _extract_first_element(
+ items: Optional[Iterable[textmap.CarrierT]],
+) -> Optional[textmap.CarrierT]:
+ if items is None:
+ return None
+ return next(iter(items), None)
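
The propagator can be exercised end to end with a plain dict as the carrier, assuming the default getter/setter from `opentelemetry.propagators.textmap` (referenced above but added elsewhere in this diff) read and write dict keys as their names suggest. Sketch (not part of the diff)::

    from opentelemetry.baggage import set_baggage
    from opentelemetry.baggage.propagation import W3CBaggagePropagator

    propagator = W3CBaggagePropagator()
    carrier = {}

    ctx = set_baggage("user.id", "42")
    propagator.inject(carrier, context=ctx)
    assert carrier == {"baggage": "user.id=42"}

    restored = propagator.extract(carrier)  # new Context carrying the entry
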
diff --git a/opentelemetry-api/src/opentelemetry/context/__init__.py b/opentelemetry-api/src/opentelemetry/context/__init__.py
new file mode 100644
index 0000000000..d170089812
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/context/__init__.py
@@ -0,0 +1,184 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import threading
+import typing
+from functools import wraps
+from os import environ
+from uuid import uuid4
+
+# pylint: disable=wrong-import-position
+from opentelemetry.context.context import Context, _RuntimeContext # noqa
+from opentelemetry.environment_variables import OTEL_PYTHON_CONTEXT
+from opentelemetry.util._importlib_metadata import entry_points
+
+logger = logging.getLogger(__name__)
+_RUNTIME_CONTEXT = None # type: typing.Optional[_RuntimeContext]
+_RUNTIME_CONTEXT_LOCK = threading.Lock()
+
+_F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any])
+
+
+def _load_runtime_context(func: _F) -> _F:
+ """A decorator used to initialize the global RuntimeContext
+
+ Returns:
+ A wrapper of the decorated method.
+ """
+
+ @wraps(func) # type: ignore[misc]
+ def wrapper( # type: ignore[misc]
+ *args: typing.Tuple[typing.Any, typing.Any],
+ **kwargs: typing.Dict[typing.Any, typing.Any],
+ ) -> typing.Optional[typing.Any]:
+ global _RUNTIME_CONTEXT # pylint: disable=global-statement
+
+ with _RUNTIME_CONTEXT_LOCK:
+ if _RUNTIME_CONTEXT is None:
+ # FIXME use a better implementation of a configuration manager
+ # to avoid having to get configuration values straight from
+ # environment variables
+ default_context = "contextvars_context"
+
+ configured_context = environ.get(
+ OTEL_PYTHON_CONTEXT, default_context
+ ) # type: str
+ try:
+
+ _RUNTIME_CONTEXT = next( # type: ignore
+ iter( # type: ignore
+ entry_points( # type: ignore
+ group="opentelemetry_context",
+ name=configured_context,
+ )
+ )
+ ).load()()
+
+ except Exception: # pylint: disable=broad-except
+ logger.exception(
+ "Failed to load context: %s", configured_context
+ )
+ return func(*args, **kwargs) # type: ignore[misc]
+
+ return typing.cast(_F, wrapper) # type: ignore[misc]
+
+
+def create_key(keyname: str) -> str:
+ """To allow cross-cutting concern to control access to their local state,
+ the RuntimeContext API provides a function which takes a keyname as input,
+ and returns a unique key.
+ Args:
+ keyname: The key name is for debugging purposes and is not required to be unique.
+ Returns:
+ A unique string representing the newly created key.
+ """
+ return keyname + "-" + str(uuid4())
+
+
+def get_value(key: str, context: typing.Optional[Context] = None) -> "object":
+ """To access the local state of a concern, the RuntimeContext API
+ provides a function which takes a context and a key as input,
+ and returns a value.
+
+ Args:
+ key: The key of the value to retrieve.
+ context: The context from which to retrieve the value, if None, the current context is used.
+
+ Returns:
+ The value associated with the key.
+ """
+ return context.get(key) if context is not None else get_current().get(key)
+
+
+def set_value(
+ key: str, value: "object", context: typing.Optional[Context] = None
+) -> Context:
+ """To record the local state of a cross-cutting concern, the
+ RuntimeContext API provides a function which takes a context, a
+ key, and a value as input, and returns an updated context
+ which contains the new value.
+
+ Args:
+ key: The key of the entry to set.
+ value: The value of the entry to set.
+ context: The context to copy, if None, the current context is used.
+
+ Returns:
+ A new `Context` containing the value set.
+ """
+ if context is None:
+ context = get_current()
+ new_values = context.copy()
+ new_values[key] = value
+ return Context(new_values)
+
+
+@_load_runtime_context # type: ignore
+def get_current() -> Context:
+ """To access the context associated with program execution,
+ the Context API provides a function which takes no arguments
+ and returns a Context.
+
+ Returns:
+ The current `Context` object.
+ """
+ return _RUNTIME_CONTEXT.get_current() # type:ignore
+
+
+@_load_runtime_context # type: ignore
+def attach(context: Context) -> object:
+ """Associates a Context with the caller's current execution unit. Returns
+ a token that can be used to restore the previous Context.
+
+ Args:
+ context: The Context to set as current.
+
+ Returns:
+ A token that can be used with `detach` to reset the context.
+ """
+ return _RUNTIME_CONTEXT.attach(context) # type:ignore
+
+
+@_load_runtime_context # type: ignore
+def detach(token: object) -> None:
+ """Resets the Context associated with the caller's current execution unit
+ to the value it had before attaching a specified Context.
+
+ Args:
+ token: The Token that was returned by a previous call to attach a Context.
+ """
+ try:
+ _RUNTIME_CONTEXT.detach(token) # type: ignore
+ except Exception: # pylint: disable=broad-except
+ logger.exception("Failed to detach context")
+
+
+# FIXME This is a temporary location for the suppress instrumentation key.
+# Once the decision around how to suppress instrumentation is made in the
+# spec, this key should be moved accordingly.
+_SUPPRESS_INSTRUMENTATION_KEY = create_key("suppress_instrumentation")
+_SUPPRESS_HTTP_INSTRUMENTATION_KEY = create_key(
+ "suppress_http_instrumentation"
+)
+
+__all__ = [
+ "Context",
+ "attach",
+ "create_key",
+ "detach",
+ "get_current",
+ "get_value",
+ "set_value",
+]
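
The attach/detach pair above is the intended usage pattern: `attach` returns a token that `detach` later uses to restore the previous context. Sketch (not part of the diff)::

    from opentelemetry import context

    ctx = context.set_value("tenant", "acme")   # returns a new Context
    token = context.attach(ctx)                 # make it current
    try:
        assert context.get_value("tenant") == "acme"
    finally:
        context.detach(token)                   # restore the previous context

    assert context.get_value("tenant") is None
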
diff --git a/opentelemetry-api/src/opentelemetry/context/context.py b/opentelemetry-api/src/opentelemetry/context/context.py
new file mode 100644
index 0000000000..518f09f2b8
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/context/context.py
@@ -0,0 +1,53 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import typing
+from abc import ABC, abstractmethod
+
+
+class Context(typing.Dict[str, object]):
+ def __setitem__(self, key: str, value: object) -> None:
+ raise ValueError
+
+
+class _RuntimeContext(ABC):
+ """The RuntimeContext interface provides a wrapper for the different
+ mechanisms that are used to propagate context in Python.
+ Implementations can be made available via entry_points and
+ selected through environment variables.
+ """
+
+ @abstractmethod
+ def attach(self, context: Context) -> object:
+ """Sets the current `Context` object. Returns a
+ token that can be used to reset to the previous `Context`.
+
+ Args:
+ context: The Context to set.
+ """
+
+ @abstractmethod
+ def get_current(self) -> Context:
+ """Returns the current `Context` object."""
+
+ @abstractmethod
+ def detach(self, token: object) -> None:
+ """Resets Context to a previous value
+
+ Args:
+ token: A reference to a previous Context.
+ """
+
+
+__all__ = ["Context"]
diff --git a/opentelemetry-api/src/opentelemetry/context/contextvars_context.py b/opentelemetry-api/src/opentelemetry/context/contextvars_context.py
new file mode 100644
index 0000000000..5f606764fc
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/context/contextvars_context.py
@@ -0,0 +1,53 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from contextvars import ContextVar
+
+from opentelemetry.context.context import Context, _RuntimeContext
+
+
+class ContextVarsRuntimeContext(_RuntimeContext):
+ """An implementation of the RuntimeContext interface which wraps ContextVar under
+ the hood. This is the preferred implementation for usage with Python 3.5+
+ """
+
+ _CONTEXT_KEY = "current_context"
+
+ def __init__(self) -> None:
+ self._current_context = ContextVar(
+ self._CONTEXT_KEY, default=Context()
+ )
+
+ def attach(self, context: Context) -> object:
+ """Sets the current `Context` object. Returns a
+ token that can be used to reset to the previous `Context`.
+
+ Args:
+ context: The Context to set.
+ """
+ return self._current_context.set(context)
+
+ def get_current(self) -> Context:
+ """Returns the current `Context` object."""
+ return self._current_context.get()
+
+ def detach(self, token: object) -> None:
+ """Resets Context to a previous value
+
+ Args:
+ token: A reference to a previous Context.
+ """
+ self._current_context.reset(token) # type: ignore
+
+
+__all__ = ["ContextVarsRuntimeContext"]
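
The runtime context can also be driven directly, which is essentially what the module-level attach/get_current/detach functions do once this implementation is loaded via its entry point. Sketch (not part of the diff)::

    from opentelemetry.context.context import Context
    from opentelemetry.context.contextvars_context import ContextVarsRuntimeContext

    runtime = ContextVarsRuntimeContext()
    token = runtime.attach(Context({"k": "v"}))
    assert runtime.get_current()["k"] == "v"
    runtime.detach(token)
    assert "k" not in runtime.get_current()
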
diff --git a/opentelemetry-api/src/opentelemetry/environment_variables.py b/opentelemetry-api/src/opentelemetry/environment_variables.py
new file mode 100644
index 0000000000..c15b96be14
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/environment_variables.py
@@ -0,0 +1,83 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OTEL_LOGS_EXPORTER = "OTEL_LOGS_EXPORTER"
+"""
+.. envvar:: OTEL_LOGS_EXPORTER
+
+"""
+
+OTEL_METRICS_EXPORTER = "OTEL_METRICS_EXPORTER"
+"""
+.. envvar:: OTEL_METRICS_EXPORTER
+
+Specifies which exporter is used for metrics. See the "General SDK Configuration"
+section of the OpenTelemetry specification.
+
+**Default value:** ``"otlp"``
+
+**Example:**
+
+``export OTEL_METRICS_EXPORTER="prometheus"``
+
+Accepted values for ``OTEL_METRICS_EXPORTER`` are:
+
+- ``"otlp"``
+- ``"prometheus"``
+- ``"none"``: No automatically configured exporter for metrics.
+
+.. note::
+
+ Exporter packages may add entry points for group ``opentelemetry_metrics_exporter`` which
+ can then be used with this environment variable by name. The entry point should point to
+ either an `opentelemetry.sdk.metrics.export.MetricExporter` (push exporter) or
+ `opentelemetry.sdk.metrics.export.MetricReader` (pull exporter) subclass; it must be
+ constructable without any required arguments. This mechanism is considered experimental and
+ may change in subsequent releases.
+"""
+
+OTEL_PROPAGATORS = "OTEL_PROPAGATORS"
+"""
+.. envvar:: OTEL_PROPAGATORS
+"""
+
+OTEL_PYTHON_CONTEXT = "OTEL_PYTHON_CONTEXT"
+"""
+.. envvar:: OTEL_PYTHON_CONTEXT
+"""
+
+OTEL_PYTHON_ID_GENERATOR = "OTEL_PYTHON_ID_GENERATOR"
+"""
+.. envvar:: OTEL_PYTHON_ID_GENERATOR
+"""
+
+OTEL_TRACES_EXPORTER = "OTEL_TRACES_EXPORTER"
+"""
+.. envvar:: OTEL_TRACES_EXPORTER
+"""
+
+OTEL_PYTHON_TRACER_PROVIDER = "OTEL_PYTHON_TRACER_PROVIDER"
+"""
+.. envvar:: OTEL_PYTHON_TRACER_PROVIDER
+"""
+
+OTEL_PYTHON_METER_PROVIDER = "OTEL_PYTHON_METER_PROVIDER"
+"""
+.. envvar:: OTEL_PYTHON_METER_PROVIDER
+"""
+
+_OTEL_PYTHON_LOGGER_PROVIDER = "OTEL_PYTHON_LOGGER_PROVIDER"
+"""
+.. envvar:: OTEL_PYTHON_LOGGER_PROVIDER
+"""
diff --git a/opentelemetry-api/src/opentelemetry/metrics/__init__.py b/opentelemetry-api/src/opentelemetry/metrics/__init__.py
new file mode 100644
index 0000000000..0de88ccdaa
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/metrics/__init__.py
@@ -0,0 +1,126 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The OpenTelemetry metrics API describes the classes used to generate
+metrics.
+
+The :class:`.MeterProvider` provides users access to the :class:`.Meter` which in
+turn is used to create :class:`.Instrument` objects. The :class:`.Instrument` objects are
+used to record measurements.
+
+This module provides abstract (i.e. unimplemented) classes required for
+metrics, and a concrete no-op implementation :class:`.NoOpMeter` that allows applications
+to use the API package alone without a supporting implementation.
+
+To get a meter, you need to provide the package name from which you are
+calling the meter APIs to OpenTelemetry by calling `MeterProvider.get_meter`
+with the calling instrumentation name and the version of your package.
+
+The following code shows how to obtain a meter using the global :class:`.MeterProvider`::
+
+ from opentelemetry.metrics import get_meter
+
+ meter = get_meter("example-meter")
+ counter = meter.create_counter("example-counter")
+
+.. versionadded:: 1.10.0
+.. versionchanged:: 1.12.0rc
+"""
+
+from opentelemetry.metrics._internal import (
+ Meter,
+ MeterProvider,
+ NoOpMeter,
+ NoOpMeterProvider,
+ get_meter,
+ get_meter_provider,
+ set_meter_provider,
+)
+from opentelemetry.metrics._internal.instrument import (
+ Asynchronous,
+ CallbackOptions,
+ CallbackT,
+ Counter,
+ Histogram,
+ Instrument,
+ NoOpCounter,
+ NoOpHistogram,
+ NoOpObservableCounter,
+ NoOpObservableGauge,
+ NoOpObservableUpDownCounter,
+ NoOpUpDownCounter,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ Synchronous,
+ UpDownCounter,
+)
+from opentelemetry.metrics._internal.observation import Observation
+
+for obj in [
+ Counter,
+ Synchronous,
+ Asynchronous,
+ CallbackOptions,
+ get_meter_provider,
+ get_meter,
+ Histogram,
+ Meter,
+ MeterProvider,
+ Instrument,
+ NoOpCounter,
+ NoOpHistogram,
+ NoOpMeter,
+ NoOpMeterProvider,
+ NoOpObservableCounter,
+ NoOpObservableGauge,
+ NoOpObservableUpDownCounter,
+ NoOpUpDownCounter,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ Observation,
+ set_meter_provider,
+ UpDownCounter,
+]:
+ obj.__module__ = __name__
+
+__all__ = [
+ "CallbackOptions",
+ "MeterProvider",
+ "NoOpMeterProvider",
+ "Meter",
+ "Counter",
+ "NoOpCounter",
+ "UpDownCounter",
+ "NoOpUpDownCounter",
+ "Histogram",
+ "NoOpHistogram",
+ "ObservableCounter",
+ "NoOpObservableCounter",
+ "ObservableUpDownCounter",
+ "Instrument",
+ "Synchronous",
+ "Asynchronous",
+ "NoOpObservableGauge",
+ "ObservableGauge",
+ "NoOpObservableUpDownCounter",
+ "get_meter",
+ "get_meter_provider",
+ "set_meter_provider",
+ "Observation",
+ "CallbackT",
+ "NoOpMeter",
+]
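
Continuing the module docstring's example, measurements are recorded through the instrument itself; until a real `MeterProvider` is configured, the calls land on proxy or no-op instruments and are dropped. Sketch (not part of the diff)::

    from opentelemetry.metrics import get_meter

    meter = get_meter("example-meter")
    counter = meter.create_counter("example-counter", unit="1")

    # Forwarded to a real instrument only once a real MeterProvider is set.
    counter.add(1, {"route": "/home"})
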
diff --git a/opentelemetry-api/src/opentelemetry/metrics/_internal/__init__.py b/opentelemetry-api/src/opentelemetry/metrics/_internal/__init__.py
new file mode 100644
index 0000000000..dc1e76c8ae
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/metrics/_internal/__init__.py
@@ -0,0 +1,773 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-ancestors
+
+"""
+The OpenTelemetry metrics API describes the classes used to generate
+metrics.
+
+The :class:`.MeterProvider` provides users access to the :class:`.Meter` which in
+turn is used to create :class:`.Instrument` objects. The :class:`.Instrument` objects are
+used to record measurements.
+
+This module provides abstract (i.e. unimplemented) classes required for
+metrics, and a concrete no-op implementation :class:`.NoOpMeter` that allows applications
+to use the API package alone without a supporting implementation.
+
+To get a meter, you need to provide the package name from which you are
+calling the meter APIs to OpenTelemetry by calling `MeterProvider.get_meter`
+with the calling instrumentation name and the version of your package.
+
+The following code shows how to obtain a meter using the global :class:`.MeterProvider`::
+
+ from opentelemetry.metrics import get_meter
+
+ meter = get_meter("example-meter")
+ counter = meter.create_counter("example-counter")
+
+.. versionadded:: 1.10.0
+"""
+
+
+from abc import ABC, abstractmethod
+from logging import getLogger
+from os import environ
+from threading import Lock
+from typing import List, Optional, Sequence, Set, Tuple, Union, cast
+
+from opentelemetry.environment_variables import OTEL_PYTHON_METER_PROVIDER
+from opentelemetry.metrics._internal.instrument import (
+ CallbackT,
+ Counter,
+ Histogram,
+ NoOpCounter,
+ NoOpHistogram,
+ NoOpObservableCounter,
+ NoOpObservableGauge,
+ NoOpObservableUpDownCounter,
+ NoOpUpDownCounter,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+ _ProxyCounter,
+ _ProxyHistogram,
+ _ProxyObservableCounter,
+ _ProxyObservableGauge,
+ _ProxyObservableUpDownCounter,
+ _ProxyUpDownCounter,
+)
+from opentelemetry.util._once import Once
+from opentelemetry.util._providers import _load_provider
+
+_logger = getLogger(__name__)
+
+
+# pylint: disable=invalid-name
+_ProxyInstrumentT = Union[
+ _ProxyCounter,
+ _ProxyHistogram,
+ _ProxyObservableCounter,
+ _ProxyObservableGauge,
+ _ProxyObservableUpDownCounter,
+ _ProxyUpDownCounter,
+]
+
+
+class MeterProvider(ABC):
+ """
+ MeterProvider is the entry point of the API. It provides access to `Meter` instances.
+ """
+
+ @abstractmethod
+ def get_meter(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> "Meter":
+ """Returns a `Meter` for use by the given instrumentation library.
+
+ For any two calls it is undefined whether the same or different
+ `Meter` instances are returned, even for different library names.
+
+ This function may return different `Meter` types (e.g. a no-op meter
+ vs. a functional meter).
+
+ Args:
+ name: The name of the instrumenting module.
+ ``__name__`` may not be used as this can result in
+ different meter names if the meters are in different files.
+ It is better to use a fixed string that can be imported where
+ needed and used consistently as the name of the meter.
+
+ This should *not* be the name of the module that is
+ instrumented but the name of the module doing the instrumentation.
+ E.g., instead of ``"requests"``, use
+ ``"opentelemetry.instrumentation.requests"``.
+
+ version: Optional. The version string of the
+ instrumenting library. Usually this should be the same as
+ ``importlib.metadata.version(instrumenting_library_name)``.
+
+ schema_url: Optional. Specifies the Schema URL of the emitted telemetry.
+ """
+
+
+class NoOpMeterProvider(MeterProvider):
+ """The default MeterProvider used when no MeterProvider implementation is available."""
+
+ def get_meter(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> "Meter":
+ """Returns a NoOpMeter."""
+ super().get_meter(name, version=version, schema_url=schema_url)
+ return NoOpMeter(name, version=version, schema_url=schema_url)
+
+
+class _ProxyMeterProvider(MeterProvider):
+ def __init__(self) -> None:
+ self._lock = Lock()
+ self._meters: List[_ProxyMeter] = []
+ self._real_meter_provider: Optional[MeterProvider] = None
+
+ def get_meter(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> "Meter":
+ with self._lock:
+ if self._real_meter_provider is not None:
+ return self._real_meter_provider.get_meter(
+ name, version, schema_url
+ )
+
+ meter = _ProxyMeter(name, version=version, schema_url=schema_url)
+ self._meters.append(meter)
+ return meter
+
+ def on_set_meter_provider(self, meter_provider: MeterProvider) -> None:
+ with self._lock:
+ self._real_meter_provider = meter_provider
+ for meter in self._meters:
+ meter.on_set_meter_provider(meter_provider)
+
+
+class Meter(ABC):
+ """Handles instrument creation.
+
+ This class provides methods for creating instruments which are then
+ used to produce measurements.
+ """
+
+ def __init__(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> None:
+ super().__init__()
+ self._name = name
+ self._version = version
+ self._schema_url = schema_url
+ self._instrument_ids: Set[str] = set()
+ self._instrument_ids_lock = Lock()
+
+ @property
+ def name(self) -> str:
+ """
+ The name of the instrumenting module.
+ """
+ return self._name
+
+ @property
+ def version(self) -> Optional[str]:
+ """
+ The version string of the instrumenting library.
+ """
+ return self._version
+
+ @property
+ def schema_url(self) -> Optional[str]:
+ """
+ Specifies the Schema URL of the emitted telemetry
+ """
+ return self._schema_url
+
+ def _is_instrument_registered(
+ self, name: str, type_: type, unit: str, description: str
+ ) -> Tuple[bool, str]:
+ """
+ Check if an instrument with the same name, type, unit and description
+ has been registered already.
+
+ Returns a tuple. The first value is `True` if the instrument has been
+ registered already, `False` otherwise. The second value is the
+ instrument id.
+ """
+
+ instrument_id = ",".join(
+ [name.strip().lower(), type_.__name__, unit, description]
+ )
+
+ result = False
+
+ with self._instrument_ids_lock:
+ if instrument_id in self._instrument_ids:
+ result = True
+ else:
+ self._instrument_ids.add(instrument_id)
+
+ return (result, instrument_id)
+
+ @abstractmethod
+ def create_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Counter:
+ """Creates a `Counter` instrument
+
+ Args:
+ name: The name of the instrument to be created
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+ @abstractmethod
+ def create_up_down_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> UpDownCounter:
+ """Creates an `UpDownCounter` instrument
+
+ Args:
+ name: The name of the instrument to be created
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+ @abstractmethod
+ def create_observable_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableCounter:
+ """Creates an `ObservableCounter` instrument
+
+ An observable counter observes a monotonically increasing count by calling provided
+ callbacks, each of which accepts a :class:`~opentelemetry.metrics.CallbackOptions` and
+ returns an iterable of :class:`~opentelemetry.metrics.Observation` objects.
+
+ For example, an observable counter could be used to report system CPU
+ time periodically. Here is a basic implementation::
+
+ def cpu_time_callback(options: CallbackOptions) -> Iterable[Observation]:
+ observations = []
+ with open("/proc/stat") as procstat:
+ procstat.readline() # skip the first line
+ for line in procstat:
+ if not line.startswith("cpu"): break
+ cpu, *states = line.split()
+ observations.append(Observation(int(states[0]) // 100, {"cpu": cpu, "state": "user"}))
+ observations.append(Observation(int(states[1]) // 100, {"cpu": cpu, "state": "nice"}))
+ observations.append(Observation(int(states[2]) // 100, {"cpu": cpu, "state": "system"}))
+ # ... other states
+ return observations
+
+ meter.create_observable_counter(
+ "system.cpu.time",
+ callbacks=[cpu_time_callback],
+ unit="s",
+ description="CPU time"
+ )
+
+ To reduce memory usage, you can use generator callbacks instead of
+ building the full list::
+
+ def cpu_time_callback(options: CallbackOptions) -> Iterable[Observation]:
+ with open("/proc/stat") as procstat:
+ procstat.readline() # skip the first line
+ for line in procstat:
+ if not line.startswith("cpu"): break
+ cpu, *states = line.split()
+ yield Observation(int(states[0]) // 100, {"cpu": cpu, "state": "user"})
+ yield Observation(int(states[1]) // 100, {"cpu": cpu, "state": "nice"})
+ # ... other states
+
+ Alternatively, you can pass a sequence of generators directly instead of a sequence of
+ callbacks; each generator should yield iterables of :class:`~opentelemetry.metrics.Observation`::
+
+ def cpu_time_callback(states_to_include: set[str]) -> Iterable[Iterable[Observation]]:
+ # accept options sent in from OpenTelemetry
+ options = yield
+ while True:
+ observations = []
+ with open("/proc/stat") as procstat:
+ procstat.readline() # skip the first line
+ for line in procstat:
+ if not line.startswith("cpu"): break
+ cpu, *states = line.split()
+ if "user" in states_to_include:
+ observations.append(Observation(int(states[0]) // 100, {"cpu": cpu, "state": "user"}))
+ if "nice" in states_to_include:
+ observations.append(Observation(int(states[1]) // 100, {"cpu": cpu, "state": "nice"}))
+ # ... other states
+ # yield the observations and receive the options for next iteration
+ options = yield observations
+
+ meter.create_observable_counter(
+ "system.cpu.time",
+ callbacks=[cpu_time_callback({"user", "system"})],
+ unit="s",
+ description="CPU time"
+ )
+
+ The :class:`~opentelemetry.metrics.CallbackOptions` contain a timeout which the
+ callback should respect. For example if the callback does asynchronous work, like
+ making HTTP requests, it should respect the timeout::
+
+ def scrape_http_callback(options: CallbackOptions) -> Iterable[Observation]:
+ r = requests.get('http://scrapethis.com', timeout=options.timeout_millis / 10**3)
+ for value in r.json():
+ yield Observation(value)
+
+ Args:
+ name: The name of the instrument to be created
+ callbacks: A sequence of callbacks that return an iterable of
+ :class:`~opentelemetry.metrics.Observation`. Alternatively, can be a sequence of generators that each
+ yields iterables of :class:`~opentelemetry.metrics.Observation`.
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+ @abstractmethod
+ def create_histogram(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Histogram:
+ """Creates a :class:`~opentelemetry.metrics.Histogram` instrument
+
+ Args:
+ name: The name of the instrument to be created
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+ @abstractmethod
+ def create_observable_gauge(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableGauge:
+ """Creates an `ObservableGauge` instrument
+
+ Args:
+ name: The name of the instrument to be created
+ callbacks: A sequence of callbacks that return an iterable of
+ :class:`~opentelemetry.metrics.Observation`. Alternatively, can be a sequence of
+ generators that each yields iterables of :class:`~opentelemetry.metrics.Observation`.
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+ @abstractmethod
+ def create_observable_up_down_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableUpDownCounter:
+ """Creates an `ObservableUpDownCounter` instrument
+
+ Args:
+ name: The name of the instrument to be created
+ callbacks: A sequence of callbacks that return an iterable of
+ :class:`~opentelemetry.metrics.Observation`. Alternatively, can be a sequence of
+ generators that each yields iterables of :class:`~opentelemetry.metrics.Observation`.
+ unit: The unit for observations this instrument reports. For
+ example, ``By`` for bytes. UCUM units are recommended.
+ description: A description for this instrument and what it measures.
+ """
+
+
+class _ProxyMeter(Meter):
+ def __init__(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> None:
+ super().__init__(name, version=version, schema_url=schema_url)
+ self._lock = Lock()
+ self._instruments: List[_ProxyInstrumentT] = []
+ self._real_meter: Optional[Meter] = None
+
+ def on_set_meter_provider(self, meter_provider: MeterProvider) -> None:
+ """Called when a real meter provider is set on the creating _ProxyMeterProvider
+
+ Creates a real backing meter for this instance and notifies all created
+ instruments so they can create real backing instruments.
+ """
+ real_meter = meter_provider.get_meter(
+ self._name, self._version, self._schema_url
+ )
+
+ with self._lock:
+ self._real_meter = real_meter
+ # notify all proxy instruments of the new meter so they can create
+ # real instruments to back themselves
+ for instrument in self._instruments:
+ instrument.on_meter_set(real_meter)
+
+ def create_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Counter:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_counter(name, unit, description)
+ proxy = _ProxyCounter(name, unit, description)
+ self._instruments.append(proxy)
+ return proxy
+
+ def create_up_down_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> UpDownCounter:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_up_down_counter(
+ name, unit, description
+ )
+ proxy = _ProxyUpDownCounter(name, unit, description)
+ self._instruments.append(proxy)
+ return proxy
+
+ def create_observable_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableCounter:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_observable_counter(
+ name, callbacks, unit, description
+ )
+ proxy = _ProxyObservableCounter(
+ name, callbacks, unit=unit, description=description
+ )
+ self._instruments.append(proxy)
+ return proxy
+
+ def create_histogram(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Histogram:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_histogram(
+ name, unit, description
+ )
+ proxy = _ProxyHistogram(name, unit, description)
+ self._instruments.append(proxy)
+ return proxy
+
+ def create_observable_gauge(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableGauge:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_observable_gauge(
+ name, callbacks, unit, description
+ )
+ proxy = _ProxyObservableGauge(
+ name, callbacks, unit=unit, description=description
+ )
+ self._instruments.append(proxy)
+ return proxy
+
+ def create_observable_up_down_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableUpDownCounter:
+ with self._lock:
+ if self._real_meter:
+ return self._real_meter.create_observable_up_down_counter(
+ name,
+ callbacks,
+ unit,
+ description,
+ )
+ proxy = _ProxyObservableUpDownCounter(
+ name, callbacks, unit=unit, description=description
+ )
+ self._instruments.append(proxy)
+ return proxy
+
+
+class NoOpMeter(Meter):
+ """The default Meter used when no Meter implementation is available.
+
+ All operations are no-op.
+ """
+
+ def create_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Counter:
+ """Returns a no-op Counter."""
+ super().create_counter(name, unit=unit, description=description)
+ if self._is_instrument_registered(
+ name, NoOpCounter, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ Counter.__name__,
+ unit,
+ description,
+ )
+ return NoOpCounter(name, unit=unit, description=description)
+
+ def create_up_down_counter(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> UpDownCounter:
+ """Returns a no-op UpDownCounter."""
+ super().create_up_down_counter(
+ name, unit=unit, description=description
+ )
+ if self._is_instrument_registered(
+ name, NoOpUpDownCounter, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ UpDownCounter.__name__,
+ unit,
+ description,
+ )
+ return NoOpUpDownCounter(name, unit=unit, description=description)
+
+ def create_observable_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableCounter:
+ """Returns a no-op ObservableCounter."""
+ super().create_observable_counter(
+ name, callbacks, unit=unit, description=description
+ )
+ if self._is_instrument_registered(
+ name, NoOpObservableCounter, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ ObservableCounter.__name__,
+ unit,
+ description,
+ )
+ return NoOpObservableCounter(
+ name,
+ callbacks,
+ unit=unit,
+ description=description,
+ )
+
+ def create_histogram(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> Histogram:
+ """Returns a no-op Histogram."""
+ super().create_histogram(name, unit=unit, description=description)
+ if self._is_instrument_registered(
+ name, NoOpHistogram, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ Histogram.__name__,
+ unit,
+ description,
+ )
+ return NoOpHistogram(name, unit=unit, description=description)
+
+ def create_observable_gauge(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableGauge:
+ """Returns a no-op ObservableGauge."""
+ super().create_observable_gauge(
+ name, callbacks, unit=unit, description=description
+ )
+ if self._is_instrument_registered(
+ name, NoOpObservableGauge, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ ObservableGauge.__name__,
+ unit,
+ description,
+ )
+ return NoOpObservableGauge(
+ name,
+ callbacks,
+ unit=unit,
+ description=description,
+ )
+
+ def create_observable_up_down_counter(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> ObservableUpDownCounter:
+ """Returns a no-op ObservableUpDownCounter."""
+ super().create_observable_up_down_counter(
+ name, callbacks, unit=unit, description=description
+ )
+ if self._is_instrument_registered(
+ name, NoOpObservableUpDownCounter, unit, description
+ )[0]:
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ ObservableUpDownCounter.__name__,
+ unit,
+ description,
+ )
+ return NoOpObservableUpDownCounter(
+ name,
+ callbacks,
+ unit=unit,
+ description=description,
+ )
+
+
+_METER_PROVIDER_SET_ONCE = Once()
+_METER_PROVIDER: Optional[MeterProvider] = None
+_PROXY_METER_PROVIDER = _ProxyMeterProvider()
+
+
+def get_meter(
+ name: str,
+ version: str = "",
+ meter_provider: Optional[MeterProvider] = None,
+ schema_url: Optional[str] = None,
+) -> "Meter":
+ """Returns a `Meter` for use by the given instrumentation library.
+
+ This function is a convenience wrapper for
+ `opentelemetry.metrics.MeterProvider.get_meter`.
+
+ If meter_provider is omitted the current configured one is used.
+ """
+ if meter_provider is None:
+ meter_provider = get_meter_provider()
+ return meter_provider.get_meter(name, version, schema_url)
+
+
+def _set_meter_provider(meter_provider: MeterProvider, log: bool) -> None:
+ def set_mp() -> None:
+ global _METER_PROVIDER # pylint: disable=global-statement
+ _METER_PROVIDER = meter_provider
+
+ # gives all proxies real instruments off the newly set meter provider
+ _PROXY_METER_PROVIDER.on_set_meter_provider(meter_provider)
+
+ did_set = _METER_PROVIDER_SET_ONCE.do_once(set_mp)
+
+ if log and not did_set:
+ _logger.warning("Overriding of current MeterProvider is not allowed")
+
+
+def set_meter_provider(meter_provider: MeterProvider) -> None:
+ """Sets the current global :class:`~.MeterProvider` object.
+
+ This can only be done once; a warning will be logged if any further
+ attempt is made.
+ """
+ _set_meter_provider(meter_provider, log=True)
+
+
+def get_meter_provider() -> MeterProvider:
+ """Gets the current global :class:`~.MeterProvider` object."""
+
+ if _METER_PROVIDER is None:
+ if OTEL_PYTHON_METER_PROVIDER not in environ:
+ return _PROXY_METER_PROVIDER
+
+ meter_provider: MeterProvider = _load_provider( # type: ignore
+ OTEL_PYTHON_METER_PROVIDER, "meter_provider"
+ )
+ _set_meter_provider(meter_provider, log=False)
+
+ # _METER_PROVIDER will have been set by one thread
+ return cast("MeterProvider", _METER_PROVIDER)
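
The proxy machinery above means instruments created before configuration start working retroactively once `set_meter_provider` is called, and that later calls cannot replace the provider. Sketch (not part of the diff; the no-op provider stands in purely for illustration)::

    from opentelemetry.metrics import (
        NoOpMeterProvider,
        get_meter_provider,
        set_meter_provider,
    )

    # Before configuration: a proxy meter/counter that drops measurements.
    counter = get_meter_provider().get_meter("app").create_counter("requests")
    counter.add(1)  # dropped, no real provider yet

    set_meter_provider(NoOpMeterProvider())  # first set wins
    counter.add(1)  # now forwarded to the backing instrument

    set_meter_provider(NoOpMeterProvider())  # warns; the provider is unchanged
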
diff --git a/opentelemetry-api/src/opentelemetry/metrics/_internal/instrument.py b/opentelemetry-api/src/opentelemetry/metrics/_internal/instrument.py
new file mode 100644
index 0000000000..b02a15005c
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/metrics/_internal/instrument.py
@@ -0,0 +1,398 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-ancestors
+
+
+from abc import ABC, abstractmethod
+from dataclasses import dataclass
+from logging import getLogger
+from re import compile as re_compile
+from typing import (
+ Callable,
+ Dict,
+ Generator,
+ Generic,
+ Iterable,
+ Optional,
+ Sequence,
+ TypeVar,
+ Union,
+)
+
+# pylint: disable=unused-import; needed for typing and sphinx
+from opentelemetry import metrics
+from opentelemetry.metrics._internal.observation import Observation
+from opentelemetry.util.types import Attributes
+
+_logger = getLogger(__name__)
+
+_name_regex = re_compile(r"[a-zA-Z][-_./a-zA-Z0-9]{0,254}")
+_unit_regex = re_compile(r"[\x00-\x7F]{0,63}")
+
+
+@dataclass(frozen=True)
+class CallbackOptions:
+ """Options for the callback
+
+ Args:
+ timeout_millis: Timeout for the callback's execution. If the callback does asynchronous
+ work (e.g. HTTP requests), it should respect this timeout.
+ """
+
+ timeout_millis: float = 10_000
+
+
+InstrumentT = TypeVar("InstrumentT", bound="Instrument")
+# pylint: disable=invalid-name
+CallbackT = Union[
+ Callable[[CallbackOptions], Iterable[Observation]],
+ Generator[Iterable[Observation], CallbackOptions, None],
+]
+
+
+class Instrument(ABC):
+ """Abstract class that serves as base for all instruments."""
+
+ @abstractmethod
+ def __init__(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ pass
+
+ @staticmethod
+ def _check_name_unit_description(
+ name: str, unit: str, description: str
+ ) -> Dict[str, Optional[str]]:
+ """
+ Checks the given instrument name, unit and description for
+ compliance with the spec.
+
+ Returns a dict with keys "name", "unit" and "description", the
+ corresponding values will be the checked strings or `None` if the value
+ is invalid. If valid, the checked strings should be used instead of the
+ original values.
+ """
+
+ result: Dict[str, Optional[str]] = {}
+
+ if _name_regex.fullmatch(name) is not None:
+ result["name"] = name
+ else:
+ result["name"] = None
+
+ if unit is None:
+ unit = ""
+ if _unit_regex.fullmatch(unit) is not None:
+ result["unit"] = unit
+ else:
+ result["unit"] = None
+
+ if description is None:
+ result["description"] = ""
+ else:
+ result["description"] = description
+
+ return result
+
+
+class _ProxyInstrument(ABC, Generic[InstrumentT]):
+ def __init__(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ self._name = name
+ self._unit = unit
+ self._description = description
+ self._real_instrument: Optional[InstrumentT] = None
+
+ def on_meter_set(self, meter: "metrics.Meter") -> None:
+ """Called when a real meter is set on the creating _ProxyMeter"""
+
+ # We don't need any locking on proxy instruments because it's OK if some
+ # measurements get dropped while a real backing instrument is being
+ # created.
+ self._real_instrument = self._create_real_instrument(meter)
+
+ @abstractmethod
+ def _create_real_instrument(self, meter: "metrics.Meter") -> InstrumentT:
+ """Create an instance of the real instrument. Implement this."""
+
+
+class _ProxyAsynchronousInstrument(_ProxyInstrument[InstrumentT]):
+ def __init__(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, unit, description)
+ self._callbacks = callbacks
+
+
+class Synchronous(Instrument):
+ """Base class for all synchronous instruments"""
+
+
+class Asynchronous(Instrument):
+ """Base class for all asynchronous instruments"""
+
+ @abstractmethod
+ def __init__(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, unit=unit, description=description)
+
+
+class Counter(Synchronous):
+ """A Counter is a synchronous `Instrument` which supports non-negative increments."""
+
+ @abstractmethod
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ pass
+
+
+class NoOpCounter(Counter):
+ """No-op implementation of `Counter`."""
+
+ def __init__(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, unit=unit, description=description)
+
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ return super().add(amount, attributes=attributes)
+
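+
+def _example_counter_usage() -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # create and use a Counter via a hypothetical meter from the metrics API.
+    from opentelemetry import metrics
+
+    meter = metrics.get_meter("example.meter")
+    counter = meter.create_counter("requests", unit="1")
+    counter.add(1, {"route": "/home"})
+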
+
+class _ProxyCounter(_ProxyInstrument[Counter], Counter):
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ if self._real_instrument:
+ self._real_instrument.add(amount, attributes)
+
+ def _create_real_instrument(self, meter: "metrics.Meter") -> Counter:
+ return meter.create_counter(self._name, self._unit, self._description)
+
+
+class UpDownCounter(Synchronous):
+ """An UpDownCounter is a synchronous `Instrument` which supports increments and decrements."""
+
+ @abstractmethod
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ pass
+
+
+class NoOpUpDownCounter(UpDownCounter):
+ """No-op implementation of `UpDownCounter`."""
+
+ def __init__(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, unit=unit, description=description)
+
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ return super().add(amount, attributes=attributes)
+
+
+class _ProxyUpDownCounter(_ProxyInstrument[UpDownCounter], UpDownCounter):
+ def add(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ if self._real_instrument:
+ self._real_instrument.add(amount, attributes)
+
+ def _create_real_instrument(self, meter: "metrics.Meter") -> UpDownCounter:
+ return meter.create_up_down_counter(
+ self._name, self._unit, self._description
+ )
+
+
+class ObservableCounter(Asynchronous):
+ """An ObservableCounter is an asynchronous `Instrument` which reports monotonically
+ increasing value(s) when the instrument is being observed.
+ """
+
+
+class NoOpObservableCounter(ObservableCounter):
+ """No-op implementation of `ObservableCounter`."""
+
+ def __init__(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, callbacks, unit=unit, description=description)
+
+
+class _ProxyObservableCounter(
+ _ProxyAsynchronousInstrument[ObservableCounter], ObservableCounter
+):
+ def _create_real_instrument(
+ self, meter: "metrics.Meter"
+ ) -> ObservableCounter:
+ return meter.create_observable_counter(
+ self._name, self._callbacks, self._unit, self._description
+ )
+
+
+class ObservableUpDownCounter(Asynchronous):
+ """An ObservableUpDownCounter is an asynchronous `Instrument` which reports additive value(s) (e.g.
+ the process heap size - it makes sense to report the heap size from multiple processes and sum them
+ up, so we get the total heap usage) when the instrument is being observed.
+ """
+
+
+class NoOpObservableUpDownCounter(ObservableUpDownCounter):
+ """No-op implementation of `ObservableUpDownCounter`."""
+
+ def __init__(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, callbacks, unit=unit, description=description)
+
+
+class _ProxyObservableUpDownCounter(
+ _ProxyAsynchronousInstrument[ObservableUpDownCounter],
+ ObservableUpDownCounter,
+):
+ def _create_real_instrument(
+ self, meter: "metrics.Meter"
+ ) -> ObservableUpDownCounter:
+ return meter.create_observable_up_down_counter(
+ self._name, self._callbacks, self._unit, self._description
+ )
+
+
+class Histogram(Synchronous):
+ """Histogram is a synchronous `Instrument` which can be used to report arbitrary values
+ that are likely to be statistically meaningful. It is intended for statistics such as
+    histograms, summaries, and percentiles.
+ """
+
+ @abstractmethod
+ def record(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ pass
+
+
+class NoOpHistogram(Histogram):
+ """No-op implementation of `Histogram`."""
+
+ def __init__(
+ self,
+ name: str,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, unit=unit, description=description)
+
+ def record(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ return super().record(amount, attributes=attributes)
+
+
+class _ProxyHistogram(_ProxyInstrument[Histogram], Histogram):
+ def record(
+ self,
+ amount: Union[int, float],
+ attributes: Optional[Attributes] = None,
+ ) -> None:
+ if self._real_instrument:
+ self._real_instrument.record(amount, attributes)
+
+ def _create_real_instrument(self, meter: "metrics.Meter") -> Histogram:
+ return meter.create_histogram(
+ self._name, self._unit, self._description
+ )
+
+
+class ObservableGauge(Asynchronous):
+ """Asynchronous Gauge is an asynchronous `Instrument` which reports non-additive value(s) (e.g.
+ the room temperature - it makes no sense to report the temperature value from multiple rooms
+ and sum them up) when the instrument is being observed.
+ """
+
+
+class NoOpObservableGauge(ObservableGauge):
+ """No-op implementation of `ObservableGauge`."""
+
+ def __init__(
+ self,
+ name: str,
+ callbacks: Optional[Sequence[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ) -> None:
+ super().__init__(name, callbacks, unit=unit, description=description)
+
+
+class _ProxyObservableGauge(
+ _ProxyAsynchronousInstrument[ObservableGauge],
+ ObservableGauge,
+):
+ def _create_real_instrument(
+ self, meter: "metrics.Meter"
+ ) -> ObservableGauge:
+ return meter.create_observable_gauge(
+ self._name, self._callbacks, self._unit, self._description
+ )
diff --git a/opentelemetry-api/src/opentelemetry/metrics/_internal/observation.py b/opentelemetry-api/src/opentelemetry/metrics/_internal/observation.py
new file mode 100644
index 0000000000..7aa24e3342
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/metrics/_internal/observation.py
@@ -0,0 +1,52 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import Union
+
+from opentelemetry.util.types import Attributes
+
+
+class Observation:
+ """A measurement observed in an asynchronous instrument
+
+ Return/yield instances of this class from asynchronous instrument callbacks.
+
+ Args:
+ value: The float or int measured value
+ attributes: The measurement's attributes
+ """
+
+ def __init__(
+ self, value: Union[int, float], attributes: Attributes = None
+ ) -> None:
+ self._value = value
+ self._attributes = attributes
+
+ @property
+ def value(self) -> Union[float, int]:
+ return self._value
+
+ @property
+ def attributes(self) -> Attributes:
+ return self._attributes
+
+ def __eq__(self, other: object) -> bool:
+ return (
+ isinstance(other, Observation)
+ and self.value == other.value
+ and self.attributes == other.attributes
+ )
+
+ def __repr__(self) -> str:
+ return f"Observation(value={self.value}, attributes={self.attributes})"
diff --git a/opentelemetry-api/src/opentelemetry/propagate/__init__.py b/opentelemetry-api/src/opentelemetry/propagate/__init__.py
new file mode 100644
index 0000000000..90f9e61744
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/propagate/__init__.py
@@ -0,0 +1,167 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+API for propagation of context.
+
+The propagators for the
+``opentelemetry.propagators.composite.CompositePropagator`` can be defined
+via configuration in the ``OTEL_PROPAGATORS`` environment variable. This
+variable should be set to a comma-separated string of names of values for the
+``opentelemetry_propagator`` entry point. For example, setting
+``OTEL_PROPAGATORS`` to ``tracecontext,baggage`` (which is the default value)
+would instantiate
+``opentelemetry.propagators.composite.CompositePropagator`` with two
+propagators, one of type
+``opentelemetry.trace.propagation.tracecontext.TraceContextTextMapPropagator``
+and the other of type ``opentelemetry.baggage.propagation.W3CBaggagePropagator``.
+Notice that these propagator classes are defined as
+``opentelemetry_propagator`` entry points in the ``pyproject.toml`` file of
+``opentelemetry``.
+
+Example::
+
+ import flask
+ import requests
+ from opentelemetry import propagate
+
+
+ PROPAGATOR = propagate.get_global_textmap()
+
+
+ def get_header_from_flask_request(request, key):
+ return request.headers.get_all(key)
+
+ def set_header_into_requests_request(request: requests.Request,
+ key: str, value: str):
+ request.headers[key] = value
+
+ def example_route():
+ context = PROPAGATOR.extract(
+ get_header_from_flask_request,
+ flask.request
+ )
+ request_to_downstream = requests.Request(
+ "GET", "http://httpbin.org/get"
+ )
+ PROPAGATOR.inject(
+ set_header_into_requests_request,
+ request_to_downstream,
+ context=context
+ )
+ session = requests.Session()
+ session.send(request_to_downstream.prepare())
+
+
+.. _Propagation API Specification:
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/context/api-propagators.md
+"""
+
+from logging import getLogger
+from os import environ
+from typing import Optional
+
+from opentelemetry.context.context import Context
+from opentelemetry.environment_variables import OTEL_PROPAGATORS
+from opentelemetry.propagators import composite, textmap
+from opentelemetry.util._importlib_metadata import entry_points
+
+logger = getLogger(__name__)
+
+
+def extract(
+ carrier: textmap.CarrierT,
+ context: Optional[Context] = None,
+ getter: textmap.Getter[textmap.CarrierT] = textmap.default_getter,
+) -> Context:
+ """Uses the configured propagator to extract a Context from the carrier.
+
+ Args:
+        getter: an object which contains a get function that can retrieve zero
+            or more values from the carrier and a keys function that can get
+            all the keys from the carrier.
+        carrier: an object which contains values that are
+            used to construct a Context. This object
+            must be paired with an appropriate getter
+            which understands how to extract a value from it.
+ context: an optional Context to use. Defaults to root
+ context if not set.
+ """
+ return get_global_textmap().extract(carrier, context, getter=getter)
+
+
+def inject(
+ carrier: textmap.CarrierT,
+ context: Optional[Context] = None,
+ setter: textmap.Setter[textmap.CarrierT] = textmap.default_setter,
+) -> None:
+ """Uses the configured propagator to inject a Context into the carrier.
+
+ Args:
+ carrier: An object that contains a representation of HTTP
+ headers. Should be paired with setter, which
+ should know how to set header values on the carrier.
+ context: An optional Context to use. Defaults to current
+ context if not set.
+ setter: An optional `Setter` object that can set values
+ on the carrier.
+ """
+ get_global_textmap().inject(carrier, context=context, setter=setter)
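+
+
+def _example_roundtrip() -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # with the default getter and setter a plain dict serves as the carrier,
+    # so inject and extract round-trip header values.
+    headers = {}
+    inject(headers)  # writes e.g. "traceparent" when a span is active
+    _ = extract(headers)  # rebuilds a Context from the injected entries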
+
+
+propagators = []
+
+# Single use variable here to hack black and make lint pass
+environ_propagators = environ.get(
+ OTEL_PROPAGATORS,
+ "tracecontext,baggage",
+)
+
+
+for propagator in environ_propagators.split(","):
+ propagator = propagator.strip()
+
+ try:
+
+ propagators.append( # type: ignore
+ next( # type: ignore
+ iter( # type: ignore
+ entry_points( # type: ignore
+ group="opentelemetry_propagator",
+ name=propagator,
+ )
+ )
+ ).load()()
+ )
+ except StopIteration:
+ raise ValueError(
+ f"Propagator {propagator} not found. It is either misspelled or not installed."
+ )
+ except Exception: # pylint: disable=broad-except
+ logger.exception("Failed to load propagator: %s", propagator)
+ raise
+
+
+_HTTP_TEXT_FORMAT = composite.CompositePropagator(propagators) # type: ignore
+
+
+def get_global_textmap() -> textmap.TextMapPropagator:
+ return _HTTP_TEXT_FORMAT
+
+
+def set_global_textmap(
+ http_text_format: textmap.TextMapPropagator,
+) -> None:
+ global _HTTP_TEXT_FORMAT # pylint:disable=global-statement
+ _HTTP_TEXT_FORMAT = http_text_format # type: ignore
diff --git a/opentelemetry-api/src/opentelemetry/propagators/composite.py b/opentelemetry-api/src/opentelemetry/propagators/composite.py
new file mode 100644
index 0000000000..77330d9410
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/propagators/composite.py
@@ -0,0 +1,91 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import logging
+import typing
+
+from deprecated import deprecated
+
+from opentelemetry.context.context import Context
+from opentelemetry.propagators import textmap
+
+logger = logging.getLogger(__name__)
+
+
+class CompositePropagator(textmap.TextMapPropagator):
+ """CompositePropagator provides a mechanism for combining multiple
+ propagators into a single one.
+
+ Args:
+ propagators: the list of propagators to use
+ """
+
+ def __init__(
+ self, propagators: typing.Sequence[textmap.TextMapPropagator]
+ ) -> None:
+ self._propagators = propagators
+
+ def extract(
+ self,
+ carrier: textmap.CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: textmap.Getter[textmap.CarrierT] = textmap.default_getter,
+ ) -> Context:
+ """Run each of the configured propagators with the given context and carrier.
+        Propagators are run in the order they are configured. If multiple
+        propagators write the same context key, the propagator later in the
+        list overrides the earlier ones.
+
+ See `opentelemetry.propagators.textmap.TextMapPropagator.extract`
+ """
+ for propagator in self._propagators:
+ context = propagator.extract(carrier, context, getter=getter)
+ return context # type: ignore
+
+ def inject(
+ self,
+ carrier: textmap.CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: textmap.Setter[textmap.CarrierT] = textmap.default_setter,
+ ) -> None:
+ """Run each of the configured propagators with the given context and carrier.
+        Propagators are run in the order they are configured. If multiple
+        propagators write the same carrier key, the propagator later in the
+        list overrides the earlier ones.
+
+ See `opentelemetry.propagators.textmap.TextMapPropagator.inject`
+ """
+ for propagator in self._propagators:
+ propagator.inject(carrier, context, setter=setter)
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ """Returns a set with the fields set in `inject`.
+
+ See
+ `opentelemetry.propagators.textmap.TextMapPropagator.fields`
+ """
+ composite_fields = set()
+
+ for propagator in self._propagators:
+ for field in propagator.fields:
+ composite_fields.add(field)
+
+ return composite_fields
+
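+def _example_composite() -> CompositePropagator:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # compose the W3C trace-context and baggage propagators into one.
+    from opentelemetry.baggage.propagation import W3CBaggagePropagator
+    from opentelemetry.trace.propagation.tracecontext import (
+        TraceContextTextMapPropagator,
+    )
+
+    return CompositePropagator(
+        [TraceContextTextMapPropagator(), W3CBaggagePropagator()]
+    )
+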
+
+@deprecated(version="1.2.0", reason="You should use CompositePropagator") # type: ignore
+class CompositeHTTPPropagator(CompositePropagator):
+ """CompositeHTTPPropagator provides a mechanism for combining multiple
+ propagators into a single one.
+ """
diff --git a/opentelemetry-api/src/opentelemetry/propagators/textmap.py b/opentelemetry-api/src/opentelemetry/propagators/textmap.py
new file mode 100644
index 0000000000..42f1124f36
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/propagators/textmap.py
@@ -0,0 +1,197 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import abc
+import typing
+
+from opentelemetry.context.context import Context
+
+CarrierT = typing.TypeVar("CarrierT")
+# pylint: disable=invalid-name
+CarrierValT = typing.Union[typing.List[str], str]
+
+
+class Getter(abc.ABC, typing.Generic[CarrierT]):
+ """This class implements a Getter that enables extracting propagated
+ fields from a carrier.
+ """
+
+ @abc.abstractmethod
+ def get(
+ self, carrier: CarrierT, key: str
+ ) -> typing.Optional[typing.List[str]]:
+ """Function that can retrieve zero
+ or more values from the carrier. In the case that
+ the value does not exist, returns None.
+
+ Args:
+ carrier: An object which contains values that are used to
+ construct a Context.
+ key: key of a field in carrier.
+        Returns: a list of values for the propagation key, or None if the key
+            doesn't exist.
+ """
+
+ @abc.abstractmethod
+ def keys(self, carrier: CarrierT) -> typing.List[str]:
+ """Function that can retrieve all the keys in a carrier object.
+
+ Args:
+ carrier: An object which contains values that are
+ used to construct a Context.
+ Returns:
+ list of keys from the carrier.
+ """
+
+
+class Setter(abc.ABC, typing.Generic[CarrierT]):
+ """This class implements a Setter that enables injecting propagated
+ fields into a carrier.
+ """
+
+ @abc.abstractmethod
+ def set(self, carrier: CarrierT, key: str, value: str) -> None:
+ """Function that can set a value into a carrier""
+
+ Args:
+ carrier: An object which contains values that are used to
+ construct a Context.
+ key: key of a field in carrier.
+ value: value for a field in carrier.
+ """
+
+
+class DefaultGetter(Getter[typing.Mapping[str, CarrierValT]]):
+ def get(
+ self, carrier: typing.Mapping[str, CarrierValT], key: str
+ ) -> typing.Optional[typing.List[str]]:
+ """Getter implementation to retrieve a value from a dictionary.
+
+ Args:
+ carrier: dictionary in which to get value
+ key: the key used to get the value
+ Returns:
+            A list with the value(s) if the key exists, else None.
+ """
+ val = carrier.get(key, None)
+ if val is None:
+ return None
+ if isinstance(val, typing.Iterable) and not isinstance(val, str):
+ return list(val)
+ return [val]
+
+ def keys(
+ self, carrier: typing.Mapping[str, CarrierValT]
+ ) -> typing.List[str]:
+ """Keys implementation that returns all keys from a dictionary."""
+ return list(carrier.keys())
+
+
+default_getter: Getter[CarrierT] = DefaultGetter() # type: ignore
+
+
+class DefaultSetter(Setter[typing.MutableMapping[str, CarrierValT]]):
+ def set(
+ self,
+ carrier: typing.MutableMapping[str, CarrierValT],
+ key: str,
+ value: CarrierValT,
+ ) -> None:
+ """Setter implementation to set a value into a dictionary.
+
+ Args:
+ carrier: dictionary in which to set value
+ key: the key used to set the value
+ value: the value to set
+ """
+ carrier[key] = value
+
+
+default_setter: Setter[CarrierT] = DefaultSetter() # type: ignore
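+
+
+class _ExampleListSetter(Setter[typing.MutableMapping[str, typing.List[str]]]):
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # a Setter for carriers that keep a list of values per header key,
+    # appending instead of overwriting.
+    def set(
+        self,
+        carrier: typing.MutableMapping[str, typing.List[str]],
+        key: str,
+        value: str,
+    ) -> None:
+        carrier.setdefault(key, []).append(value)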
+
+
+class TextMapPropagator(abc.ABC):
+ """This class provides an interface that enables extracting and injecting
+ context into headers of HTTP requests. HTTP frameworks and clients
+ can integrate with TextMapPropagator by providing the object containing the
+ headers, and a getter and setter function for the extraction and
+ injection of values, respectively.
+
+ """
+
+ @abc.abstractmethod
+ def extract(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: Getter[CarrierT] = default_getter,
+ ) -> Context:
+ """Create a Context from values in the carrier.
+
+ The extract function should retrieve values from the carrier
+ object using getter, and use values to populate a
+ Context value and return it.
+
+ Args:
+            getter: an object that can retrieve zero or more values from
+                the carrier for a given key, returning None when the key
+                does not exist.
+            carrier: an object which contains values that are
+                used to construct a Context. This object
+                must be paired with an appropriate getter
+                which understands how to extract a value from it.
+ context: an optional Context to use. Defaults to root
+ context if not set.
+ Returns:
+ A Context with configuration found in the carrier.
+
+ """
+
+ @abc.abstractmethod
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter[CarrierT] = default_setter,
+ ) -> None:
+ """Inject values from a Context into a carrier.
+
+ inject enables the propagation of values into HTTP clients or
+ other objects which perform an HTTP request. Implementations
+ should use the `Setter` 's set method to set values on the
+ carrier.
+
+ Args:
+            carrier: An object that provides a place to define HTTP headers.
+ Should be paired with setter, which should
+ know how to set header values on the carrier.
+ context: an optional Context to use. Defaults to current
+ context if not set.
+ setter: An optional `Setter` object that can set values
+ on the carrier.
+
+ """
+
+ @property
+ @abc.abstractmethod
+ def fields(self) -> typing.Set[str]:
+ """
+ Gets the fields set in the carrier by the `inject` method.
+
+ If the carrier is reused, its fields that correspond with the ones
+ present in this attribute should be deleted before calling `inject`.
+
+ Returns:
+ A set with the fields set in `inject`.
+ """
diff --git a/opentelemetry-api/src/opentelemetry/py.typed b/opentelemetry-api/src/opentelemetry/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-api/src/opentelemetry/trace/__init__.py b/opentelemetry-api/src/opentelemetry/trace/__init__.py
new file mode 100644
index 0000000000..bf9e0b89a4
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/trace/__init__.py
@@ -0,0 +1,629 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The OpenTelemetry tracing API describes the classes used to generate
+distributed traces.
+
+The :class:`.Tracer` class controls access to the execution context, and
+manages span creation. Each operation in a trace is represented by a
+:class:`.Span`, which records the start, end time, and metadata associated with
+the operation.
+
+This module provides abstract (i.e. unimplemented) classes required for
+tracing, and a concrete no-op :class:`.NonRecordingSpan` that allows applications
+to use the API package alone without a supporting implementation.
+
+To get a tracer, you need to provide the package name from which you are
+calling the tracer APIs to OpenTelemetry by calling `TracerProvider.get_tracer`
+with the calling module name and the version of your package.
+
+The tracer supports creating spans that are "attached" or "detached" from the
+context. New spans are "attached" to the context in that they are
+created as children of the currently active span, and the newly-created span
+can optionally become the new active span::
+
+ from opentelemetry import trace
+
+ tracer = trace.get_tracer(__name__)
+
+ # Create a new root span, set it as the current span in context
+ with tracer.start_as_current_span("parent"):
+ # Attach a new child and update the current span
+ with tracer.start_as_current_span("child"):
+            do_work()
+ # Close child span, set parent as current
+ # Close parent span, set default span as current
+
+When creating a span that's "detached" from the context the active span doesn't
+change, and the caller is responsible for managing the span's lifetime::
+
+ # Explicit parent span assignment is done via the Context
+ from opentelemetry.trace import set_span_in_context
+
+ context = set_span_in_context(parent)
+ child = tracer.start_span("child", context=context)
+
+ try:
+ do_work(span=child)
+ finally:
+ child.end()
+
+Applications should generally use a single global TracerProvider, and use
+either implicit or explicit context propagation consistently throughout.
+
+.. versionadded:: 0.1.0
+.. versionchanged:: 0.3.0
+ `TracerProvider` was introduced and the global ``tracer`` getter was
+ replaced by ``tracer_provider``.
+.. versionchanged:: 0.5.0
+ ``tracer_provider`` was replaced by `get_tracer_provider`,
+ ``set_preferred_tracer_provider_implementation`` was replaced by
+ `set_tracer_provider`.
+"""
+
+
+import os
+import typing
+from abc import ABC, abstractmethod
+from contextlib import contextmanager
+from enum import Enum
+from logging import getLogger
+from typing import Iterator, Optional, Sequence, cast
+
+from deprecated import deprecated
+
+from opentelemetry import context as context_api
+from opentelemetry.attributes import BoundedAttributes # type: ignore
+from opentelemetry.context.context import Context
+from opentelemetry.environment_variables import OTEL_PYTHON_TRACER_PROVIDER
+from opentelemetry.trace.propagation import (
+ _SPAN_KEY,
+ get_current_span,
+ set_span_in_context,
+)
+from opentelemetry.trace.span import (
+ DEFAULT_TRACE_OPTIONS,
+ DEFAULT_TRACE_STATE,
+ INVALID_SPAN,
+ INVALID_SPAN_CONTEXT,
+ INVALID_SPAN_ID,
+ INVALID_TRACE_ID,
+ NonRecordingSpan,
+ Span,
+ SpanContext,
+ TraceFlags,
+ TraceState,
+ format_span_id,
+ format_trace_id,
+)
+from opentelemetry.trace.status import Status, StatusCode
+from opentelemetry.util import types
+from opentelemetry.util._once import Once
+from opentelemetry.util._providers import _load_provider
+
+logger = getLogger(__name__)
+
+
+class _LinkBase(ABC):
+ def __init__(self, context: "SpanContext") -> None:
+ self._context = context
+
+ @property
+ def context(self) -> "SpanContext":
+ return self._context
+
+ @property
+ @abstractmethod
+ def attributes(self) -> types.Attributes:
+ pass
+
+
+class Link(_LinkBase):
+ """A link to a `Span`. The attributes of a Link are immutable.
+
+ Args:
+ context: `SpanContext` of the `Span` to link to.
+ attributes: Link's attributes.
+ """
+
+ def __init__(
+ self,
+ context: "SpanContext",
+ attributes: types.Attributes = None,
+ ) -> None:
+ super().__init__(context)
+ self._attributes = BoundedAttributes(
+ attributes=attributes
+ ) # type: types.Attributes
+
+ @property
+ def attributes(self) -> types.Attributes:
+ return self._attributes
+
+
+_Links = Optional[Sequence[Link]]
+
+
+class SpanKind(Enum):
+ """Specifies additional details on how this span relates to its parent span.
+
+ Note that this enumeration is experimental and likely to change. See
+ https://github.com/open-telemetry/opentelemetry-specification/pull/226.
+ """
+
+ #: Default value. Indicates that the span is used internally in the
+    #: application.
+ INTERNAL = 0
+
+ #: Indicates that the span describes an operation that handles a remote
+    #: request.
+ SERVER = 1
+
+ #: Indicates that the span describes a request to some remote service.
+ CLIENT = 2
+
+ #: Indicates that the span describes a producer sending a message to a
+ #: broker. Unlike client and server, there is usually no direct critical
+ #: path latency relationship between producer and consumer spans.
+ PRODUCER = 3
+
+ #: Indicates that the span describes a consumer receiving a message from a
+ #: broker. Unlike client and server, there is usually no direct critical
+ #: path latency relationship between producer and consumer spans.
+ CONSUMER = 4
+
+
+class TracerProvider(ABC):
+ @abstractmethod
+ def get_tracer(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> "Tracer":
+ """Returns a `Tracer` for use by the given instrumentation library.
+
+ For any two calls it is undefined whether the same or different
+ `Tracer` instances are returned, even for different library names.
+
+ This function may return different `Tracer` types (e.g. a no-op tracer
+ vs. a functional tracer).
+
+ Args:
+ instrumenting_module_name: The uniquely identifiable name for instrumentation
+ scope, such as instrumentation library, package, module or class name.
+ ``__name__`` may not be used as this can result in
+ different tracer names if the tracers are in different files.
+ It is better to use a fixed string that can be imported where
+ needed and used consistently as the name of the tracer.
+
+ This should *not* be the name of the module that is
+ instrumented but the name of the module doing the instrumentation.
+ E.g., instead of ``"requests"``, use
+ ``"opentelemetry.instrumentation.requests"``.
+
+ instrumenting_library_version: Optional. The version string of the
+ instrumenting library. Usually this should be the same as
+ ``importlib.metadata.version(instrumenting_library_name)``.
+
+ schema_url: Optional. Specifies the Schema URL of the emitted telemetry.
+ """
+
+
+class NoOpTracerProvider(TracerProvider):
+ """The default TracerProvider, used when no implementation is available.
+
+ All operations are no-op.
+ """
+
+ def get_tracer(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> "Tracer":
+ # pylint:disable=no-self-use,unused-argument
+ return NoOpTracer()
+
+
+@deprecated(version="1.9.0", reason="You should use NoOpTracerProvider") # type: ignore
+class _DefaultTracerProvider(NoOpTracerProvider):
+ """The default TracerProvider, used when no implementation is available.
+
+ All operations are no-op.
+ """
+
+
+class ProxyTracerProvider(TracerProvider):
+ def get_tracer(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> "Tracer":
+ if _TRACER_PROVIDER:
+ return _TRACER_PROVIDER.get_tracer(
+ instrumenting_module_name,
+ instrumenting_library_version,
+ schema_url,
+ )
+ return ProxyTracer(
+ instrumenting_module_name,
+ instrumenting_library_version,
+ schema_url,
+ )
+
+
+class Tracer(ABC):
+ """Handles span creation and in-process context propagation.
+
+ This class provides methods for manipulating the context, creating spans,
+ and controlling spans' lifecycles.
+ """
+
+ @abstractmethod
+ def start_span(
+ self,
+ name: str,
+ context: Optional[Context] = None,
+ kind: SpanKind = SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: _Links = None,
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ ) -> "Span":
+ """Starts a span.
+
+ Create a new span. Start the span without setting it as the current
+ span in the context. To start the span and use the context in a single
+ method, see :meth:`start_as_current_span`.
+
+ By default the current span in the context will be used as parent, but an
+ explicit context can also be specified, by passing in a `Context` containing
+ a current `Span`. If there is no current span in the global `Context` or in
+ the specified context, the created span will be a root span.
+
+ The span can be used as a context manager. On exiting the context manager,
+ the span's end() method will be called.
+
+ Example::
+
+ # trace.get_current_span() will be used as the implicit parent.
+ # If none is found, the created span will be a root instance.
+ with tracer.start_span("one") as child:
+ child.add_event("child's event")
+
+ Args:
+ name: The name of the span to be created.
+ context: An optional Context containing the span's parent. Defaults to the
+ global context.
+            kind: The span's kind (relationship to parent). Note that this is
+                meaningful even if there is no parent.
+            attributes: The span's attributes.
+            links: Links to associate with the span.
+            start_time: An explicit start time for the span.
+ record_exception: Whether to record any exceptions raised within the
+ context as error event on the span.
+ set_status_on_exception: Only relevant if the returned span is used
+ in a with/context manager. Defines whether the span status will
+ be automatically set to ERROR when an uncaught exception is
+ raised in the span with block. The span status won't be set by
+ this mechanism if it was previously set manually.
+
+ Returns:
+ The newly-created span.
+ """
+
+ @contextmanager
+ @abstractmethod
+ def start_as_current_span(
+ self,
+ name: str,
+ context: Optional[Context] = None,
+ kind: SpanKind = SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: _Links = None,
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ end_on_exit: bool = True,
+ ) -> Iterator["Span"]:
+ """Context manager for creating a new span and set it
+ as the current span in this tracer's context.
+
+ Exiting the context manager will call the span's end method,
+ as well as return the current span to its previous value by
+ returning to the previous context.
+
+ Example::
+
+ with tracer.start_as_current_span("one") as parent:
+ parent.add_event("parent's event")
+ with tracer.start_as_current_span("two") as child:
+ child.add_event("child's event")
+ trace.get_current_span() # returns child
+ trace.get_current_span() # returns parent
+ trace.get_current_span() # returns previously active span
+
+ This is a convenience method for creating spans attached to the
+ tracer's context. Applications that need more control over the span
+ lifetime should use :meth:`start_span` instead. For example::
+
+ with tracer.start_as_current_span(name) as span:
+ do_work()
+
+ is equivalent to::
+
+ span = tracer.start_span(name)
+ with opentelemetry.trace.use_span(span, end_on_exit=True):
+ do_work()
+
+ This can also be used as a decorator::
+
+ @tracer.start_as_current_span("name")
+ def function():
+ ...
+
+ function()
+
+ Args:
+ name: The name of the span to be created.
+ context: An optional Context containing the span's parent. Defaults to the
+ global context.
+            kind: The span's kind (relationship to parent). Note that this is
+                meaningful even if there is no parent.
+            attributes: The span's attributes.
+            links: Links to associate with the span.
+            start_time: An explicit start time for the span.
+ record_exception: Whether to record any exceptions raised within the
+ context as error event on the span.
+ set_status_on_exception: Only relevant if the returned span is used
+ in a with/context manager. Defines whether the span status will
+ be automatically set to ERROR when an uncaught exception is
+ raised in the span with block. The span status won't be set by
+ this mechanism if it was previously set manually.
+ end_on_exit: Whether to end the span automatically when leaving the
+ context manager.
+
+ Yields:
+ The newly-created span.
+ """
+
+
+class ProxyTracer(Tracer):
+ # pylint: disable=W0222,signature-differs
+ def __init__(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ):
+ self._instrumenting_module_name = instrumenting_module_name
+ self._instrumenting_library_version = instrumenting_library_version
+ self._schema_url = schema_url
+ self._real_tracer: Optional[Tracer] = None
+ self._noop_tracer = NoOpTracer()
+
+ @property
+ def _tracer(self) -> Tracer:
+ if self._real_tracer:
+ return self._real_tracer
+
+ if _TRACER_PROVIDER:
+ self._real_tracer = _TRACER_PROVIDER.get_tracer(
+ self._instrumenting_module_name,
+ self._instrumenting_library_version,
+ self._schema_url,
+ )
+ return self._real_tracer
+ return self._noop_tracer
+
+ def start_span(self, *args, **kwargs) -> Span: # type: ignore
+ return self._tracer.start_span(*args, **kwargs) # type: ignore
+
+ @contextmanager # type: ignore
+ def start_as_current_span(self, *args, **kwargs) -> Iterator[Span]: # type: ignore
+ with self._tracer.start_as_current_span(*args, **kwargs) as span: # type: ignore
+ yield span
+
+
+class NoOpTracer(Tracer):
+ """The default Tracer, used when no Tracer implementation is available.
+
+ All operations are no-op.
+ """
+
+ def start_span(
+ self,
+ name: str,
+ context: Optional[Context] = None,
+ kind: SpanKind = SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: _Links = None,
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ ) -> "Span":
+ # pylint: disable=unused-argument,no-self-use
+ return INVALID_SPAN
+
+ @contextmanager
+ def start_as_current_span(
+ self,
+ name: str,
+ context: Optional[Context] = None,
+ kind: SpanKind = SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: _Links = None,
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ end_on_exit: bool = True,
+ ) -> Iterator["Span"]:
+ # pylint: disable=unused-argument,no-self-use
+ yield INVALID_SPAN
+
+
+@deprecated(version="1.9.0", reason="You should use NoOpTracer") # type: ignore
+class _DefaultTracer(NoOpTracer):
+ """The default Tracer, used when no Tracer implementation is available.
+
+ All operations are no-op.
+ """
+
+
+_TRACER_PROVIDER_SET_ONCE = Once()
+_TRACER_PROVIDER: Optional[TracerProvider] = None
+_PROXY_TRACER_PROVIDER = ProxyTracerProvider()
+
+
+def get_tracer(
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ tracer_provider: Optional[TracerProvider] = None,
+ schema_url: typing.Optional[str] = None,
+) -> "Tracer":
+ """Returns a `Tracer` for use by the given instrumentation library.
+
+ This function is a convenience wrapper for
+ opentelemetry.trace.TracerProvider.get_tracer.
+
+    If tracer_provider is omitted, the currently configured one is used.
+ """
+ if tracer_provider is None:
+ tracer_provider = get_tracer_provider()
+ return tracer_provider.get_tracer(
+ instrumenting_module_name, instrumenting_library_version, schema_url
+ )
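+
+
+def _example_get_tracer() -> Tracer:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # typical instrumentation-library usage of the convenience wrapper above.
+    return get_tracer("opentelemetry.instrumentation.example", "0.1.0")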
+
+
+def _set_tracer_provider(tracer_provider: TracerProvider, log: bool) -> None:
+ def set_tp() -> None:
+ global _TRACER_PROVIDER # pylint: disable=global-statement
+ _TRACER_PROVIDER = tracer_provider
+
+ did_set = _TRACER_PROVIDER_SET_ONCE.do_once(set_tp)
+
+ if log and not did_set:
+ logger.warning("Overriding of current TracerProvider is not allowed")
+
+
+def set_tracer_provider(tracer_provider: TracerProvider) -> None:
+ """Sets the current global :class:`~.TracerProvider` object.
+
+    This can only be done once; a warning will be logged if any further
+    attempt is made.
+ """
+ _set_tracer_provider(tracer_provider, log=True)
+
+
+def get_tracer_provider() -> TracerProvider:
+ """Gets the current global :class:`~.TracerProvider` object."""
+ if _TRACER_PROVIDER is None:
+ # if a global tracer provider has not been set either via code or env
+ # vars, return a proxy tracer provider
+ if OTEL_PYTHON_TRACER_PROVIDER not in os.environ:
+ return _PROXY_TRACER_PROVIDER
+
+ tracer_provider: TracerProvider = _load_provider(
+ OTEL_PYTHON_TRACER_PROVIDER, "tracer_provider"
+ )
+ _set_tracer_provider(tracer_provider, log=False)
+ # _TRACER_PROVIDER will have been set by one thread
+ return cast("TracerProvider", _TRACER_PROVIDER)
+
+
+@contextmanager
+def use_span(
+ span: Span,
+ end_on_exit: bool = False,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+) -> Iterator[Span]:
+ """Takes a non-active span and activates it in the current context.
+
+ Args:
+ span: The span that should be activated in the current context.
+ end_on_exit: Whether to end the span automatically when leaving the
+ context manager scope.
+ record_exception: Whether to record any exceptions raised within the
+ context as error event on the span.
+ set_status_on_exception: Only relevant if the returned span is used
+ in a with/context manager. Defines whether the span status will
+ be automatically set to ERROR when an uncaught exception is
+ raised in the span with block. The span status won't be set by
+ this mechanism if it was previously set manually.
+ """
+ try:
+ token = context_api.attach(context_api.set_value(_SPAN_KEY, span))
+ try:
+ yield span
+ finally:
+ context_api.detach(token)
+
+ except Exception as exc: # pylint: disable=broad-except
+ if isinstance(span, Span) and span.is_recording():
+ # Record the exception as an event
+ if record_exception:
+ span.record_exception(exc)
+
+ # Set status in case exception was raised
+ if set_status_on_exception:
+ span.set_status(
+ Status(
+ status_code=StatusCode.ERROR,
+ description=f"{type(exc).__name__}: {exc}",
+ )
+ )
+
+ # This causes parent spans to set their status to ERROR and to record
+ # an exception as an event if a child span raises an exception even if
+ # such child span was started with both record_exception and
+ # set_status_on_exception attributes set to False.
+ raise
+
+ finally:
+ if end_on_exit:
+ span.end()
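+
+
+def _example_use_span(tracer: Tracer) -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # a manually started span activated via use_span and ended automatically
+    # when the with block exits.
+    span = tracer.start_span("operation")
+    with use_span(span, end_on_exit=True):
+        pass  # work runs here with "operation" as the current span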
+
+
+__all__ = [
+ "DEFAULT_TRACE_OPTIONS",
+ "DEFAULT_TRACE_STATE",
+ "INVALID_SPAN",
+ "INVALID_SPAN_CONTEXT",
+ "INVALID_SPAN_ID",
+ "INVALID_TRACE_ID",
+ "NonRecordingSpan",
+ "Link",
+ "Span",
+ "SpanContext",
+ "SpanKind",
+ "TraceFlags",
+ "TraceState",
+ "TracerProvider",
+ "Tracer",
+ "format_span_id",
+ "format_trace_id",
+ "get_current_span",
+ "get_tracer",
+ "get_tracer_provider",
+ "set_tracer_provider",
+ "set_span_in_context",
+ "use_span",
+ "Status",
+ "StatusCode",
+]
diff --git a/opentelemetry-api/src/opentelemetry/trace/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/trace/propagation/__init__.py
new file mode 100644
index 0000000000..d3529e1779
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/trace/propagation/__init__.py
@@ -0,0 +1,51 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import Optional
+
+from opentelemetry.context import create_key, get_value, set_value
+from opentelemetry.context.context import Context
+from opentelemetry.trace.span import INVALID_SPAN, Span
+
+SPAN_KEY = "current-span"
+_SPAN_KEY = create_key("current-span")
+
+
+def set_span_in_context(
+ span: Span, context: Optional[Context] = None
+) -> Context:
+ """Set the span in the given context.
+
+ Args:
+ span: The Span to set.
+        context: a Context object. If one is not passed, the
+            default current context is used instead.
+ """
+ ctx = set_value(_SPAN_KEY, span, context=context)
+ return ctx
+
+
+def get_current_span(context: Optional[Context] = None) -> Span:
+ """Retrieve the current span.
+
+ Args:
+ context: A Context object. If one is not passed, the
+ default current context is used instead.
+
+ Returns:
+ The Span set in the context if it exists. INVALID_SPAN otherwise.
+ """
+ span = get_value(_SPAN_KEY, context=context)
+ if span is None or not isinstance(span, Span):
+ return INVALID_SPAN
+ return span
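+
+
+def _example_span_roundtrip(span: Span) -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # setting a span in a Context and reading it back yields the same object.
+    ctx = set_span_in_context(span)
+    assert get_current_span(ctx) is span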
diff --git a/opentelemetry-api/src/opentelemetry/trace/propagation/tracecontext.py b/opentelemetry-api/src/opentelemetry/trace/propagation/tracecontext.py
new file mode 100644
index 0000000000..af16a08f0b
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/trace/propagation/tracecontext.py
@@ -0,0 +1,118 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import re
+import typing
+
+from opentelemetry import trace
+from opentelemetry.context.context import Context
+from opentelemetry.propagators import textmap
+from opentelemetry.trace import format_span_id, format_trace_id
+from opentelemetry.trace.span import TraceState
+
+
+class TraceContextTextMapPropagator(textmap.TextMapPropagator):
+ """Extracts and injects using w3c TraceContext's headers."""
+
+ _TRACEPARENT_HEADER_NAME = "traceparent"
+ _TRACESTATE_HEADER_NAME = "tracestate"
+ _TRACEPARENT_HEADER_FORMAT = (
+ "^[ \t]*([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})"
+ + "(-.*)?[ \t]*$"
+ )
+ _TRACEPARENT_HEADER_FORMAT_RE = re.compile(_TRACEPARENT_HEADER_FORMAT)
+
+ def extract(
+ self,
+ carrier: textmap.CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: textmap.Getter[textmap.CarrierT] = textmap.default_getter,
+ ) -> Context:
+ """Extracts SpanContext from the carrier.
+
+ See `opentelemetry.propagators.textmap.TextMapPropagator.extract`
+ """
+ if context is None:
+ context = Context()
+
+ header = getter.get(carrier, self._TRACEPARENT_HEADER_NAME)
+
+ if not header:
+ return context
+
+ match = re.search(self._TRACEPARENT_HEADER_FORMAT_RE, header[0])
+ if not match:
+ return context
+
+ version: str = match.group(1)
+ trace_id: str = match.group(2)
+ span_id: str = match.group(3)
+ trace_flags: str = match.group(4)
+
+ if trace_id == "0" * 32 or span_id == "0" * 16:
+ return context
+
+ if version == "00":
+ if match.group(5): # type: ignore
+ return context
+ if version == "ff":
+ return context
+
+ tracestate_headers = getter.get(carrier, self._TRACESTATE_HEADER_NAME)
+ if tracestate_headers is None:
+ tracestate = None
+ else:
+ tracestate = TraceState.from_header(tracestate_headers)
+
+ span_context = trace.SpanContext(
+ trace_id=int(trace_id, 16),
+ span_id=int(span_id, 16),
+ is_remote=True,
+ trace_flags=trace.TraceFlags(int(trace_flags, 16)),
+ trace_state=tracestate,
+ )
+ return trace.set_span_in_context(
+ trace.NonRecordingSpan(span_context), context
+ )
+
+ def inject(
+ self,
+ carrier: textmap.CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: textmap.Setter[textmap.CarrierT] = textmap.default_setter,
+ ) -> None:
+ """Injects SpanContext into the carrier.
+
+ See `opentelemetry.propagators.textmap.TextMapPropagator.inject`
+ """
+ span = trace.get_current_span(context)
+ span_context = span.get_span_context()
+ if span_context == trace.INVALID_SPAN_CONTEXT:
+ return
+ traceparent_string = f"00-{format_trace_id(span_context.trace_id)}-{format_span_id(span_context.span_id)}-{span_context.trace_flags:02x}"
+ setter.set(carrier, self._TRACEPARENT_HEADER_NAME, traceparent_string)
+ if span_context.trace_state:
+ tracestate_string = span_context.trace_state.to_header()
+ setter.set(
+ carrier, self._TRACESTATE_HEADER_NAME, tracestate_string
+ )
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ """Returns a set with the fields set in `inject`.
+
+ See
+ `opentelemetry.propagators.textmap.TextMapPropagator.fields`
+ """
+ return {self._TRACEPARENT_HEADER_NAME, self._TRACESTATE_HEADER_NAME}
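+
+
+def _example_extract() -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # extracting the W3C specification's sample traceparent header from a
+    # plain dict carrier.
+    carrier = {
+        "traceparent": "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
+    }
+    ctx = TraceContextTextMapPropagator().extract(carrier)
+    span_context = trace.get_current_span(ctx).get_span_context()
+    assert format_trace_id(span_context.trace_id) == (
+        "0af7651916cd43dd8448eb211c80319c"
+    )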
diff --git a/opentelemetry-api/src/opentelemetry/trace/span.py b/opentelemetry-api/src/opentelemetry/trace/span.py
new file mode 100644
index 0000000000..805b2b06b1
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/trace/span.py
@@ -0,0 +1,582 @@
+import abc
+import logging
+import re
+import types as python_types
+import typing
+from collections import OrderedDict
+
+from opentelemetry.trace.status import Status, StatusCode
+from opentelemetry.util import types
+
+# The key MUST begin with a lowercase letter or a digit,
+# and can only contain lowercase letters (a-z), digits (0-9),
+# underscores (_), dashes (-), asterisks (*), and forward slashes (/).
+# For multi-tenant vendor scenarios, an at sign (@) can be used to
+# prefix the vendor name. Vendors SHOULD set the tenant ID
+# at the beginning of the key.
+
+# key = ( lcalpha ) 0*255( lcalpha / DIGIT / "_" / "-"/ "*" / "/" )
+# key = ( lcalpha / DIGIT ) 0*240( lcalpha / DIGIT / "_" / "-"/ "*" / "/" ) "@" lcalpha 0*13( lcalpha / DIGIT / "_" / "-"/ "*" / "/" )
+# lcalpha = %x61-7A ; a-z
+
+_KEY_FORMAT = (
+ r"[a-z][_0-9a-z\-\*\/]{0,255}|"
+ r"[a-z0-9][_0-9a-z\-\*\/]{0,240}@[a-z][_0-9a-z\-\*\/]{0,13}"
+)
+_KEY_PATTERN = re.compile(_KEY_FORMAT)
+
+# The value is an opaque string containing up to 256 printable
+# ASCII [RFC0020] characters (i.e., the range 0x20 to 0x7E)
+# except comma (,) and (=).
+# value = 0*255(chr) nblk-chr
+# nblk-chr = %x21-2B / %x2D-3C / %x3E-7E
+# chr = %x20 / nblk-chr
+
+_VALUE_FORMAT = (
+ r"[\x20-\x2b\x2d-\x3c\x3e-\x7e]{0,255}[\x21-\x2b\x2d-\x3c\x3e-\x7e]"
+)
+_VALUE_PATTERN = re.compile(_VALUE_FORMAT)
+
+
+_TRACECONTEXT_MAXIMUM_TRACESTATE_KEYS = 32
+_delimiter_pattern = re.compile(r"[ \t]*,[ \t]*")
+_member_pattern = re.compile(f"({_KEY_FORMAT})(=)({_VALUE_FORMAT})[ \t]*")
+_logger = logging.getLogger(__name__)
+
+
+def _is_valid_pair(key: str, value: str) -> bool:
+
+ return (
+ isinstance(key, str)
+ and _KEY_PATTERN.fullmatch(key) is not None
+ and isinstance(value, str)
+ and _VALUE_PATTERN.fullmatch(value) is not None
+ )
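+
+
+def _example_valid_pairs() -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # the W3C specification's example pair is valid; keys must begin with a
+    # lowercase letter or a digit.
+    assert _is_valid_pair("congo", "t61rcWkgMzE")
+    assert not _is_valid_pair("Congo", "t61rcWkgMzE")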
+
+
+class Span(abc.ABC):
+ """A span represents a single operation within a trace."""
+
+ @abc.abstractmethod
+ def end(self, end_time: typing.Optional[int] = None) -> None:
+ """Sets the current time as the span's end time.
+
+ The span's end time is the wall time at which the operation finished.
+
+ Only the first call to `end` should modify the span, and
+ implementations are free to ignore or raise on further calls.
+ """
+
+ @abc.abstractmethod
+ def get_span_context(self) -> "SpanContext":
+ """Gets the span's SpanContext.
+
+ Get an immutable, serializable identifier for this span that can be
+ used to create new child spans.
+
+ Returns:
+ A :class:`opentelemetry.trace.SpanContext` with a copy of this span's immutable state.
+ """
+
+ @abc.abstractmethod
+ def set_attributes(
+ self, attributes: typing.Dict[str, types.AttributeValue]
+ ) -> None:
+ """Sets Attributes.
+
+        Sets multiple attributes, taking each key and value from the passed dict.
+
+        Note: The behavior of `None`-valued attributes is undefined, so they
+        are strongly discouraged. It is also preferred to set attributes at
+        span creation, instead of calling this method later, since samplers
+        can only consider information already present during span creation.
+ """
+
+ @abc.abstractmethod
+ def set_attribute(self, key: str, value: types.AttributeValue) -> None:
+ """Sets an Attribute.
+
+ Sets a single Attribute with the key and value passed as arguments.
+
+        Note: The behavior of `None`-valued attributes is undefined, so they
+        are strongly discouraged. It is also preferred to set attributes at
+        span creation, instead of calling this method later, since samplers
+        can only consider information already present during span creation.
+ """
+
+ @abc.abstractmethod
+ def add_event(
+ self,
+ name: str,
+ attributes: types.Attributes = None,
+ timestamp: typing.Optional[int] = None,
+ ) -> None:
+ """Adds an `Event`.
+
+ Adds a single `Event` with the name and, optionally, a timestamp and
+ attributes passed as arguments. Implementations should generate a
+ timestamp if the `timestamp` argument is omitted.
+ """
+
+ @abc.abstractmethod
+ def update_name(self, name: str) -> None:
+ """Updates the `Span` name.
+
+ This will override the name provided via :func:`opentelemetry.trace.Tracer.start_span`.
+
+ Upon this update, any sampling behavior based on Span name will depend
+ on the implementation.
+ """
+
+ @abc.abstractmethod
+ def is_recording(self) -> bool:
+ """Returns whether this span will be recorded.
+
+ Returns true if this Span is active and recording information like
+ events with the add_event operation and attributes using set_attribute.
+ """
+
+ @abc.abstractmethod
+ def set_status(
+ self,
+ status: typing.Union[Status, StatusCode],
+ description: typing.Optional[str] = None,
+ ) -> None:
+ """Sets the Status of the Span. If used, this will override the default
+ Span status.
+ """
+
+ @abc.abstractmethod
+ def record_exception(
+ self,
+ exception: Exception,
+ attributes: types.Attributes = None,
+ timestamp: typing.Optional[int] = None,
+ escaped: bool = False,
+ ) -> None:
+ """Records an exception as a span event."""
+
+ def __enter__(self) -> "Span":
+ """Invoked when `Span` is used as a context manager.
+
+ Returns the `Span` itself.
+ """
+ return self
+
+ def __exit__(
+ self,
+ exc_type: typing.Optional[typing.Type[BaseException]],
+ exc_val: typing.Optional[BaseException],
+ exc_tb: typing.Optional[python_types.TracebackType],
+ ) -> None:
+ """Ends context manager and calls `end` on the `Span`."""
+
+ self.end()
+
+
+class TraceFlags(int):
+ """A bitmask that represents options specific to the trace.
+
+ The only supported option is the "sampled" flag (``0x01``). If set, this
+ flag indicates that the trace may have been sampled upstream.
+
+ See the `W3C Trace Context - Traceparent`_ spec for details.
+
+ .. _W3C Trace Context - Traceparent:
+ https://www.w3.org/TR/trace-context/#trace-flags
+ """
+
+ DEFAULT = 0x00
+ SAMPLED = 0x01
+
+ @classmethod
+ def get_default(cls) -> "TraceFlags":
+ return cls(cls.DEFAULT)
+
+ @property
+ def sampled(self) -> bool:
+ return bool(self & TraceFlags.SAMPLED)
+
+
+DEFAULT_TRACE_OPTIONS = TraceFlags.get_default()
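+
+
+def _example_trace_flags() -> None:
+    # Editor's sketch (illustrative only, not part of the original changeset):
+    # TraceFlags is an int bitmask, so the sampled bit can be tested directly.
+    assert TraceFlags(TraceFlags.SAMPLED).sampled
+    assert not TraceFlags.get_default().sampled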
+
+
+class TraceState(typing.Mapping[str, str]):
+ """A list of key-value pairs representing vendor-specific trace info.
+
+ Keys and values are strings of up to 256 printable US-ASCII characters.
+ Implementations should conform to the `W3C Trace Context - Tracestate`_
+ spec, which describes additional restrictions on valid field values.
+
+ .. _W3C Trace Context - Tracestate:
+ https://www.w3.org/TR/trace-context/#tracestate-field
+ """
+
+ def __init__(
+ self,
+ entries: typing.Optional[
+ typing.Sequence[typing.Tuple[str, str]]
+ ] = None,
+ ) -> None:
+ self._dict = OrderedDict() # type: OrderedDict[str, str]
+ if entries is None:
+ return
+ if len(entries) > _TRACECONTEXT_MAXIMUM_TRACESTATE_KEYS:
+ _logger.warning(
+ "There can't be more than %s key/value pairs.",
+ _TRACECONTEXT_MAXIMUM_TRACESTATE_KEYS,
+ )
+ return
+
+ for key, value in entries:
+ if _is_valid_pair(key, value):
+ if key in self._dict:
+ _logger.warning("Duplicate key: %s found.", key)
+ continue
+ self._dict[key] = value
+ else:
+ _logger.warning(
+ "Invalid key/value pair (%s, %s) found.", key, value
+ )
+
+ def __contains__(self, item: object) -> bool:
+ return item in self._dict
+
+ def __getitem__(self, key: str) -> str:
+ return self._dict[key]
+
+ def __iter__(self) -> typing.Iterator[str]:
+ return iter(self._dict)
+
+ def __len__(self) -> int:
+ return len(self._dict)
+
+ def __repr__(self) -> str:
+ pairs = [
+ f"{{key={key}, value={value}}}"
+ for key, value in self._dict.items()
+ ]
+ return str(pairs)
+
+ def add(self, key: str, value: str) -> "TraceState":
+ """Adds a key-value pair to tracestate. The provided pair should
+ adhere to w3c tracestate identifiers format.
+
+ Args:
+ key: A valid tracestate key to add
+ value: A valid tracestate value to add
+
+ Returns:
+ A new TraceState with the modifications applied.
+
+ If the provided key-value pair is invalid or would result in a
+ tracestate that violates the tracecontext specification, it is
+ discarded and the same TraceState is returned.
+ """
+ if not _is_valid_pair(key, value):
+ _logger.warning(
+ "Invalid key/value pair (%s, %s) found.", key, value
+ )
+ return self
+ # There can be a maximum of 32 pairs
+ if len(self) >= _TRACECONTEXT_MAXIMUM_TRACESTATE_KEYS:
+ _logger.warning("There can't be more 32 key/value pairs.")
+ return self
+ # Duplicate entries are not allowed
+ if key in self._dict:
+ _logger.warning("The provided key %s already exists.", key)
+ return self
+ new_state = [(key, value)] + list(self._dict.items())
+ return TraceState(new_state)
+
+ def update(self, key: str, value: str) -> "TraceState":
+ """Updates a key-value pair in tracestate. The provided pair should
+ adhere to w3c tracestate identifiers format.
+
+ Args:
+ key: A valid tracestate key to update
+ value: A valid tracestate value to update for key
+
+ Returns:
+ A new TraceState with the modifications applied.
+
+ If the provided key-value pair is invalid or would result in a
+ tracestate that violates the tracecontext specification, it is
+ discarded and the same TraceState is returned.
+ """
+ if not _is_valid_pair(key, value):
+ _logger.warning(
+ "Invalid key/value pair (%s, %s) found.", key, value
+ )
+ return self
+ prev_state = self._dict.copy()
+ prev_state[key] = value
+ prev_state.move_to_end(key, last=False)
+ new_state = list(prev_state.items())
+ return TraceState(new_state)
+
+ def delete(self, key: str) -> "TraceState":
+ """Deletes a key-value from tracestate.
+
+ Args:
+ key: A valid tracestate key to remove key-value pair from tracestate
+
+ Returns:
+ A new TraceState with the modifications applied.
+
+ If the provided key-value pair is invalid or results in tracestate
+ that violates tracecontext specification, they are discarded and
+ same tracestate will be returned.
+ """
+ if key not in self._dict:
+ _logger.warning("The provided key %s doesn't exist.", key)
+ return self
+ prev_state = self._dict.copy()
+ prev_state.pop(key)
+ new_state = list(prev_state.items())
+ return TraceState(new_state)
+
+ def to_header(self) -> str:
+ """Creates a w3c tracestate header from a TraceState.
+
+ Returns:
+ A string that adheres to the w3c tracestate
+ header format.
+ """
+ return ",".join(key + "=" + value for key, value in self._dict.items())
+
+ @classmethod
+ def from_header(cls, header_list: typing.List[str]) -> "TraceState":
+ """Parses one or more w3c tracestate header into a TraceState.
+
+ Args:
+ header_list: one or more w3c tracestate headers.
+
+ Returns:
+ A valid TraceState that contains values extracted from
+ the tracestate header.
+
+ If the format of any header is invalid, all values will
+ be discarded and an empty tracestate will be returned.
+
+ If the number of keys exceeds the maximum, all values
+ will be discarded and an empty tracestate will be returned.
+ """
+ pairs = OrderedDict() # type: OrderedDict[str, str]
+ for header in header_list:
+ members: typing.List[str] = re.split(_delimiter_pattern, header)
+ for member in members:
+ # empty members are valid, but no need to process further.
+ if not member:
+ continue
+ match = _member_pattern.fullmatch(member)
+ if not match:
+ _logger.warning(
+ "Member doesn't match the w3c identifiers format %s",
+ member,
+ )
+ return cls()
+ groups: typing.Tuple[str, ...] = match.groups()
+ key, _eq, value = groups
+ # duplicate keys are not legal in header
+ if key in pairs:
+ return cls()
+ pairs[key] = value
+ return cls(list(pairs.items()))
+
+ @classmethod
+ def get_default(cls) -> "TraceState":
+ return cls()
+
+ def keys(self) -> typing.KeysView[str]:
+ return self._dict.keys()
+
+ def items(self) -> typing.ItemsView[str, str]:
+ return self._dict.items()
+
+ def values(self) -> typing.ValuesView[str]:
+ return self._dict.values()
+
+
+DEFAULT_TRACE_STATE = TraceState.get_default()
+_TRACE_ID_MAX_VALUE = 2**128 - 1
+_SPAN_ID_MAX_VALUE = 2**64 - 1
+
+
+class SpanContext(
+ typing.Tuple[int, int, bool, "TraceFlags", "TraceState", bool]
+):
+ """The state of a Span to propagate between processes.
+
+ This class includes the immutable attributes of a :class:`.Span` that must
+ be propagated to a span's children and across process boundaries.
+
+ Args:
+ trace_id: The ID of the trace that this span belongs to.
+ span_id: This span's ID.
+ is_remote: True if propagated from a remote parent.
+ trace_flags: Trace options to propagate.
+ trace_state: Tracing-system-specific info to propagate.
+ """
+
+ def __new__(
+ cls,
+ trace_id: int,
+ span_id: int,
+ is_remote: bool,
+ trace_flags: typing.Optional["TraceFlags"] = DEFAULT_TRACE_OPTIONS,
+ trace_state: typing.Optional["TraceState"] = DEFAULT_TRACE_STATE,
+ ) -> "SpanContext":
+ if trace_flags is None:
+ trace_flags = DEFAULT_TRACE_OPTIONS
+ if trace_state is None:
+ trace_state = DEFAULT_TRACE_STATE
+
+ is_valid = (
+ INVALID_TRACE_ID < trace_id <= _TRACE_ID_MAX_VALUE
+ and INVALID_SPAN_ID < span_id <= _SPAN_ID_MAX_VALUE
+ )
+
+ return tuple.__new__(
+ cls,
+ (trace_id, span_id, is_remote, trace_flags, trace_state, is_valid),
+ )
+
+ def __getnewargs__(
+ self,
+ ) -> typing.Tuple[int, int, bool, "TraceFlags", "TraceState"]:
+ return (
+ self.trace_id,
+ self.span_id,
+ self.is_remote,
+ self.trace_flags,
+ self.trace_state,
+ )
+
+ @property
+ def trace_id(self) -> int:
+ return self[0] # pylint: disable=unsubscriptable-object
+
+ @property
+ def span_id(self) -> int:
+ return self[1] # pylint: disable=unsubscriptable-object
+
+ @property
+ def is_remote(self) -> bool:
+ return self[2] # pylint: disable=unsubscriptable-object
+
+ @property
+ def trace_flags(self) -> "TraceFlags":
+ return self[3] # pylint: disable=unsubscriptable-object
+
+ @property
+ def trace_state(self) -> "TraceState":
+ return self[4] # pylint: disable=unsubscriptable-object
+
+ @property
+ def is_valid(self) -> bool:
+ return self[5] # pylint: disable=unsubscriptable-object
+
+ def __setattr__(self, *args: str) -> None:
+ _logger.debug(
+ "Immutable type, ignoring call to set attribute", stack_info=True
+ )
+
+ def __delattr__(self, *args: str) -> None:
+ _logger.debug(
+ "Immutable type, ignoring call to set attribute", stack_info=True
+ )
+
+ def __repr__(self) -> str:
+ return f"{type(self).__name__}(trace_id=0x{format_trace_id(self.trace_id)}, span_id=0x{format_span_id(self.span_id)}, trace_flags=0x{self.trace_flags:02x}, trace_state={self.trace_state!r}, is_remote={self.is_remote})"
+
+
+class NonRecordingSpan(Span):
+ """The Span that is used when no Span implementation is available.
+
+ All operations are no-op except context propagation.
+ """
+
+ def __init__(self, context: "SpanContext") -> None:
+ self._context = context
+
+ def get_span_context(self) -> "SpanContext":
+ return self._context
+
+ def is_recording(self) -> bool:
+ return False
+
+ def end(self, end_time: typing.Optional[int] = None) -> None:
+ pass
+
+ def set_attributes(
+ self, attributes: typing.Dict[str, types.AttributeValue]
+ ) -> None:
+ pass
+
+ def set_attribute(self, key: str, value: types.AttributeValue) -> None:
+ pass
+
+ def add_event(
+ self,
+ name: str,
+ attributes: types.Attributes = None,
+ timestamp: typing.Optional[int] = None,
+ ) -> None:
+ pass
+
+ def update_name(self, name: str) -> None:
+ pass
+
+ def set_status(
+ self,
+ status: typing.Union[Status, StatusCode],
+ description: typing.Optional[str] = None,
+ ) -> None:
+ pass
+
+ def record_exception(
+ self,
+ exception: Exception,
+ attributes: types.Attributes = None,
+ timestamp: typing.Optional[int] = None,
+ escaped: bool = False,
+ ) -> None:
+ pass
+
+ def __repr__(self) -> str:
+ return f"NonRecordingSpan({self._context!r})"
+
+
+INVALID_SPAN_ID = 0x0000000000000000
+INVALID_TRACE_ID = 0x00000000000000000000000000000000
+INVALID_SPAN_CONTEXT = SpanContext(
+ trace_id=INVALID_TRACE_ID,
+ span_id=INVALID_SPAN_ID,
+ is_remote=False,
+ trace_flags=DEFAULT_TRACE_OPTIONS,
+ trace_state=DEFAULT_TRACE_STATE,
+)
+INVALID_SPAN = NonRecordingSpan(INVALID_SPAN_CONTEXT)
+
+
+def format_trace_id(trace_id: int) -> str:
+ """Convenience trace ID formatting method
+ Args:
+ trace_id: Trace ID int
+
+ Returns:
+ The trace ID as 32-byte hexadecimal string
+ """
+ return format(trace_id, "032x")
+
+
+def format_span_id(span_id: int) -> str:
+ """Convenience span ID formatting method
+ Args:
+ span_id: Span ID int
+
+ Returns:
+ The span ID as 16-byte hexadecimal string
+ """
+ return format(span_id, "016x")
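+
+
+# Editor's illustrative sketch (not part of the module): a quick demo of the
+# types defined above. The hex IDs below are arbitrary example values.
+if __name__ == "__main__":
+    # TraceState is immutable: add/update/delete return new instances.
+    state = TraceState([("vendor", "value")])
+    state2 = state.add("other", "abc")
+    assert "other" not in state and "other" in state2
+    print(state2.to_header())  # "other=abc,vendor=value"
+
+    # A SpanContext is valid only when both IDs are non-zero and in range.
+    ctx = SpanContext(
+        trace_id=0x000102030405060708090A0B0C0D0E0F,
+        span_id=0x0001020304050607,
+        is_remote=False,
+        trace_flags=TraceFlags(TraceFlags.SAMPLED),
+    )
+    assert ctx.is_valid and ctx.trace_flags.sampled
+    print(format_trace_id(ctx.trace_id), format_span_id(ctx.span_id))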
diff --git a/opentelemetry-api/src/opentelemetry/trace/status.py b/opentelemetry-api/src/opentelemetry/trace/status.py
new file mode 100644
index 0000000000..ada7fa1ebd
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/trace/status.py
@@ -0,0 +1,82 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import enum
+import logging
+import typing
+
+logger = logging.getLogger(__name__)
+
+
+class StatusCode(enum.Enum):
+ """Represents the canonical set of status codes of a finished Span."""
+
+ UNSET = 0
+ """The default status."""
+
+ OK = 1
+ """The operation has been validated by an Application developer or Operator to have completed successfully."""
+
+ ERROR = 2
+ """The operation contains an error."""
+
+
+class Status:
+ """Represents the status of a finished Span.
+
+ Args:
+ status_code: The canonical status code that describes the result
+ status of the operation.
+ description: An optional description of the status.
+ """
+
+ def __init__(
+ self,
+ status_code: StatusCode = StatusCode.UNSET,
+ description: typing.Optional[str] = None,
+ ):
+ self._status_code = status_code
+ self._description = None
+
+ if description:
+ if not isinstance(description, str):
+ logger.warning("Invalid status description type, expected str")
+ return
+ if status_code is not StatusCode.ERROR:
+ logger.warning(
+ "description should only be set when status_code is set to StatusCode.ERROR"
+ )
+ return
+
+ self._description = description
+
+ @property
+ def status_code(self) -> StatusCode:
+ """Represents the canonical status code of a finished Span."""
+ return self._status_code
+
+ @property
+ def description(self) -> typing.Optional[str]:
+ """Status description"""
+ return self._description
+
+ @property
+ def is_ok(self) -> bool:
+ """Returns false if this represents an error, true otherwise."""
+ return self.is_unset or self._status_code is StatusCode.OK
+
+ @property
+ def is_unset(self) -> bool:
+ """Returns true if unset, false otherwise."""
+ return self._status_code is StatusCode.UNSET
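+
+
+# Editor's illustrative sketch (not part of the module): a description is
+# only kept when the status code is ERROR; otherwise it is dropped with a
+# warning, and is_ok reflects the UNSET/OK vs. ERROR distinction.
+if __name__ == "__main__":
+    ok = Status(StatusCode.OK, description="dropped for non-ERROR codes")
+    assert ok.is_ok and ok.description is None
+
+    err = Status(StatusCode.ERROR, description="connection refused")
+    assert not err.is_ok and err.description == "connection refused"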
diff --git a/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py
new file mode 100644
index 0000000000..cbf09f3ef8
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py
@@ -0,0 +1,29 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# FIXME: Use importlib.metadata when support for 3.11 is dropped if the rest of
+# the supported versions at that time have the same API.
+from importlib_metadata import ( # type: ignore
+ EntryPoint,
+ EntryPoints,
+ entry_points,
+ version,
+)
+
+# The importlib-metadata library has previously introduced breaking changes
+# to its API; this module is kept as a layer between the importlib-metadata
+# library and our project, in case it ever becomes necessary to insulate
+# against such changes.
+
+__all__ = ["entry_points", "version", "EntryPoint", "EntryPoints"]
diff --git a/opentelemetry-api/src/opentelemetry/util/_once.py b/opentelemetry-api/src/opentelemetry/util/_once.py
new file mode 100644
index 0000000000..c0cee43a17
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/util/_once.py
@@ -0,0 +1,47 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from threading import Lock
+from typing import Callable
+
+
+class Once:
+ """Execute a function exactly once and block all callers until the function returns
+
+ Same as Go's `sync.Once <https://pkg.go.dev/sync#Once>`_.
+ """
+
+ def __init__(self) -> None:
+ self._lock = Lock()
+ self._done = False
+
+ def do_once(self, func: Callable[[], None]) -> bool:
+ """Execute ``func`` if it hasn't been executed or return.
+
+ Will block until ``func`` has been called by one thread.
+
+ Returns:
+ Whether or not ``func`` was executed in this call
+ """
+
+ # fast path, try to avoid locking
+ if self._done:
+ return False
+
+ with self._lock:
+ if not self._done:
+ func()
+ self._done = True
+ return True
+ return False
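+
+
+# Editor's illustrative sketch (not part of the module): several threads race
+# to run the same initializer, but do_once executes it exactly once.
+if __name__ == "__main__":
+    from threading import Thread
+
+    once = Once()
+    calls = []
+
+    def _init() -> None:
+        calls.append(1)
+
+    threads = [Thread(target=once.do_once, args=(_init,)) for _ in range(8)]
+    for thread in threads:
+        thread.start()
+    for thread in threads:
+        thread.join()
+    assert calls == [1]  # _init ran exactly once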
diff --git a/opentelemetry-api/src/opentelemetry/util/_providers.py b/opentelemetry-api/src/opentelemetry/util/_providers.py
new file mode 100644
index 0000000000..d255ac999f
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/util/_providers.py
@@ -0,0 +1,54 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+from os import environ
+from typing import TYPE_CHECKING, TypeVar, cast
+
+from opentelemetry.util._importlib_metadata import entry_points
+
+if TYPE_CHECKING:
+ from opentelemetry.metrics import MeterProvider
+ from opentelemetry.trace import TracerProvider
+
+Provider = TypeVar("Provider", "TracerProvider", "MeterProvider")
+
+logger = getLogger(__name__)
+
+
+def _load_provider(
+ provider_environment_variable: str, provider: str
+) -> Provider:
+
+ try:
+
+ provider_name = cast(
+ str,
+ environ.get(provider_environment_variable, f"default_{provider}"),
+ )
+
+ return cast(
+ Provider,
+ next( # type: ignore
+ iter( # type: ignore
+ entry_points( # type: ignore
+ group=f"opentelemetry_{provider}",
+ name=provider_name,
+ )
+ )
+ ).load()(),
+ )
+ except Exception: # pylint: disable=broad-except
+ logger.exception("Failed to load configured provider %s", provider)
+ raise
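+
+
+# Editor's illustrative sketch of a hypothetical call site: the provider is
+# resolved from an entry point named by an environment variable, e.g.
+#
+#     tracer_provider = _load_provider(
+#         "OTEL_PYTHON_TRACER_PROVIDER", "tracer_provider"
+#     )
+#
+# which searches the "opentelemetry_tracer_provider" entry-point group and
+# falls back to the entry point named "default_tracer_provider" when the
+# environment variable is unset.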
diff --git a/opentelemetry-api/src/opentelemetry/util/re.py b/opentelemetry-api/src/opentelemetry/util/re.py
new file mode 100644
index 0000000000..5f19521d04
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/util/re.py
@@ -0,0 +1,78 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+from re import compile, split
+from typing import Dict, List, Mapping
+from urllib.parse import unquote
+
+from deprecated import deprecated
+
+_logger = getLogger(__name__)
+
+# The following regexes reference this spec: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md#specifying-headers-via-environment-variables
+
+# Optional whitespace
+_OWS = r"[ \t]*"
+# A key contains printable US-ASCII characters except: SP and "(),/:;<=>?@[\]{}
+_KEY_FORMAT = (
+ r"[\x21\x23-\x27\x2a\x2b\x2d\x2e\x30-\x39\x41-\x5a\x5e-\x7a\x7c\x7e]+"
+)
+# A value contains a URL-encoded UTF-8 string. The encoded form can contain any
+# printable US-ASCII characters (0x20-0x7f) other than SP, DEL, and ",;/
+_VALUE_FORMAT = r"[\x21\x23-\x2b\x2d-\x3a\x3c-\x5b\x5d-\x7e]*"
+# A key-value is key=value, with optional whitespace surrounding key and value
+_KEY_VALUE_FORMAT = rf"{_OWS}{_KEY_FORMAT}{_OWS}={_OWS}{_VALUE_FORMAT}{_OWS}"
+
+_HEADER_PATTERN = compile(_KEY_VALUE_FORMAT)
+_DELIMITER_PATTERN = compile(r"[ \t]*,[ \t]*")
+
+_BAGGAGE_PROPERTY_FORMAT = rf"{_KEY_VALUE_FORMAT}|{_OWS}{_KEY_FORMAT}{_OWS}"
+
+
+# pylint: disable=invalid-name
+
+
+@deprecated(version="1.15.0", reason="You should use parse_env_headers") # type: ignore
+def parse_headers(s: str) -> Mapping[str, str]:
+ return parse_env_headers(s)
+
+
+def parse_env_headers(s: str) -> Mapping[str, str]:
+ """
+ Parse ``s``, which is a ``str`` instance containing HTTP headers encoded
+ for use in ENV variables per the W3C Baggage HTTP header format at
+ https://www.w3.org/TR/baggage/#baggage-http-header-format, except that
+ additional semi-colon delimited metadata is not supported.
+ """
+ headers: Dict[str, str] = {}
+ headers_list: List[str] = split(_DELIMITER_PATTERN, s)
+ for header in headers_list:
+ if not header: # empty string
+ continue
+ match = _HEADER_PATTERN.fullmatch(header.strip())
+ if not match:
+ _logger.warning(
+ "Header format invalid! Header values in environment variables must be "
+ "URL encoded per the OpenTelemetry Protocol Exporter specification: %s",
+ header,
+ )
+ continue
+ # value may contain any number of `=`
+ name, value = match.string.split("=", 1)
+ name = unquote(name).strip().lower()
+ value = unquote(value).strip()
+ headers[name] = value
+
+ return headers
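+
+
+# Editor's illustrative sketch (not part of the module): parsing a headers
+# string of the shape used by e.g. OTEL_EXPORTER_OTLP_HEADERS.
+if __name__ == "__main__":
+    parsed = parse_env_headers("api-key=secret,other=url%20encoded%20value")
+    assert parsed == {"api-key": "secret", "other": "url encoded value"}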
diff --git a/opentelemetry-api/src/opentelemetry/util/types.py b/opentelemetry-api/src/opentelemetry/util/types.py
new file mode 100644
index 0000000000..be171ef0ea
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/util/types.py
@@ -0,0 +1,44 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from typing import Mapping, Optional, Sequence, Tuple, Union
+
+AttributeValue = Union[
+ str,
+ bool,
+ int,
+ float,
+ Sequence[str],
+ Sequence[bool],
+ Sequence[int],
+ Sequence[float],
+]
+Attributes = Optional[Mapping[str, AttributeValue]]
+AttributesAsKey = Tuple[
+ Tuple[
+ str,
+ Union[
+ str,
+ bool,
+ int,
+ float,
+ Tuple[Optional[str], ...],
+ Tuple[Optional[bool], ...],
+ Tuple[Optional[int], ...],
+ Tuple[Optional[float], ...],
+ ],
+ ],
+ ...,
+]
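+
+# Editor's illustrative sketch: a mapping that satisfies the Attributes type,
+# i.e. scalar values or homogeneous sequences of a single scalar type.
+_EXAMPLE_ATTRIBUTES: Attributes = {
+    "http.method": "GET",
+    "http.status_code": 200,
+    "retry": False,
+    "durations_ms": (1.2, 3.4),
+}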
diff --git a/opentelemetry-api/src/opentelemetry/version.py b/opentelemetry-api/src/opentelemetry/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/opentelemetry-api/src/opentelemetry/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/opentelemetry-api/tests/__init__.py b/opentelemetry-api/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-api/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-api/tests/attributes/test_attributes.py b/opentelemetry-api/tests/attributes/test_attributes.py
new file mode 100644
index 0000000000..121dec3d25
--- /dev/null
+++ b/opentelemetry-api/tests/attributes/test_attributes.py
@@ -0,0 +1,185 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import collections
+import unittest
+from typing import MutableSequence
+
+from opentelemetry.attributes import BoundedAttributes, _clean_attribute
+
+
+class TestAttributes(unittest.TestCase):
+ def assertValid(self, value, key="k"):
+ expected = value
+ if isinstance(value, MutableSequence):
+ expected = tuple(value)
+ self.assertEqual(_clean_attribute(key, value, None), expected)
+
+ def assertInvalid(self, value, key="k"):
+ self.assertIsNone(_clean_attribute(key, value, None))
+
+ def test_attribute_key_validation(self):
+ # only non-empty strings are valid keys
+ self.assertInvalid(1, "")
+ self.assertInvalid(1, 1)
+ self.assertInvalid(1, {})
+ self.assertInvalid(1, [])
+ self.assertInvalid(1, b"1")
+ self.assertValid(1, "k")
+ self.assertValid(1, "1")
+
+ def test_clean_attribute(self):
+ self.assertInvalid([1, 2, 3.4, "ss", 4])
+ self.assertInvalid([{}, 1, 2, 3.4, 4])
+ self.assertInvalid(["sw", "lf", 3.4, "ss"])
+ self.assertInvalid([1, 2, 3.4, 5])
+ self.assertInvalid({})
+ self.assertInvalid([1, True])
+ self.assertValid(True)
+ self.assertValid("hi")
+ self.assertValid(3.4)
+ self.assertValid(15)
+ self.assertValid([1, 2, 3, 5])
+ self.assertValid([1.2, 2.3, 3.4, 4.5])
+ self.assertValid([True, False])
+ self.assertValid(["ss", "dw", "fw"])
+ self.assertValid([])
+ # None in sequences are valid
+ self.assertValid(["A", None, None])
+ self.assertValid(["A", None, None, "B"])
+ self.assertValid([None, None])
+ self.assertInvalid(["A", None, 1])
+ self.assertInvalid([None, "A", None, 1])
+
+ # test keys
+ self.assertValid("value", "key")
+ self.assertInvalid("value", "")
+ self.assertInvalid("value", None)
+
+ def test_sequence_attr_decode(self):
+ seq = [
+ None,
+ b"Content-Disposition",
+ b"Content-Type",
+ b"\x81",
+ b"Keep-Alive",
+ ]
+ expected = [
+ None,
+ "Content-Disposition",
+ "Content-Type",
+ None,
+ "Keep-Alive",
+ ]
+ self.assertEqual(
+ _clean_attribute("headers", seq, None), tuple(expected)
+ )
+
+
+class TestBoundedAttributes(unittest.TestCase):
+ base = collections.OrderedDict(
+ [
+ ("name", "Firulais"),
+ ("age", 7),
+ ("weight", 13),
+ ("vaccinated", True),
+ ]
+ )
+
+ def test_negative_maxlen(self):
+ with self.assertRaises(ValueError):
+ BoundedAttributes(-1)
+
+ def test_from_map(self):
+ dic_len = len(self.base)
+ base_copy = collections.OrderedDict(self.base)
+ bdict = BoundedAttributes(dic_len, base_copy)
+
+ self.assertEqual(len(bdict), dic_len)
+
+ # modify base_copy and test that bdict is not changed
+ base_copy["name"] = "Bruno"
+ base_copy["age"] = 3
+
+ for key in self.base:
+ self.assertEqual(bdict[key], self.base[key])
+
+ # test that iter yields the correct number of elements
+ self.assertEqual(len(tuple(bdict)), dic_len)
+
+ # map too big
+ half_len = dic_len // 2
+ bdict = BoundedAttributes(half_len, self.base)
+ self.assertEqual(len(tuple(bdict)), half_len)
+ self.assertEqual(bdict.dropped, dic_len - half_len)
+
+ def test_bounded_dict(self):
+ # create empty dict
+ dic_len = len(self.base)
+ bdict = BoundedAttributes(dic_len, immutable=False)
+ self.assertEqual(len(bdict), 0)
+
+ # fill dict
+ for key in self.base:
+ bdict[key] = self.base[key]
+
+ self.assertEqual(len(bdict), dic_len)
+ self.assertEqual(bdict.dropped, 0)
+
+ for key in self.base:
+ self.assertEqual(bdict[key], self.base[key])
+
+ # test __iter__ in BoundedAttributes
+ for key in bdict:
+ self.assertEqual(bdict[key], self.base[key])
+
+ # updating an existing element should not drop
+ bdict["name"] = "Bruno"
+ self.assertEqual(bdict.dropped, 0)
+
+ # try to append more elements
+ for key in self.base:
+ bdict["new-" + key] = self.base[key]
+
+ self.assertEqual(len(bdict), dic_len)
+ self.assertEqual(bdict.dropped, dic_len)
+ # Invalid values shouldn't be considered for `dropped`
+ bdict["invalid-seq"] = [None, 1, "2"]
+ self.assertEqual(bdict.dropped, dic_len)
+
+ # test that elements in the dict are the new ones
+ for key in self.base:
+ self.assertEqual(bdict["new-" + key], self.base[key])
+
+ # delete an element
+ del bdict["new-name"]
+ self.assertEqual(len(bdict), dic_len - 1)
+
+ with self.assertRaises(KeyError):
+ _ = bdict["new-name"]
+
+ def test_no_limit_code(self):
+ bdict = BoundedAttributes(maxlen=None, immutable=False)
+ for num in range(100):
+ bdict[str(num)] = num
+
+ for num in range(100):
+ self.assertEqual(bdict[str(num)], num)
+
+ def test_immutable(self):
+ bdict = BoundedAttributes()
+ with self.assertRaises(TypeError):
+ bdict["should-not-work"] = "dict immutable"
diff --git a/opentelemetry-api/tests/baggage/propagation/test_propagation.py b/opentelemetry-api/tests/baggage/propagation/test_propagation.py
new file mode 100644
index 0000000000..b9de7f37b3
--- /dev/null
+++ b/opentelemetry-api/tests/baggage/propagation/test_propagation.py
@@ -0,0 +1,39 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# type: ignore
+
+from unittest import TestCase
+
+from opentelemetry.baggage import get_baggage, set_baggage
+from opentelemetry.baggage.propagation import W3CBaggagePropagator
+
+
+class TestBaggageManager(TestCase):
+ def test_propagate_baggage(self):
+ carrier = {}
+ propagator = W3CBaggagePropagator()
+
+ ctx = set_baggage("Test1", "value1")
+ ctx = set_baggage("test2", "value2", context=ctx)
+
+ propagator.inject(carrier, ctx)
+ ctx_propagated = propagator.extract(carrier)
+
+ self.assertEqual(
+ get_baggage("Test1", context=ctx_propagated), "value1"
+ )
+ self.assertEqual(
+ get_baggage("test2", context=ctx_propagated), "value2"
+ )
diff --git a/opentelemetry-api/tests/baggage/test_baggage.py b/opentelemetry-api/tests/baggage/test_baggage.py
new file mode 100644
index 0000000000..5eb73d53dc
--- /dev/null
+++ b/opentelemetry-api/tests/baggage/test_baggage.py
@@ -0,0 +1,83 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+from unittest import TestCase
+
+from opentelemetry.baggage import (
+ _is_valid_value,
+ clear,
+ get_all,
+ get_baggage,
+ remove_baggage,
+ set_baggage,
+)
+from opentelemetry.context import attach, detach
+
+
+class TestBaggageManager(TestCase):
+ def test_set_baggage(self):
+ self.assertEqual({}, get_all())
+
+ ctx = set_baggage("test", "value")
+ self.assertEqual(get_baggage("test", context=ctx), "value")
+
+ ctx = set_baggage("test", "value2", context=ctx)
+ self.assertEqual(get_baggage("test", context=ctx), "value2")
+
+ def test_baggages_current_context(self):
+ token = attach(set_baggage("test", "value"))
+ self.assertEqual(get_baggage("test"), "value")
+ detach(token)
+ self.assertEqual(get_baggage("test"), None)
+
+ def test_set_multiple_baggage_entries(self):
+ ctx = set_baggage("test", "value")
+ ctx = set_baggage("test2", "value2", context=ctx)
+ self.assertEqual(get_baggage("test", context=ctx), "value")
+ self.assertEqual(get_baggage("test2", context=ctx), "value2")
+ self.assertEqual(
+ get_all(context=ctx),
+ {"test": "value", "test2": "value2"},
+ )
+
+ def test_modifying_baggage(self):
+ ctx = set_baggage("test", "value")
+ self.assertEqual(get_baggage("test", context=ctx), "value")
+ baggage_entries = get_all(context=ctx)
+ with self.assertRaises(TypeError):
+ baggage_entries["test"] = "mess-this-up"
+ self.assertEqual(get_baggage("test", context=ctx), "value")
+
+ def test_remove_baggage_entry(self):
+ self.assertEqual({}, get_all())
+
+ ctx = set_baggage("test", "value")
+ ctx = set_baggage("test2", "value2", context=ctx)
+ ctx = remove_baggage("test", context=ctx)
+ self.assertEqual(get_baggage("test", context=ctx), None)
+ self.assertEqual(get_baggage("test2", context=ctx), "value2")
+
+ def test_clear_baggage(self):
+ self.assertEqual({}, get_all())
+
+ ctx = set_baggage("test", "value")
+ self.assertEqual(get_baggage("test", context=ctx), "value")
+
+ ctx = clear(context=ctx)
+ self.assertEqual(get_all(context=ctx), {})
+
+ def test__is_valid_value(self):
+ self.assertTrue(_is_valid_value("GET%20%2Fapi%2F%2Freport"))
diff --git a/opentelemetry-api/tests/context/__init__.py b/opentelemetry-api/tests/context/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-api/tests/context/base_context.py b/opentelemetry-api/tests/context/base_context.py
new file mode 100644
index 0000000000..05acc95d89
--- /dev/null
+++ b/opentelemetry-api/tests/context/base_context.py
@@ -0,0 +1,77 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from logging import ERROR
+
+from opentelemetry import context
+
+
+def do_work() -> None:
+ context.attach(context.set_value("say", "bar"))
+
+
+class ContextTestCases:
+ class BaseTest(unittest.TestCase):
+ def setUp(self) -> None:
+ self.previous_context = context.get_current()
+
+ def tearDown(self) -> None:
+ context.attach(self.previous_context)
+
+ def test_context(self):
+ self.assertIsNone(context.get_value("say"))
+ empty = context.get_current()
+ second = context.set_value("say", "foo")
+
+ self.assertEqual(context.get_value("say", context=second), "foo")
+
+ do_work()
+ self.assertEqual(context.get_value("say"), "bar")
+ third = context.get_current()
+
+ self.assertIsNone(context.get_value("say", context=empty))
+ self.assertEqual(context.get_value("say", context=second), "foo")
+ self.assertEqual(context.get_value("say", context=third), "bar")
+
+ def test_set_value(self):
+ first = context.set_value("a", "yyy")
+ second = context.set_value("a", "zzz")
+ third = context.set_value("a", "---", first)
+ self.assertEqual("yyy", context.get_value("a", context=first))
+ self.assertEqual("zzz", context.get_value("a", context=second))
+ self.assertEqual("---", context.get_value("a", context=third))
+ self.assertEqual(None, context.get_value("a"))
+
+ def test_attach(self):
+ context.attach(context.set_value("a", "yyy"))
+
+ token = context.attach(context.set_value("a", "zzz"))
+ self.assertEqual("zzz", context.get_value("a"))
+
+ context.detach(token)
+ self.assertEqual("yyy", context.get_value("a"))
+
+ with self.assertLogs(level=ERROR):
+ context.detach("some garbage")
+
+ def test_detach_out_of_order(self):
+ t1 = context.attach(context.set_value("c", 1))
+ self.assertEqual(context.get_current(), {"c": 1})
+ t2 = context.attach(context.set_value("c", 2))
+ self.assertEqual(context.get_current(), {"c": 2})
+ context.detach(t1)
+ self.assertEqual(context.get_current(), {})
+ context.detach(t2)
+ self.assertEqual(context.get_current(), {"c": 1})
diff --git a/opentelemetry-api/tests/context/propagation/__init__.py b/opentelemetry-api/tests/context/propagation/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-api/tests/context/test_context.py b/opentelemetry-api/tests/context/test_context.py
new file mode 100644
index 0000000000..b90c011b99
--- /dev/null
+++ b/opentelemetry-api/tests/context/test_context.py
@@ -0,0 +1,76 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry import context
+from opentelemetry.context.context import Context
+
+
+def _do_work() -> str:
+ key = context.create_key("say")
+ context.attach(context.set_value(key, "bar"))
+ return key
+
+
+class TestContext(unittest.TestCase):
+ def setUp(self):
+ context.attach(Context())
+
+ def test_context_key(self):
+ key1 = context.create_key("say")
+ key2 = context.create_key("say")
+ self.assertNotEqual(key1, key2)
+ first = context.set_value(key1, "foo")
+ second = context.set_value(key2, "bar")
+ self.assertEqual(context.get_value(key1, context=first), "foo")
+ self.assertEqual(context.get_value(key2, context=second), "bar")
+
+ def test_context(self):
+ key1 = context.create_key("say")
+ self.assertIsNone(context.get_value(key1))
+ empty = context.get_current()
+ second = context.set_value(key1, "foo")
+ self.assertEqual(context.get_value(key1, context=second), "foo")
+
+ key2 = _do_work()
+ self.assertEqual(context.get_value(key2), "bar")
+ third = context.get_current()
+
+ self.assertIsNone(context.get_value(key1, context=empty))
+ self.assertEqual(context.get_value(key1, context=second), "foo")
+ self.assertEqual(context.get_value(key2, context=third), "bar")
+
+ def test_set_value(self):
+ first = context.set_value("a", "yyy")
+ second = context.set_value("a", "zzz")
+ third = context.set_value("a", "---", first)
+ self.assertEqual("yyy", context.get_value("a", context=first))
+ self.assertEqual("zzz", context.get_value("a", context=second))
+ self.assertEqual("---", context.get_value("a", context=third))
+ self.assertEqual(None, context.get_value("a"))
+
+ def test_context_is_immutable(self):
+ with self.assertRaises(ValueError):
+ # ensure the current context cannot be modified in place
+ context.get_current()["test"] = "cant-change-immutable"
+
+ def test_set_current(self):
+ context.attach(context.set_value("a", "yyy"))
+
+ token = context.attach(context.set_value("a", "zzz"))
+ self.assertEqual("zzz", context.get_value("a"))
+
+ context.detach(token)
+ self.assertEqual("yyy", context.get_value("a"))
diff --git a/opentelemetry-api/tests/context/test_contextvars_context.py b/opentelemetry-api/tests/context/test_contextvars_context.py
new file mode 100644
index 0000000000..e9af3107d8
--- /dev/null
+++ b/opentelemetry-api/tests/context/test_contextvars_context.py
@@ -0,0 +1,38 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest.mock import patch
+
+from opentelemetry import context
+from opentelemetry.context.contextvars_context import ContextVarsRuntimeContext
+
+# pylint: disable=import-error,no-name-in-module
+from tests.context.base_context import ContextTestCases
+
+
+class TestContextVarsContext(ContextTestCases.BaseTest):
+ # pylint: disable=invalid-name
+ def setUp(self) -> None:
+ super().setUp()
+ self.mock_runtime = patch.object(
+ context,
+ "_RUNTIME_CONTEXT",
+ ContextVarsRuntimeContext(),
+ )
+ self.mock_runtime.start()
+
+ # pylint: disable=invalid-name
+ def tearDown(self) -> None:
+ super().tearDown()
+ self.mock_runtime.stop()
diff --git a/opentelemetry-api/tests/distributedcontext/__init__.py b/opentelemetry-api/tests/distributedcontext/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-api/tests/distributedcontext/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-api/tests/logs/test_log_record.py b/opentelemetry-api/tests/logs/test_log_record.py
new file mode 100644
index 0000000000..a06ed8dabf
--- /dev/null
+++ b/opentelemetry-api/tests/logs/test_log_record.py
@@ -0,0 +1,27 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from unittest.mock import patch
+
+from opentelemetry._logs import LogRecord
+
+OBSERVED_TIMESTAMP = "OBSERVED_TIMESTAMP"
+
+
+class TestLogRecord(unittest.TestCase):
+ @patch("opentelemetry._logs._internal.time_ns")
+ def test_log_record_observed_timestamp_default(self, time_ns_mock): # type: ignore
+ time_ns_mock.return_value = OBSERVED_TIMESTAMP
+ self.assertEqual(LogRecord().observed_timestamp, OBSERVED_TIMESTAMP)
diff --git a/opentelemetry-api/tests/logs/test_logger_provider.py b/opentelemetry-api/tests/logs/test_logger_provider.py
new file mode 100644
index 0000000000..5943924bd8
--- /dev/null
+++ b/opentelemetry-api/tests/logs/test_logger_provider.py
@@ -0,0 +1,62 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type:ignore
+import unittest
+from unittest.mock import Mock, patch
+
+import opentelemetry._logs._internal as logs_internal
+from opentelemetry._logs import get_logger_provider, set_logger_provider
+from opentelemetry.environment_variables import _OTEL_PYTHON_LOGGER_PROVIDER
+from opentelemetry.test.globals_test import reset_logging_globals
+
+
+class TestGlobals(unittest.TestCase):
+ def setUp(self):
+ super().setUp()
+ reset_logging_globals()
+
+ def tearDown(self):
+ super().tearDown()
+ reset_logging_globals()
+
+ def test_set_logger_provider(self):
+ lp_mock = Mock()
+ # pylint: disable=protected-access
+ assert logs_internal._LOGGER_PROVIDER is None
+ set_logger_provider(lp_mock)
+ assert logs_internal._LOGGER_PROVIDER is lp_mock
+ assert get_logger_provider() is lp_mock
+
+ def test_get_logger_provider(self):
+ # pylint: disable=protected-access
+ assert logs_internal._LOGGER_PROVIDER is None
+
+ assert isinstance(
+ get_logger_provider(), logs_internal.NoOpLoggerProvider
+ )
+
+ logs_internal._LOGGER_PROVIDER = None
+
+ with patch.dict(
+ "os.environ",
+ {_OTEL_PYTHON_LOGGER_PROVIDER: "test_logger_provider"},
+ ):
+
+ with patch("opentelemetry._logs._internal._load_provider", Mock()):
+ with patch(
+ "opentelemetry._logs._internal.cast",
+ Mock(**{"return_value": "test_logger_provider"}),
+ ):
+ assert get_logger_provider() == "test_logger_provider"
diff --git a/opentelemetry-api/tests/metrics/test_instruments.py b/opentelemetry-api/tests/metrics/test_instruments.py
new file mode 100644
index 0000000000..e66460de35
--- /dev/null
+++ b/opentelemetry-api/tests/metrics/test_instruments.py
@@ -0,0 +1,681 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+from inspect import Signature, isabstract, signature
+from unittest import TestCase
+
+from opentelemetry.metrics import (
+ Counter,
+ Histogram,
+ Instrument,
+ Meter,
+ NoOpCounter,
+ NoOpHistogram,
+ NoOpMeter,
+ NoOpUpDownCounter,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+
+# FIXME Test that the instrument methods can be called concurrently safely.
+
+
+class ChildInstrument(Instrument):
+ def __init__(self, name, *args, unit="", description="", **kwargs):
+ super().__init__(
+ name, *args, unit=unit, description=description, **kwargs
+ )
+
+
+class TestCounter(TestCase):
+ def test_create_counter(self):
+ """
+ Test that the Counter can be created with create_counter.
+ """
+
+ self.assertTrue(
+ isinstance(NoOpMeter("name").create_counter("name"), Counter)
+ )
+
+ def test_api_counter_abstract(self):
+ """
+ Test that the API Counter is an abstract class.
+ """
+
+ self.assertTrue(isabstract(Counter))
+
+ def test_create_counter_api(self):
+ """
+ Test that the API for creating a counter accepts the name of the instrument.
+ Test that the API for creating a counter accepts the unit of the instrument.
+ Test that the API for creating a counter accepts the description of the instrument.
+ """
+
+ create_counter_signature = signature(Meter.create_counter)
+ self.assertIn("name", create_counter_signature.parameters.keys())
+ self.assertIs(
+ create_counter_signature.parameters["name"].default,
+ Signature.empty,
+ )
+
+ create_counter_signature = signature(Meter.create_counter)
+ self.assertIn("unit", create_counter_signature.parameters.keys())
+ self.assertIs(create_counter_signature.parameters["unit"].default, "")
+
+ create_counter_signature = signature(Meter.create_counter)
+ self.assertIn(
+ "description", create_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_counter_signature.parameters["description"].default, ""
+ )
+
+ def test_counter_add_method(self):
+ """
+ Test that the counter has an add method.
+ Test that the add method returns None.
+ Test that the add method accepts optional attributes.
+ Test that the add method accepts the increment amount.
+ Test that the add method accepts only positive amounts.
+ """
+
+ self.assertTrue(hasattr(Counter, "add"))
+
+ self.assertIsNone(NoOpCounter("name").add(1))
+
+ add_signature = signature(Counter.add)
+ self.assertIn("attributes", add_signature.parameters.keys())
+ self.assertIs(add_signature.parameters["attributes"].default, None)
+
+ self.assertIn("amount", add_signature.parameters.keys())
+ self.assertIs(
+ add_signature.parameters["amount"].default, Signature.empty
+ )
+
+
+class TestObservableCounter(TestCase):
+ def test_create_observable_counter(self):
+ """
+ Test that the ObservableCounter can be created with create_observable_counter.
+ """
+
+ def callback():
+ yield
+
+ self.assertTrue(
+ isinstance(
+ NoOpMeter("name").create_observable_counter(
+ "name", callbacks=[callback()]
+ ),
+ ObservableCounter,
+ )
+ )
+
+ def test_api_observable_counter_abstract(self):
+ """
+ Test that the API ObservableCounter is an abstract class.
+ """
+
+ self.assertTrue(isabstract(ObservableCounter))
+
+ def test_create_observable_counter_api(self):
+ """
+ Test that the API for creating an observable_counter accepts the name of the instrument.
+ Test that the API for creating an observable_counter accepts a sequence of callbacks.
+ Test that the API for creating an observable_counter accepts the unit of the instrument.
+ Test that the API for creating an observable_counter accepts the description of the instrument.
+ """
+
+ create_observable_counter_signature = signature(
+ Meter.create_observable_counter
+ )
+ self.assertIn(
+ "name", create_observable_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_counter_signature.parameters["name"].default,
+ Signature.empty,
+ )
+ create_observable_counter_signature = signature(
+ Meter.create_observable_counter
+ )
+ self.assertIn(
+ "callbacks", create_observable_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_counter_signature.parameters[
+ "callbacks"
+ ].default,
+ None,
+ )
+ create_observable_counter_signature = signature(
+ Meter.create_observable_counter
+ )
+ self.assertIn(
+ "unit", create_observable_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_counter_signature.parameters["unit"].default, ""
+ )
+
+ create_observable_counter_signature = signature(
+ Meter.create_observable_counter
+ )
+ self.assertIn(
+ "description",
+ create_observable_counter_signature.parameters.keys(),
+ )
+ self.assertIs(
+ create_observable_counter_signature.parameters[
+ "description"
+ ].default,
+ "",
+ )
+
+ def test_observable_counter_generator(self):
+ """
+ Test that the API for creating an asynchronous counter accepts a generator.
+ Test that the generator function reports an iterable of measurements.
+ Test that there is a way to pass state to the generator.
+ Test that the instrument accepts positive measurements.
+ Test that the instrument does not accept negative measurements.
+ """
+
+ create_observable_counter_signature = signature(
+ Meter.create_observable_counter
+ )
+ self.assertIn(
+ "callbacks", create_observable_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_counter_signature.parameters["name"].default,
+ Signature.empty,
+ )
+
+
+class TestHistogram(TestCase):
+ def test_create_histogram(self):
+ """
+ Test that the Histogram can be created with create_histogram.
+ """
+
+ self.assertTrue(
+ isinstance(NoOpMeter("name").create_histogram("name"), Histogram)
+ )
+
+ def test_api_histogram_abstract(self):
+ """
+ Test that the API Histogram is an abstract class.
+ """
+
+ self.assertTrue(isabstract(Histogram))
+
+ def test_create_histogram_api(self):
+ """
+ Test that the API for creating a histogram accepts the name of the instrument.
+ Test that the API for creating a histogram accepts the unit of the instrument.
+ Test that the API for creating a histogram accepts the description of the instrument.
+ """
+
+ create_histogram_signature = signature(Meter.create_histogram)
+ self.assertIn("name", create_histogram_signature.parameters.keys())
+ self.assertIs(
+ create_histogram_signature.parameters["name"].default,
+ Signature.empty,
+ )
+
+ create_histogram_signature = signature(Meter.create_histogram)
+ self.assertIn("unit", create_histogram_signature.parameters.keys())
+ self.assertIs(
+ create_histogram_signature.parameters["unit"].default, ""
+ )
+
+ create_histogram_signature = signature(Meter.create_histogram)
+ self.assertIn(
+ "description", create_histogram_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_histogram_signature.parameters["description"].default, ""
+ )
+
+ def test_histogram_record_method(self):
+ """
+ Test that the histogram has a record method.
+ Test that the record method returns None.
+ Test that the record method accepts optional attributes.
+ Test that the record method accepts the amount to record.
+ """
+
+ self.assertTrue(hasattr(Histogram, "record"))
+
+ self.assertIsNone(NoOpHistogram("name").record(1))
+
+ record_signature = signature(Histogram.record)
+ self.assertIn("attributes", record_signature.parameters.keys())
+ self.assertIs(record_signature.parameters["attributes"].default, None)
+
+ self.assertIn("amount", record_signature.parameters.keys())
+ self.assertIs(
+ record_signature.parameters["amount"].default, Signature.empty
+ )
+
+
+class TestObservableGauge(TestCase):
+ def test_create_observable_gauge(self):
+ """
+ Test that the ObservableGauge can be created with create_observable_gauge.
+ """
+
+ def callback():
+ yield
+
+ self.assertTrue(
+ isinstance(
+ NoOpMeter("name").create_observable_gauge(
+ "name", [callback()]
+ ),
+ ObservableGauge,
+ )
+ )
+
+ def test_api_observable_gauge_abstract(self):
+ """
+ Test that the API ObservableGauge is an abstract class.
+ """
+
+ self.assertTrue(isabstract(ObservableGauge))
+
+ def test_create_observable_gauge_api(self):
+ """
+ Test that the API for creating an observable_gauge accepts the name of the instrument.
+ Test that the API for creating an observable_gauge accepts a sequence of callbacks.
+ Test that the API for creating an observable_gauge accepts the unit of the instrument.
+ Test that the API for creating an observable_gauge accepts the description of the instrument.
+ """
+
+ create_observable_gauge_signature = signature(
+ Meter.create_observable_gauge
+ )
+ self.assertIn(
+ "name", create_observable_gauge_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_gauge_signature.parameters["name"].default,
+ Signature.empty,
+ )
+ create_observable_gauge_signature = signature(
+ Meter.create_observable_gauge
+ )
+ self.assertIn(
+ "callbacks", create_observable_gauge_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_gauge_signature.parameters["callbacks"].default,
+ None,
+ )
+ create_observable_gauge_signature = signature(
+ Meter.create_observable_gauge
+ )
+ self.assertIn(
+ "unit", create_observable_gauge_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_gauge_signature.parameters["unit"].default, ""
+ )
+
+ create_observable_gauge_signature = signature(
+ Meter.create_observable_gauge
+ )
+ self.assertIn(
+ "description", create_observable_gauge_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_gauge_signature.parameters[
+ "description"
+ ].default,
+ "",
+ )
+
+ def test_observable_gauge_callback(self):
+ """
+ Test that the API for creating an asynchronous gauge accepts a sequence of callbacks.
+ Test that the callback function reports measurements.
+ Test that there is a way to pass state to the callback.
+ """
+
+ create_observable_gauge_signature = signature(
+ Meter.create_observable_gauge
+ )
+ self.assertIn(
+ "callbacks", create_observable_gauge_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_observable_gauge_signature.parameters["name"].default,
+ Signature.empty,
+ )
+
+
+class TestUpDownCounter(TestCase):
+ def test_create_up_down_counter(self):
+ """
+ Test that the UpDownCounter can be created with create_up_down_counter.
+ """
+
+ self.assertTrue(
+ isinstance(
+ NoOpMeter("name").create_up_down_counter("name"),
+ UpDownCounter,
+ )
+ )
+
+ def test_api_up_down_counter_abstract(self):
+ """
+ Test that the API UpDownCounter is an abstract class.
+ """
+
+ self.assertTrue(isabstract(UpDownCounter))
+
+ def test_create_up_down_counter_api(self):
+ """
+ Test that the API for creating an up_down_counter accepts the name of the instrument.
+ Test that the API for creating an up_down_counter accepts the unit of the instrument.
+ Test that the API for creating an up_down_counter accepts the description of the instrument.
+ """
+
+ create_up_down_counter_signature = signature(
+ Meter.create_up_down_counter
+ )
+ self.assertIn(
+ "name", create_up_down_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_up_down_counter_signature.parameters["name"].default,
+ Signature.empty,
+ )
+
+ create_up_down_counter_signature = signature(
+ Meter.create_up_down_counter
+ )
+ self.assertIn(
+ "unit", create_up_down_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_up_down_counter_signature.parameters["unit"].default, ""
+ )
+
+ create_up_down_counter_signature = signature(
+ Meter.create_up_down_counter
+ )
+ self.assertIn(
+ "description", create_up_down_counter_signature.parameters.keys()
+ )
+ self.assertIs(
+ create_up_down_counter_signature.parameters["description"].default,
+ "",
+ )
+
+ def test_up_down_counter_add_method(self):
+ """
+ Test that the up_down_counter has an add method.
+ Test that the add method returns None.
+ Test that the add method accepts optional attributes.
+ Test that the add method accepts the increment or decrement amount.
+ Test that the add method accepts positive and negative amounts.
+ """
+
+ self.assertTrue(hasattr(UpDownCounter, "add"))
+
+ self.assertIsNone(NoOpUpDownCounter("name").add(1))
+
+ add_signature = signature(UpDownCounter.add)
+ self.assertIn("attributes", add_signature.parameters.keys())
+ self.assertIs(add_signature.parameters["attributes"].default, None)
+
+ self.assertIn("amount", add_signature.parameters.keys())
+ self.assertIs(
+ add_signature.parameters["amount"].default, Signature.empty
+ )
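+
+    # Illustrative sketch only: unlike a Counter, an UpDownCounter may be
+    # decremented, e.g. to track the size of a queue. The NoOp instrument
+    # accepts both signs and returns None.
+    def _example_add_signs(self):
+        queue_size = NoOpUpDownCounter("queue_size")
+        queue_size.add(5, attributes={"queue": "jobs"})  # items enqueued
+        queue_size.add(-2, attributes={"queue": "jobs"})  # items dequeued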
+
+
+class TestObservableUpDownCounter(TestCase):
+ def test_create_observable_up_down_counter(self):
+ """
+ Test that the ObservableUpDownCounter can be created with create_observable_up_down_counter.
+ """
+
+ def callback():
+ yield
+
+ self.assertTrue(
+ isinstance(
+ NoOpMeter("name").create_observable_up_down_counter(
+ "name", [callback()]
+ ),
+ ObservableUpDownCounter,
+ )
+ )
+
+ def test_api_observable_up_down_counter_abstract(self):
+ """
+ Test that the API ObservableUpDownCounter is an abstract class.
+ """
+
+ self.assertTrue(isabstract(ObservableUpDownCounter))
+
+ def test_create_observable_up_down_counter_api(self):
+ """
+        Test that the API for creating an observable_up_down_counter accepts the name of the instrument.
+        Test that the API for creating an observable_up_down_counter accepts a sequence of callbacks.
+        Test that the API for creating an observable_up_down_counter accepts the unit of the instrument.
+        Test that the API for creating an observable_up_down_counter accepts the description of the instrument.
+ """
+
+ create_observable_up_down_counter_signature = signature(
+ Meter.create_observable_up_down_counter
+ )
+ self.assertIn(
+ "name",
+ create_observable_up_down_counter_signature.parameters.keys(),
+ )
+ self.assertIs(
+ create_observable_up_down_counter_signature.parameters[
+ "name"
+ ].default,
+ Signature.empty,
+ )
+ create_observable_up_down_counter_signature = signature(
+ Meter.create_observable_up_down_counter
+ )
+ self.assertIn(
+ "callbacks",
+ create_observable_up_down_counter_signature.parameters.keys(),
+ )
+ self.assertIs(
+ create_observable_up_down_counter_signature.parameters[
+ "callbacks"
+ ].default,
+ None,
+ )
+ create_observable_up_down_counter_signature = signature(
+ Meter.create_observable_up_down_counter
+ )
+ self.assertIn(
+ "unit",
+ create_observable_up_down_counter_signature.parameters.keys(),
+ )
+ self.assertIs(
+ create_observable_up_down_counter_signature.parameters[
+ "unit"
+ ].default,
+ "",
+ )
+
+ create_observable_up_down_counter_signature = signature(
+ Meter.create_observable_up_down_counter
+ )
+ self.assertIn(
+ "description",
+ create_observable_up_down_counter_signature.parameters.keys(),
+ )
+ self.assertIs(
+ create_observable_up_down_counter_signature.parameters[
+ "description"
+ ].default,
+ "",
+ )
+
+ def test_observable_up_down_counter_callback(self):
+ """
+        Test that the API for creating an asynchronous up_down_counter accepts a sequence of callbacks.
+ Test that the callback function reports measurements.
+ Test that there is a way to pass state to the callback.
+ Test that the instrument accepts positive and negative values.
+ """
+
+ create_observable_up_down_counter_signature = signature(
+ Meter.create_observable_up_down_counter
+ )
+ self.assertIn(
+ "callbacks",
+ create_observable_up_down_counter_signature.parameters.keys(),
+ )
+        self.assertIs(
+            create_observable_up_down_counter_signature.parameters[
+                "callbacks"
+            ].default,
+            None,
+        )
+
+ def test_name_check(self):
+ instrument = ChildInstrument("name")
+
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "a" * 255, "unit", "description"
+ )["name"],
+ "a" * 255,
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "a.", "unit", "description"
+ )["name"],
+ "a.",
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "a-", "unit", "description"
+ )["name"],
+ "a-",
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "a_", "unit", "description"
+ )["name"],
+ "a_",
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "a/", "unit", "description"
+ )["name"],
+ "a/",
+ )
+
+ # the old max length
+ self.assertIsNotNone(
+ instrument._check_name_unit_description(
+ "a" * 64, "unit", "description"
+ )["name"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "a" * 256, "unit", "description"
+ )["name"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "Ñ", "unit", "description"
+ )["name"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "_a", "unit", "description"
+ )["name"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "1a", "unit", "description"
+ )["name"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description("", "unit", "description")[
+ "name"
+ ]
+ )
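+
+    # A hedged reconstruction of the name rule the cases above imply; this
+    # regex is inferred from the assertions, not quoted from the
+    # implementation: a leading ASCII letter, then up to 254 characters
+    # drawn from letters, digits, '_', '.', '-' and '/'.
+    def _example_name_rule(self):
+        import re
+
+        name_re = re.compile(r"[a-zA-Z][-_./a-zA-Z0-9]{0,254}")
+        self.assertIsNotNone(name_re.fullmatch("a" * 255))
+        self.assertIsNotNone(name_re.fullmatch("a/"))
+        self.assertIsNone(name_re.fullmatch("1a"))
+        self.assertIsNone(name_re.fullmatch("_a"))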
+
+ def test_unit_check(self):
+
+ instrument = ChildInstrument("name")
+
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "name", "a" * 63, "description"
+ )["unit"],
+ "a" * 63,
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "name", "{a}", "description"
+ )["unit"],
+ "{a}",
+ )
+
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "name", "a" * 64, "description"
+ )["unit"]
+ )
+ self.assertIsNone(
+ instrument._check_name_unit_description(
+ "name", "Ñ", "description"
+ )["unit"]
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "name", None, "description"
+ )["unit"],
+ "",
+ )
+
+ def test_description_check(self):
+
+ instrument = ChildInstrument("name")
+
+ self.assertEqual(
+ instrument._check_name_unit_description(
+ "name", "unit", "description"
+ )["description"],
+ "description",
+ )
+ self.assertEqual(
+ instrument._check_name_unit_description("name", "unit", None)[
+ "description"
+ ],
+ "",
+ )
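+
+    # Sketch of the normalization the two tests above pin down (inferred
+    # from the assertions, not quoted from the implementation): unit and
+    # description both fall back to "" when None is passed.
+    def _example_none_defaults(self):
+        result = ChildInstrument("name")._check_name_unit_description(
+            "name", None, None
+        )
+        self.assertEqual(result["unit"], "")
+        self.assertEqual(result["description"], "")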
diff --git a/opentelemetry-api/tests/metrics/test_meter.py b/opentelemetry-api/tests/metrics/test_meter.py
new file mode 100644
index 0000000000..44e81bdc8c
--- /dev/null
+++ b/opentelemetry-api/tests/metrics/test_meter.py
@@ -0,0 +1,143 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+from logging import WARNING
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.metrics import Meter, NoOpMeter
+
+# FIXME Test that the meter methods can be called concurrently safely.
+
+
+class ChildMeter(Meter):
+ def create_counter(self, name, unit="", description=""):
+ super().create_counter(name, unit=unit, description=description)
+
+ def create_up_down_counter(self, name, unit="", description=""):
+ super().create_up_down_counter(
+ name, unit=unit, description=description
+ )
+
+ def create_observable_counter(
+ self, name, callback, unit="", description=""
+ ):
+ super().create_observable_counter(
+ name, callback, unit=unit, description=description
+ )
+
+ def create_histogram(self, name, unit="", description=""):
+ super().create_histogram(name, unit=unit, description=description)
+
+ def create_observable_gauge(self, name, callback, unit="", description=""):
+ super().create_observable_gauge(
+ name, callback, unit=unit, description=description
+ )
+
+ def create_observable_up_down_counter(
+ self, name, callback, unit="", description=""
+ ):
+ super().create_observable_up_down_counter(
+ name, callback, unit=unit, description=description
+ )
+
+
+class TestMeter(TestCase):
+ def test_repeated_instrument_names(self):
+
+ try:
+ test_meter = NoOpMeter("name")
+
+ test_meter.create_counter("counter")
+ test_meter.create_up_down_counter("up_down_counter")
+ test_meter.create_observable_counter("observable_counter", Mock())
+ test_meter.create_histogram("histogram")
+ test_meter.create_observable_gauge("observable_gauge", Mock())
+ test_meter.create_observable_up_down_counter(
+ "observable_up_down_counter", Mock()
+ )
+ except Exception as error:
+ self.fail(f"Unexpected exception raised {error}")
+
+ for instrument_name in [
+ "counter",
+ "up_down_counter",
+ "histogram",
+ ]:
+ with self.assertLogs(level=WARNING):
+ getattr(test_meter, f"create_{instrument_name}")(
+ instrument_name
+ )
+
+ for instrument_name in [
+ "observable_counter",
+ "observable_gauge",
+ "observable_up_down_counter",
+ ]:
+ with self.assertLogs(level=WARNING):
+ getattr(test_meter, f"create_{instrument_name}")(
+ instrument_name, Mock()
+ )
+
+ def test_create_counter(self):
+ """
+ Test that the meter provides a function to create a new Counter
+ """
+
+ self.assertTrue(hasattr(Meter, "create_counter"))
+ self.assertTrue(Meter.create_counter.__isabstractmethod__)
+
+ def test_create_up_down_counter(self):
+ """
+ Test that the meter provides a function to create a new UpDownCounter
+ """
+
+ self.assertTrue(hasattr(Meter, "create_up_down_counter"))
+ self.assertTrue(Meter.create_up_down_counter.__isabstractmethod__)
+
+ def test_create_observable_counter(self):
+ """
+ Test that the meter provides a function to create a new ObservableCounter
+ """
+
+ self.assertTrue(hasattr(Meter, "create_observable_counter"))
+ self.assertTrue(Meter.create_observable_counter.__isabstractmethod__)
+
+ def test_create_histogram(self):
+ """
+ Test that the meter provides a function to create a new Histogram
+ """
+
+ self.assertTrue(hasattr(Meter, "create_histogram"))
+ self.assertTrue(Meter.create_histogram.__isabstractmethod__)
+
+ def test_create_observable_gauge(self):
+ """
+ Test that the meter provides a function to create a new ObservableGauge
+ """
+
+ self.assertTrue(hasattr(Meter, "create_observable_gauge"))
+ self.assertTrue(Meter.create_observable_gauge.__isabstractmethod__)
+
+ def test_create_observable_up_down_counter(self):
+ """
+ Test that the meter provides a function to create a new
+ ObservableUpDownCounter
+ """
+
+ self.assertTrue(hasattr(Meter, "create_observable_up_down_counter"))
+ self.assertTrue(
+ Meter.create_observable_up_down_counter.__isabstractmethod__
+ )
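+
+
+# For reference, a minimal standalone sketch of the mechanism the assertions
+# above rely on: @abstractmethod sets __isabstractmethod__ on the function
+# object, which is what keeps an ABC subclass uninstantiable until the
+# method is overridden.
+if __name__ == "__main__":
+    from abc import ABC, abstractmethod
+
+    class _SketchMeter(ABC):
+        @abstractmethod
+        def create_thing(self):
+            """An abstract factory method, analogous to create_counter."""
+
+    assert _SketchMeter.create_thing.__isabstractmethod__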
diff --git a/opentelemetry-api/tests/metrics/test_meter_provider.py b/opentelemetry-api/tests/metrics/test_meter_provider.py
new file mode 100644
index 0000000000..2fa9fe1e73
--- /dev/null
+++ b/opentelemetry-api/tests/metrics/test_meter_provider.py
@@ -0,0 +1,318 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from pytest import fixture
+
+import opentelemetry.metrics._internal as metrics_internal
+from opentelemetry import metrics
+from opentelemetry.environment_variables import OTEL_PYTHON_METER_PROVIDER
+from opentelemetry.metrics import (
+ NoOpMeter,
+ NoOpMeterProvider,
+ get_meter_provider,
+ set_meter_provider,
+)
+from opentelemetry.metrics._internal import _ProxyMeter, _ProxyMeterProvider
+from opentelemetry.metrics._internal.instrument import (
+ _ProxyCounter,
+ _ProxyHistogram,
+ _ProxyObservableCounter,
+ _ProxyObservableGauge,
+ _ProxyObservableUpDownCounter,
+ _ProxyUpDownCounter,
+)
+from opentelemetry.test.globals_test import (
+ MetricsGlobalsTest,
+ reset_metrics_globals,
+)
+
+# FIXME Test that the instrument methods can be called concurrently safely.
+
+
+@fixture
+def reset_meter_provider():
+ print(f"calling reset_metrics_globals() {reset_metrics_globals}")
+ reset_metrics_globals()
+ yield
+ print("teardown - calling reset_metrics_globals()")
+ reset_metrics_globals()
+
+
+def test_set_meter_provider(reset_meter_provider):
+ """
+ Test that the API provides a way to set a global default MeterProvider
+ """
+
+ mock = Mock()
+
+ assert metrics_internal._METER_PROVIDER is None
+
+ set_meter_provider(mock)
+
+ assert metrics_internal._METER_PROVIDER is mock
+ assert get_meter_provider() is mock
+
+
+def test_set_meter_provider_calls_proxy_provider(reset_meter_provider):
+ with patch(
+ "opentelemetry.metrics._internal._PROXY_METER_PROVIDER"
+ ) as mock_proxy_mp:
+ assert metrics_internal._PROXY_METER_PROVIDER is mock_proxy_mp
+ mock_real_mp = Mock()
+ set_meter_provider(mock_real_mp)
+ mock_proxy_mp.on_set_meter_provider.assert_called_once_with(
+ mock_real_mp
+ )
+
+
+def test_get_meter_provider(reset_meter_provider):
+ """
+ Test that the API provides a way to get a global default MeterProvider
+ """
+
+ assert metrics_internal._METER_PROVIDER is None
+
+ assert isinstance(get_meter_provider(), _ProxyMeterProvider)
+
+ metrics._METER_PROVIDER = None
+
+ with patch.dict(
+ "os.environ", {OTEL_PYTHON_METER_PROVIDER: "test_meter_provider"}
+ ):
+
+ with patch("opentelemetry.metrics._internal._load_provider", Mock()):
+ with patch(
+ "opentelemetry.metrics._internal.cast",
+ Mock(**{"return_value": "test_meter_provider"}),
+ ):
+ assert get_meter_provider() == "test_meter_provider"
+
+
+class TestGetMeter(TestCase):
+ def test_get_meter_parameters(self):
+ """
+ Test that get_meter accepts name, version and schema_url
+ """
+ try:
+ NoOpMeterProvider().get_meter(
+ "name", version="version", schema_url="schema_url"
+ )
+ except Exception as error:
+ self.fail(f"Unexpected exception raised: {error}")
+
+ def test_invalid_name(self):
+ """
+ Test that when an invalid name is specified a working meter
+ implementation is returned as a fallback.
+
+ Test that the fallback meter name property keeps its original invalid
+ value.
+
+ Test that a message is logged reporting the specified value for the
+ fallback meter is invalid.
+ """
+ meter = NoOpMeterProvider().get_meter("")
+
+ self.assertTrue(isinstance(meter, NoOpMeter))
+
+ self.assertEqual(meter.name, "")
+
+ meter = NoOpMeterProvider().get_meter(None)
+
+ self.assertTrue(isinstance(meter, NoOpMeter))
+
+ self.assertEqual(meter.name, None)
+
+
+class TestProxy(MetricsGlobalsTest, TestCase):
+ def test_global_proxy_meter_provider(self):
+ # Global get_meter_provider() should initially be a _ProxyMeterProvider
+ # singleton
+
+ proxy_meter_provider: _ProxyMeterProvider = get_meter_provider()
+ self.assertIsInstance(proxy_meter_provider, _ProxyMeterProvider)
+ self.assertIs(get_meter_provider(), proxy_meter_provider)
+
+ def test_proxy_provider(self):
+ proxy_meter_provider = _ProxyMeterProvider()
+
+ # Should return a proxy meter when no real MeterProvider is set
+ name = "foo"
+ version = "1.2"
+ schema_url = "schema_url"
+ proxy_meter: _ProxyMeter = proxy_meter_provider.get_meter(
+ name, version=version, schema_url=schema_url
+ )
+ self.assertIsInstance(proxy_meter, _ProxyMeter)
+
+        # After setting a real meter provider on the proxy, it should notify
+        # its _ProxyMeters, which should create their own real Meters
+ mock_real_mp = Mock()
+ proxy_meter_provider.on_set_meter_provider(mock_real_mp)
+ mock_real_mp.get_meter.assert_called_once_with(
+ name, version, schema_url
+ )
+
+        # After a real meter provider has been set on the proxy, it should
+        # return new meters directly from the real provider
+ another_name = "bar"
+ meter2 = proxy_meter_provider.get_meter(another_name)
+ self.assertIsInstance(meter2, Mock)
+ mock_real_mp.get_meter.assert_called_with(another_name, None, None)
+
+ # pylint: disable=too-many-locals
+ def test_proxy_meter(self):
+ meter_name = "foo"
+ proxy_meter: _ProxyMeter = _ProxyMeterProvider().get_meter(meter_name)
+ self.assertIsInstance(proxy_meter, _ProxyMeter)
+
+ # Should be able to create proxy instruments
+ name = "foo"
+ unit = "s"
+ description = "Foobar"
+ callback = Mock()
+ proxy_counter = proxy_meter.create_counter(
+ name, unit=unit, description=description
+ )
+ proxy_updowncounter = proxy_meter.create_up_down_counter(
+ name, unit=unit, description=description
+ )
+ proxy_histogram = proxy_meter.create_histogram(
+ name, unit=unit, description=description
+ )
+ proxy_observable_counter = proxy_meter.create_observable_counter(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+ proxy_observable_updowncounter = (
+ proxy_meter.create_observable_up_down_counter(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+ )
+        proxy_observable_gauge = proxy_meter.create_observable_gauge(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+ self.assertIsInstance(proxy_counter, _ProxyCounter)
+ self.assertIsInstance(proxy_updowncounter, _ProxyUpDownCounter)
+ self.assertIsInstance(proxy_histogram, _ProxyHistogram)
+ self.assertIsInstance(
+ proxy_observable_counter, _ProxyObservableCounter
+ )
+ self.assertIsInstance(
+ proxy_observable_updowncounter, _ProxyObservableUpDownCounter
+ )
+        self.assertIsInstance(proxy_observable_gauge, _ProxyObservableGauge)
+
+ # Synchronous proxy instruments should be usable
+ amount = 12
+ attributes = {"foo": "bar"}
+ proxy_counter.add(amount, attributes=attributes)
+ proxy_updowncounter.add(amount, attributes=attributes)
+ proxy_histogram.record(amount, attributes=attributes)
+
+        # Calling on_set_meter_provider() on the proxy meter should cascade
+        # down to its _ProxyInstruments, which create their own real
+        # instruments from the real Meter to back their calls
+ real_meter_provider = Mock()
+ proxy_meter.on_set_meter_provider(real_meter_provider)
+ real_meter_provider.get_meter.assert_called_once_with(
+ meter_name, None, None
+ )
+
+ real_meter: Mock = real_meter_provider.get_meter()
+ real_meter.create_counter.assert_called_once_with(
+ name, unit, description
+ )
+ real_meter.create_up_down_counter.assert_called_once_with(
+ name, unit, description
+ )
+ real_meter.create_histogram.assert_called_once_with(
+ name, unit, description
+ )
+ real_meter.create_observable_counter.assert_called_once_with(
+ name, [callback], unit, description
+ )
+ real_meter.create_observable_up_down_counter.assert_called_once_with(
+ name, [callback], unit, description
+ )
+ real_meter.create_observable_gauge.assert_called_once_with(
+ name, [callback], unit, description
+ )
+
+ # The synchronous instrument measurement methods should call through to
+ # the real instruments
+ real_counter: Mock = real_meter.create_counter()
+ real_updowncounter: Mock = real_meter.create_up_down_counter()
+ real_histogram: Mock = real_meter.create_histogram()
+ real_counter.assert_not_called()
+ real_updowncounter.assert_not_called()
+ real_histogram.assert_not_called()
+
+ proxy_counter.add(amount, attributes=attributes)
+ real_counter.add.assert_called_once_with(amount, attributes)
+ proxy_updowncounter.add(amount, attributes=attributes)
+ real_updowncounter.add.assert_called_once_with(amount, attributes)
+ proxy_histogram.record(amount, attributes=attributes)
+ real_histogram.record.assert_called_once_with(amount, attributes)
+
+ def test_proxy_meter_with_real_meter(self) -> None:
+ # Creating new instruments on the _ProxyMeter with a real meter set
+ # should create real instruments instead of proxies
+ meter_name = "foo"
+ proxy_meter: _ProxyMeter = _ProxyMeterProvider().get_meter(meter_name)
+ self.assertIsInstance(proxy_meter, _ProxyMeter)
+
+ real_meter_provider = Mock()
+ proxy_meter.on_set_meter_provider(real_meter_provider)
+
+ name = "foo"
+ unit = "s"
+ description = "Foobar"
+ callback = Mock()
+ counter = proxy_meter.create_counter(
+ name, unit=unit, description=description
+ )
+ updowncounter = proxy_meter.create_up_down_counter(
+ name, unit=unit, description=description
+ )
+ histogram = proxy_meter.create_histogram(
+ name, unit=unit, description=description
+ )
+ observable_counter = proxy_meter.create_observable_counter(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+ observable_updowncounter = (
+ proxy_meter.create_observable_up_down_counter(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+ )
+ observable_gauge = proxy_meter.create_observable_gauge(
+ name, callbacks=[callback], unit=unit, description=description
+ )
+
+ real_meter: Mock = real_meter_provider.get_meter()
+ self.assertIs(counter, real_meter.create_counter())
+ self.assertIs(updowncounter, real_meter.create_up_down_counter())
+ self.assertIs(histogram, real_meter.create_histogram())
+ self.assertIs(
+ observable_counter, real_meter.create_observable_counter()
+ )
+ self.assertIs(
+ observable_updowncounter,
+ real_meter.create_observable_up_down_counter(),
+ )
+ self.assertIs(observable_gauge, real_meter.create_observable_gauge())
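+
+
+# A simplified, hypothetical sketch of the proxy pattern these tests
+# exercise (the real _ProxyMeter lives in opentelemetry.metrics._internal
+# and also proxies the instruments themselves): the proxy records calls made
+# before a real provider exists and replays them once one is installed.
+if __name__ == "__main__":
+
+    class _SketchProxyMeter:
+        def __init__(self, name):
+            self._name = name
+            self._real_meter = None
+            self._pending = []  # instrument names created before upgrade
+
+        def create_counter(self, name):
+            if self._real_meter is not None:
+                return self._real_meter.create_counter(name)
+            self._pending.append(name)
+            return None  # a real proxy returns a proxy instrument here
+
+        def on_set_meter_provider(self, provider):
+            self._real_meter = provider.get_meter(self._name)
+            for name in self._pending:
+                self._real_meter.create_counter(name)
+
+    sketch = _SketchProxyMeter("demo")
+    sketch.create_counter("requests")
+    sketch.on_set_meter_provider(Mock())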
diff --git a/opentelemetry-api/tests/metrics/test_observation.py b/opentelemetry-api/tests/metrics/test_observation.py
new file mode 100644
index 0000000000..0881f043b7
--- /dev/null
+++ b/opentelemetry-api/tests/metrics/test_observation.py
@@ -0,0 +1,46 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from opentelemetry.metrics import Observation
+
+
+class TestObservation(TestCase):
+ def test_measurement_init(self):
+ try:
+ # int
+ Observation(321, {"hello": "world"})
+
+ # float
+ Observation(321.321, {"hello": "world"})
+ except Exception: # pylint: disable=broad-except
+ self.fail(
+ "Unexpected exception raised when instantiating Observation"
+ )
+
+ def test_measurement_equality(self):
+ self.assertEqual(
+ Observation(321, {"hello": "world"}),
+ Observation(321, {"hello": "world"}),
+ )
+
+ self.assertNotEqual(
+ Observation(321, {"hello": "world"}),
+ Observation(321.321, {"hello": "world"}),
+ )
+ self.assertNotEqual(
+ Observation(321, {"baz": "world"}),
+ Observation(321, {"hello": "world"}),
+ )
diff --git a/opentelemetry-api/tests/mypysmoke.py b/opentelemetry-api/tests/mypysmoke.py
new file mode 100644
index 0000000000..ede4af74e0
--- /dev/null
+++ b/opentelemetry-api/tests/mypysmoke.py
@@ -0,0 +1,19 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import opentelemetry.trace
+
+
+def dummy_check_mypy_returntype() -> opentelemetry.trace.TracerProvider:
+ return opentelemetry.trace.get_tracer_provider()
diff --git a/opentelemetry-api/tests/propagators/test_composite.py b/opentelemetry-api/tests/propagators/test_composite.py
new file mode 100644
index 0000000000..14d1894153
--- /dev/null
+++ b/opentelemetry-api/tests/propagators/test_composite.py
@@ -0,0 +1,140 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import unittest
+from unittest.mock import Mock
+
+from opentelemetry.propagators.composite import CompositePropagator
+
+
+def get_as_list(dict_object, key):
+ value = dict_object.get(key)
+ return [value] if value is not None else []
+
+
+def mock_inject(name, value="data"):
+ def wrapped(carrier=None, context=None, setter=None):
+ carrier[name] = value
+ setter.set({}, f"inject_field_{name}_0", None)
+ setter.set({}, f"inject_field_{name}_1", None)
+
+ return wrapped
+
+
+def mock_extract(name, value="context"):
+ def wrapped(carrier=None, context=None, getter=None):
+ new_context = context.copy()
+ new_context[name] = value
+ return new_context
+
+ return wrapped
+
+
+def mock_fields(name):
+ return {f"inject_field_{name}_0", f"inject_field_{name}_1"}
+
+
+class TestCompositePropagator(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.mock_propagator_0 = Mock(
+ inject=mock_inject("mock-0"),
+ extract=mock_extract("mock-0"),
+ fields=mock_fields("mock-0"),
+ )
+ cls.mock_propagator_1 = Mock(
+ inject=mock_inject("mock-1"),
+ extract=mock_extract("mock-1"),
+ fields=mock_fields("mock-1"),
+ )
+ cls.mock_propagator_2 = Mock(
+ inject=mock_inject("mock-0", value="data2"),
+ extract=mock_extract("mock-0", value="context2"),
+ fields=mock_fields("mock-0"),
+ )
+
+ def test_no_propagators(self):
+ propagator = CompositePropagator([])
+ new_carrier = {}
+ propagator.inject(new_carrier)
+ self.assertEqual(new_carrier, {})
+
+ context = propagator.extract(
+ carrier=new_carrier, context={}, getter=get_as_list
+ )
+ self.assertEqual(context, {})
+
+ def test_single_propagator(self):
+ propagator = CompositePropagator([self.mock_propagator_0])
+
+ new_carrier = {}
+ propagator.inject(new_carrier)
+ self.assertEqual(new_carrier, {"mock-0": "data"})
+
+ context = propagator.extract(
+ carrier=new_carrier, context={}, getter=get_as_list
+ )
+ self.assertEqual(context, {"mock-0": "context"})
+
+ def test_multiple_propagators(self):
+ propagator = CompositePropagator(
+ [self.mock_propagator_0, self.mock_propagator_1]
+ )
+
+ new_carrier = {}
+ propagator.inject(new_carrier)
+ self.assertEqual(new_carrier, {"mock-0": "data", "mock-1": "data"})
+
+ context = propagator.extract(
+ carrier=new_carrier, context={}, getter=get_as_list
+ )
+ self.assertEqual(context, {"mock-0": "context", "mock-1": "context"})
+
+ def test_multiple_propagators_same_key(self):
+        # test that when multiple propagators extract/inject the same
+        # key, the value from the last propagator wins
+ propagator = CompositePropagator(
+ [self.mock_propagator_0, self.mock_propagator_2]
+ )
+
+ new_carrier = {}
+ propagator.inject(new_carrier)
+ self.assertEqual(new_carrier, {"mock-0": "data2"})
+
+ context = propagator.extract(
+ carrier=new_carrier, context={}, getter=get_as_list
+ )
+ self.assertEqual(context, {"mock-0": "context2"})
+
+ def test_fields(self):
+ propagator = CompositePropagator(
+ [
+ self.mock_propagator_0,
+ self.mock_propagator_1,
+ self.mock_propagator_2,
+ ]
+ )
+
+ mock_setter = Mock()
+
+ propagator.inject({}, setter=mock_setter)
+
+ inject_fields = set()
+
+ for mock_call in mock_setter.mock_calls:
+ inject_fields.add(mock_call[1][1])
+
+ self.assertEqual(inject_fields, propagator.fields)
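+
+    # A quick standalone check of the contract test_fields relies on
+    # (assumed from the assertion above, not quoted from the
+    # implementation): the composite's `fields` is the union of its
+    # children's `fields` sets.
+    def _example_fields_union(self):
+        composite = CompositePropagator(
+            [Mock(fields={"a", "b"}), Mock(fields={"b", "c"})]
+        )
+        self.assertEqual(composite.fields, {"a", "b", "c"})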
diff --git a/opentelemetry-api/tests/propagators/test_global_httptextformat.py b/opentelemetry-api/tests/propagators/test_global_httptextformat.py
new file mode 100644
index 0000000000..466ce6895f
--- /dev/null
+++ b/opentelemetry-api/tests/propagators/test_global_httptextformat.py
@@ -0,0 +1,62 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import unittest
+
+from opentelemetry import baggage, trace
+from opentelemetry.propagate import extract, inject
+from opentelemetry.trace import get_current_span, set_span_in_context
+from opentelemetry.trace.span import format_span_id, format_trace_id
+
+
+class TestDefaultGlobalPropagator(unittest.TestCase):
+ """Test ensures the default global composite propagator works as intended"""
+
+ TRACE_ID = int("12345678901234567890123456789012", 16) # type:int
+ SPAN_ID = int("1234567890123456", 16) # type:int
+
+ def test_propagation(self):
+ traceparent_value = "00-{trace_id}-{span_id}-00".format(
+ trace_id=format_trace_id(self.TRACE_ID),
+ span_id=format_span_id(self.SPAN_ID),
+ )
+ tracestate_value = "foo=1,bar=2,baz=3"
+ headers = {
+ "baggage": ["key1=val1,key2=val2"],
+ "traceparent": [traceparent_value],
+ "tracestate": [tracestate_value],
+ }
+ ctx = extract(headers)
+ baggage_entries = baggage.get_all(context=ctx)
+ expected = {"key1": "val1", "key2": "val2"}
+ self.assertEqual(baggage_entries, expected)
+ span_context = get_current_span(context=ctx).get_span_context()
+
+ self.assertEqual(span_context.trace_id, self.TRACE_ID)
+ self.assertEqual(span_context.span_id, self.SPAN_ID)
+
+ span = trace.NonRecordingSpan(span_context)
+ ctx = baggage.set_baggage("key3", "val3")
+ ctx = baggage.set_baggage("key4", "val4", context=ctx)
+ ctx = set_span_in_context(span, context=ctx)
+ output = {}
+ inject(output, context=ctx)
+ self.assertEqual(traceparent_value, output["traceparent"])
+ self.assertIn("key3=val3", output["baggage"])
+ self.assertIn("key4=val4", output["baggage"])
+ self.assertIn("foo=1", output["tracestate"])
+ self.assertIn("bar=2", output["tracestate"])
+ self.assertIn("baz=3", output["tracestate"])
diff --git a/opentelemetry-api/tests/propagators/test_propagators.py b/opentelemetry-api/tests/propagators/test_propagators.py
new file mode 100644
index 0000000000..29065b8cb3
--- /dev/null
+++ b/opentelemetry-api/tests/propagators/test_propagators.py
@@ -0,0 +1,262 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+from importlib import reload
+from os import environ
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from opentelemetry import trace
+from opentelemetry.baggage.propagation import W3CBaggagePropagator
+from opentelemetry.context.context import Context
+from opentelemetry.environment_variables import OTEL_PROPAGATORS
+from opentelemetry.trace.propagation.tracecontext import (
+ TraceContextTextMapPropagator,
+)
+
+
+class TestPropagators(TestCase):
+ @patch("opentelemetry.propagators.composite.CompositePropagator")
+ def test_default_composite_propagators(self, mock_compositehttppropagator):
+ def test_propagators(propagators):
+
+ propagators = {propagator.__class__ for propagator in propagators}
+
+ self.assertEqual(len(propagators), 2)
+ self.assertEqual(
+ propagators,
+ {TraceContextTextMapPropagator, W3CBaggagePropagator},
+ )
+
+ mock_compositehttppropagator.configure_mock(
+ **{"side_effect": test_propagators}
+ )
+
+ # pylint: disable=import-outside-toplevel
+ import opentelemetry.propagate
+
+ reload(opentelemetry.propagate)
+
+ @patch.dict(environ, {OTEL_PROPAGATORS: "a, b, c "})
+ @patch("opentelemetry.propagators.composite.CompositePropagator")
+ @patch("opentelemetry.util._importlib_metadata.entry_points")
+ def test_non_default_propagators(
+ self, mock_entry_points, mock_compositehttppropagator
+ ):
+
+ mock_entry_points.configure_mock(
+ **{
+ "side_effect": [
+ [
+ Mock(
+ **{
+ "load.return_value": Mock(
+ **{"return_value": "a"}
+ )
+ }
+ ),
+ ],
+ [
+ Mock(
+ **{
+ "load.return_value": Mock(
+ **{"return_value": "b"}
+ )
+ }
+ )
+ ],
+ [
+ Mock(
+ **{
+ "load.return_value": Mock(
+ **{"return_value": "c"}
+ )
+ }
+ )
+ ],
+ ]
+ }
+ )
+
+ def test_propagators(propagators):
+ self.assertEqual(propagators, ["a", "b", "c"])
+
+ mock_compositehttppropagator.configure_mock(
+ **{"side_effect": test_propagators}
+ )
+
+ # pylint: disable=import-outside-toplevel
+ import opentelemetry.propagate
+
+ reload(opentelemetry.propagate)
+
+ @patch.dict(
+ environ, {OTEL_PROPAGATORS: "tracecontext , unknown , baggage"}
+ )
+ def test_composite_propagators_error(self):
+
+ with self.assertRaises(ValueError) as cm:
+ # pylint: disable=import-outside-toplevel
+ import opentelemetry.propagate
+
+ reload(opentelemetry.propagate)
+
+ self.assertEqual(
+ str(cm.exception),
+ "Propagator unknown not found. It is either misspelled or not installed.",
+ )
+
+
+class TestTraceContextTextMapPropagator(TestCase):
+ def setUp(self):
+ self.propagator = TraceContextTextMapPropagator()
+
+ def traceparent_helper(
+ self,
+ carrier,
+ ):
+ # We purposefully start with an empty context so we can test later if anything is added to it.
+ initial_context = Context()
+
+ context = self.propagator.extract(carrier, context=initial_context)
+ self.assertIsNotNone(context)
+ self.assertIsInstance(context, Context)
+
+ return context
+
+ def traceparent_helper_generator(
+ self,
+ version=0x00,
+ trace_id=0x00000000000000000000000000000001,
+ span_id=0x0000000000000001,
+ trace_flags=0x00,
+ suffix="",
+ ):
+ traceparent = f"{version:02x}-{trace_id:032x}-{span_id:016x}-{trace_flags:02x}{suffix}"
+ carrier = {"traceparent": traceparent}
+ return self.traceparent_helper(carrier)
+
+ def valid_traceparent_helper(
+ self,
+ version=0x00,
+ trace_id=0x00000000000000000000000000000001,
+ span_id=0x0000000000000001,
+ trace_flags=0x00,
+ suffix="",
+ assert_context_msg="A valid traceparent was provided, so the context should be non-empty.",
+ ):
+ context = self.traceparent_helper_generator(
+ version=version,
+ trace_id=trace_id,
+ span_id=span_id,
+ trace_flags=trace_flags,
+ suffix=suffix,
+ )
+
+ self.assertNotEqual(
+ context,
+ Context(),
+ assert_context_msg,
+ )
+
+ span = trace.get_current_span(context)
+ self.assertIsNotNone(span)
+ self.assertIsInstance(span, trace.span.Span)
+
+ span_context = span.get_span_context()
+ self.assertIsNotNone(span_context)
+ self.assertIsInstance(span_context, trace.span.SpanContext)
+
+ # Note: No version in SpanContext, it is only used locally in TraceContextTextMapPropagator
+ self.assertEqual(span_context.trace_id, trace_id)
+ self.assertEqual(span_context.span_id, span_id)
+ self.assertEqual(span_context.trace_flags, trace_flags)
+
+ self.assertIsInstance(span_context.trace_state, trace.TraceState)
+ self.assertCountEqual(span_context.trace_state, [])
+ self.assertEqual(span_context.is_remote, True)
+
+ return context, span, span_context
+
+ def invalid_traceparent_helper(
+ self,
+ version=0x00,
+ trace_id=0x00000000000000000000000000000001,
+ span_id=0x0000000000000001,
+ trace_flags=0x00,
+ suffix="",
+ assert_context_msg="An invalid traceparent was provided, so the context should still be empty.",
+ ):
+ context = self.traceparent_helper_generator(
+ version=version,
+ trace_id=trace_id,
+ span_id=span_id,
+ trace_flags=trace_flags,
+ suffix=suffix,
+ )
+
+ self.assertEqual(
+ context,
+ Context(),
+ assert_context_msg,
+ )
+
+ return context
+
+ def test_extract_nothing(self):
+ context = self.traceparent_helper(carrier={})
+ self.assertEqual(
+ context,
+ {},
+ "We didn't provide a valid traceparent, so we should still have an empty Context.",
+ )
+
+ def test_extract_simple_traceparent(self):
+ self.valid_traceparent_helper()
+
+ # https://www.w3.org/TR/trace-context/#version
+ def test_extract_version_forbidden_ff(self):
+ self.invalid_traceparent_helper(
+ version=0xFF,
+ assert_context_msg="We provided ann invalid traceparent with a forbidden version=0xff, so the context should still be empty.",
+ )
+
+ # https://www.w3.org/TR/trace-context/#version-format
+ def test_extract_version_00_with_unsupported_suffix(self):
+ self.invalid_traceparent_helper(
+ suffix="-f00",
+ assert_context_msg="We provided an invalid traceparent with version=0x00 and suffix information which is not supported in this version, so the context should still be empty.",
+ )
+
+ # https://www.w3.org/TR/trace-context/#versioning-of-traceparent
+ # See the parsing of the sampled bit of flags.
+ def test_extract_future_version_with_future_suffix_data(self):
+ self.valid_traceparent_helper(
+ version=0x99,
+ suffix="-f00",
+ assert_context_msg="We provided a traceparent that is possibly valid in the future with version=0x99 and suffix information, so the context be non-empty.",
+ )
+
+ # https://www.w3.org/TR/trace-context/#trace-id
+ def test_extract_trace_id_invalid_all_zeros(self):
+ self.invalid_traceparent_helper(trace_id=0)
+
+ # https://www.w3.org/TR/trace-context/#parent-id
+ def test_extract_span_id_invalid_all_zeros(self):
+ self.invalid_traceparent_helper(span_id=0)
+
+ def test_extract_non_decimal_trace_flags(self):
+ self.valid_traceparent_helper(trace_flags=0xA0)
diff --git a/opentelemetry-api/tests/propagators/test_w3cbaggagepropagator.py b/opentelemetry-api/tests/propagators/test_w3cbaggagepropagator.py
new file mode 100644
index 0000000000..ccc4b3cb2d
--- /dev/null
+++ b/opentelemetry-api/tests/propagators/test_w3cbaggagepropagator.py
@@ -0,0 +1,264 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# type: ignore
+
+from logging import WARNING
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from opentelemetry.baggage import get_all, set_baggage
+from opentelemetry.baggage.propagation import (
+ W3CBaggagePropagator,
+ _format_baggage,
+)
+from opentelemetry.context import get_current
+
+
+class TestW3CBaggagePropagator(TestCase):
+ def setUp(self):
+ self.propagator = W3CBaggagePropagator()
+
+ def _extract(self, header_value):
+ """Test helper"""
+ header = {"baggage": [header_value]}
+ return get_all(self.propagator.extract(header))
+
+ def _inject(self, values):
+ """Test helper"""
+ ctx = get_current()
+ for k, v in values.items():
+ ctx = set_baggage(k, v, context=ctx)
+ output = {}
+ self.propagator.inject(output, context=ctx)
+ return output.get("baggage")
+
+ def test_no_context_header(self):
+ baggage_entries = get_all(self.propagator.extract({}))
+ self.assertEqual(baggage_entries, {})
+
+ def test_empty_context_header(self):
+ header = ""
+ self.assertEqual(self._extract(header), {})
+
+ def test_valid_header(self):
+ header = "key1=val1,key2=val2"
+ expected = {"key1": "val1", "key2": "val2"}
+ self.assertEqual(self._extract(header), expected)
+
+ def test_invalid_header_with_space(self):
+ header = "key1 = val1, key2 =val2 "
+ self.assertEqual(self._extract(header), {})
+
+ def test_valid_header_with_properties(self):
+ header = "key1=val1,key2=val2;prop=1;prop2;prop3=2"
+ expected = {"key1": "val1", "key2": "val2;prop=1;prop2;prop3=2"}
+ self.assertEqual(self._extract(header), expected)
+
+ def test_valid_header_with_url_escaped_values(self):
+ header = "key1=val1,key2=val2%3Aval3,key3=val4%40%23%24val5"
+ expected = {
+ "key1": "val1",
+ "key2": "val2:val3",
+ "key3": "val4@#$val5",
+ }
+ self.assertEqual(self._extract(header), expected)
+
+ def test_header_with_invalid_value(self):
+ header = "key1=val1,key2=val2,a,val3"
+ with self.assertLogs(level=WARNING) as warning:
+ self._extract(header)
+ self.assertIn(
+ "Baggage list-member `a` doesn't match the format",
+ warning.output[0],
+ )
+
+ def test_valid_header_with_empty_value(self):
+ header = "key1=,key2=val2"
+ expected = {"key1": "", "key2": "val2"}
+ self.assertEqual(self._extract(header), expected)
+
+ def test_invalid_header(self):
+ self.assertEqual(self._extract("header1"), {})
+ self.assertEqual(self._extract(" = "), {})
+
+ def test_header_too_long(self):
+ long_value = "s" * (W3CBaggagePropagator._MAX_HEADER_LENGTH + 1)
+ header = f"key1={long_value}"
+ expected = {}
+ self.assertEqual(self._extract(header), expected)
+
+ def test_header_contains_too_many_entries(self):
+ header = ",".join(
+ [f"key{k}=val" for k in range(W3CBaggagePropagator._MAX_PAIRS + 1)]
+ )
+ self.assertEqual(
+ len(self._extract(header)), W3CBaggagePropagator._MAX_PAIRS
+ )
+
+ def test_header_contains_pair_too_long(self):
+ long_value = "s" * (W3CBaggagePropagator._MAX_PAIR_LENGTH + 1)
+ header = f"key1=value1,key2={long_value},key3=value3"
+ expected = {"key1": "value1", "key3": "value3"}
+ with self.assertLogs(level=WARNING) as warning:
+ self.assertEqual(self._extract(header), expected)
+ self.assertIn(
+ "exceeded the maximum number of bytes per list-member",
+ warning.output[0],
+ )
+
+ def test_extract_unquote_plus(self):
+ self.assertEqual(
+ self._extract("keykey=value%5Evalue"), {"keykey": "value^value"}
+ )
+ self.assertEqual(
+ self._extract("key%23key=value%23value"),
+ {"key#key": "value#value"},
+ )
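+
+    # The decoding exercised above is plain form-style percent-decoding; a
+    # quick illustration with the stdlib equivalents:
+    def _example_stdlib_round_trip(self):
+        from urllib.parse import quote_plus, unquote_plus
+
+        self.assertEqual(unquote_plus("value%5Evalue"), "value^value")
+        self.assertEqual(quote_plus("value^value"), "value%5Evalue")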
+
+ def test_header_max_entries_skip_invalid_entry(self):
+
+ with self.assertLogs(level=WARNING) as warning:
+ self.assertEqual(
+ self._extract(
+ ",".join(
+ [
+ f"key{index}=value{index}"
+ if index != 2
+ else (
+ f"key{index}="
+ f"value{'s' * (W3CBaggagePropagator._MAX_PAIR_LENGTH + 1)}"
+ )
+ for index in range(
+ W3CBaggagePropagator._MAX_PAIRS + 1
+ )
+ ]
+ )
+ ),
+ {
+ f"key{index}": f"value{index}"
+ for index in range(W3CBaggagePropagator._MAX_PAIRS + 1)
+ if index != 2
+ },
+ )
+ self.assertIn(
+ "exceeded the maximum number of list-members",
+ warning.output[0],
+ )
+
+ with self.assertLogs(level=WARNING) as warning:
+ self.assertEqual(
+ self._extract(
+ ",".join(
+ [
+ f"key{index}=value{index}"
+ if index != 2
+ else f"key{index}xvalue{index}"
+ for index in range(
+ W3CBaggagePropagator._MAX_PAIRS + 1
+ )
+ ]
+ )
+ ),
+ {
+ f"key{index}": f"value{index}"
+ for index in range(W3CBaggagePropagator._MAX_PAIRS + 1)
+ if index != 2
+ },
+ )
+ self.assertIn(
+ "exceeded the maximum number of list-members",
+ warning.output[0],
+ )
+
+ def test_inject_no_baggage_entries(self):
+ values = {}
+ output = self._inject(values)
+        self.assertIsNone(output)
+
+ def test_inject_space_entries(self):
+ self.assertEqual("key=val+ue", self._inject({"key": "val ue"}))
+
+ def test_inject(self):
+ values = {
+ "key1": "val1",
+ "key2": "val2",
+ }
+ output = self._inject(values)
+ self.assertIn("key1=val1", output)
+ self.assertIn("key2=val2", output)
+
+ def test_inject_escaped_values(self):
+ values = {
+ "key1": "val1,val2",
+ "key2": "val3=4",
+ }
+ output = self._inject(values)
+ self.assertIn("key2=val3%3D4", output)
+
+ def test_inject_non_string_values(self):
+ values = {
+ "key1": True,
+ "key2": 123,
+ "key3": 123.567,
+ }
+ output = self._inject(values)
+ self.assertIn("key1=True", output)
+ self.assertIn("key2=123", output)
+ self.assertIn("key3=123.567", output)
+
+ @patch("opentelemetry.baggage.propagation.get_all")
+ @patch("opentelemetry.baggage.propagation._format_baggage")
+ def test_fields(self, mock_format_baggage, mock_baggage):
+
+ mock_setter = Mock()
+
+ self.propagator.inject({}, setter=mock_setter)
+
+ inject_fields = set()
+
+ for mock_call in mock_setter.mock_calls:
+ inject_fields.add(mock_call[1][1])
+
+ self.assertEqual(inject_fields, self.propagator.fields)
+
+ def test__format_baggage(self):
+ self.assertEqual(
+ _format_baggage({"key key": "value value"}), "key+key=value+value"
+ )
+ self.assertEqual(
+ _format_baggage({"key/key": "value/value"}),
+ "key%2Fkey=value%2Fvalue",
+ )
+
+ @patch("opentelemetry.baggage._BAGGAGE_KEY", new="abc")
+ def test_inject_extract(self):
+
+ carrier = {}
+
+ context = set_baggage(
+ "transaction", "string with spaces", context=get_current()
+ )
+
+ self.propagator.inject(carrier, context)
+
+ context = self.propagator.extract(carrier)
+
+ self.assertEqual(
+ carrier, {"baggage": "transaction=string+with+spaces"}
+ )
+
+ self.assertEqual(
+ context, {"abc": {"transaction": "string with spaces"}}
+ )
diff --git a/opentelemetry-api/tests/test_implementation.py b/opentelemetry-api/tests/test_implementation.py
new file mode 100644
index 0000000000..913efbffb3
--- /dev/null
+++ b/opentelemetry-api/tests/test_implementation.py
@@ -0,0 +1,59 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry import trace
+
+
+class TestAPIOnlyImplementation(unittest.TestCase):
+ """
+ This test is in place to ensure the API is returning values that
+ are valid. The same tests have been added to the SDK with
+ different expected results. See issue for more details:
+ https://github.com/open-telemetry/opentelemetry-python/issues/142
+ """
+
+ # TRACER
+
+ def test_tracer(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ trace.TracerProvider() # type:ignore
+
+ def test_default_tracer(self):
+ tracer_provider = trace.NoOpTracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+ with tracer.start_span("test") as span:
+ self.assertEqual(
+ span.get_span_context(), trace.INVALID_SPAN_CONTEXT
+ )
+ self.assertEqual(span, trace.INVALID_SPAN)
+ self.assertIs(span.is_recording(), False)
+ with tracer.start_span("test2") as span2:
+ self.assertEqual(
+ span2.get_span_context(), trace.INVALID_SPAN_CONTEXT
+ )
+ self.assertEqual(span2, trace.INVALID_SPAN)
+ self.assertIs(span2.is_recording(), False)
+
+ def test_span(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ trace.Span() # type:ignore
+
+ def test_default_span(self):
+ span = trace.NonRecordingSpan(trace.INVALID_SPAN_CONTEXT)
+ self.assertEqual(span.get_span_context(), trace.INVALID_SPAN_CONTEXT)
+ self.assertIs(span.is_recording(), False)
diff --git a/opentelemetry-api/tests/trace/__init__.py b/opentelemetry-api/tests/trace/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-api/tests/trace/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-api/tests/trace/propagation/test_textmap.py b/opentelemetry-api/tests/trace/propagation/test_textmap.py
new file mode 100644
index 0000000000..6b22d46f88
--- /dev/null
+++ b/opentelemetry-api/tests/trace/propagation/test_textmap.py
@@ -0,0 +1,44 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import unittest
+
+from opentelemetry.propagators.textmap import DefaultGetter
+
+
+class TestDefaultGetter(unittest.TestCase):
+ def test_get_none(self):
+ getter = DefaultGetter()
+ carrier = {}
+ val = getter.get(carrier, "test")
+ self.assertIsNone(val)
+
+ def test_get_str(self):
+ getter = DefaultGetter()
+ carrier = {"test": "val"}
+ val = getter.get(carrier, "test")
+ self.assertEqual(val, ["val"])
+
+ def test_get_iter(self):
+ getter = DefaultGetter()
+ carrier = {"test": ["val"]}
+ val = getter.get(carrier, "test")
+ self.assertEqual(val, ["val"])
+
+ def test_keys(self):
+ getter = DefaultGetter()
+ keys = getter.keys({"test": "val"})
+ self.assertEqual(keys, ["test"])
diff --git a/opentelemetry-api/tests/trace/propagation/test_tracecontexthttptextformat.py b/opentelemetry-api/tests/trace/propagation/test_tracecontexthttptextformat.py
new file mode 100644
index 0000000000..7fefd8dea6
--- /dev/null
+++ b/opentelemetry-api/tests/trace/propagation/test_tracecontexthttptextformat.py
@@ -0,0 +1,321 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import typing
+import unittest
+from unittest.mock import Mock, patch
+
+from opentelemetry import trace
+from opentelemetry.context import Context
+from opentelemetry.trace.propagation import tracecontext
+from opentelemetry.trace.span import TraceState
+
+FORMAT = tracecontext.TraceContextTextMapPropagator()
+
+
+class TestTraceContextFormat(unittest.TestCase):
+ TRACE_ID = int("12345678901234567890123456789012", 16) # type:int
+ SPAN_ID = int("1234567890123456", 16) # type:int
+
+ def test_no_traceparent_header(self):
+ """When tracecontext headers are not present, a new SpanContext
+ should be created.
+
+ RFC 4.2.2:
+
+ If no traceparent header is received, the vendor creates a new
+ trace-id and parent-id that represents the current request.
+ """
+ output: typing.Dict[str, typing.List[str]] = {}
+ span = trace.get_current_span(FORMAT.extract(output))
+ self.assertIsInstance(span.get_span_context(), trace.SpanContext)
+
+ def test_headers_with_tracestate(self):
+ """When there is a traceparent and tracestate header, data from
+ both should be added to the SpanContext.
+ """
+ traceparent_value = "00-{trace_id}-{span_id}-00".format(
+ trace_id=format(self.TRACE_ID, "032x"),
+ span_id=format(self.SPAN_ID, "016x"),
+ )
+ tracestate_value = "foo=1,bar=2,baz=3"
+ span_context = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [traceparent_value],
+ "tracestate": [tracestate_value],
+ },
+ )
+ ).get_span_context()
+ self.assertEqual(span_context.trace_id, self.TRACE_ID)
+ self.assertEqual(span_context.span_id, self.SPAN_ID)
+ self.assertEqual(
+ span_context.trace_state, {"foo": "1", "bar": "2", "baz": "3"}
+ )
+ self.assertTrue(span_context.is_remote)
+ output: typing.Dict[str, str] = {}
+ span = trace.NonRecordingSpan(span_context)
+
+ ctx = trace.set_span_in_context(span)
+ FORMAT.inject(output, context=ctx)
+ self.assertEqual(output["traceparent"], traceparent_value)
+ for pair in ["foo=1", "bar=2", "baz=3"]:
+ self.assertIn(pair, output["tracestate"])
+ self.assertEqual(output["tracestate"].count(","), 2)
+
+ def test_invalid_trace_id(self):
+ """If the trace id is invalid, we must ignore the full traceparent header,
+ and return a random, valid trace.
+
+ Also ignore any tracestate.
+
+ RFC 3.2.2.3
+
+ If the trace-id value is invalid (for example if it contains
+ non-allowed characters or all zeros), vendors MUST ignore the
+ traceparent.
+
+ RFC 3.3
+
+ If the vendor failed to parse traceparent, it MUST NOT attempt to
+ parse tracestate.
+ Note that the opposite is not true: failure to parse tracestate MUST
+ NOT affect the parsing of traceparent.
+ """
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-00000000000000000000000000000000-1234567890123456-00"
+ ],
+ "tracestate": ["foo=1,bar=2,foo=3"],
+ },
+ )
+ )
+ self.assertEqual(span.get_span_context(), trace.INVALID_SPAN_CONTEXT)
+
+ def test_invalid_parent_id(self):
+ """If the parent id is invalid, we must ignore the full traceparent
+ header.
+
+ Also ignore any tracestate.
+
+ RFC 3.2.2.3
+
+ Vendors MUST ignore the traceparent when the parent-id is invalid (for
+ example, if it contains non-lowercase hex characters).
+
+ RFC 3.3
+
+ If the vendor failed to parse traceparent, it MUST NOT attempt to parse
+ tracestate.
+ Note that the opposite is not true: failure to parse tracestate MUST
+ NOT affect the parsing of traceparent.
+ """
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-00000000000000000000000000000000-0000000000000000-00"
+ ],
+ "tracestate": ["foo=1,bar=2,foo=3"],
+ },
+ )
+ )
+ self.assertEqual(span.get_span_context(), trace.INVALID_SPAN_CONTEXT)
+
+ def test_no_send_empty_tracestate(self):
+ """If the tracestate is empty, do not set the header.
+
+ RFC 3.3.1.1
+
+ Empty and whitespace-only list members are allowed. Vendors MUST accept
+ empty tracestate headers but SHOULD avoid sending them.
+ """
+ output: typing.Dict[str, str] = {}
+ span = trace.NonRecordingSpan(
+ trace.SpanContext(self.TRACE_ID, self.SPAN_ID, is_remote=False)
+ )
+ ctx = trace.set_span_in_context(span)
+ FORMAT.inject(output, context=ctx)
+ self.assertTrue("traceparent" in output)
+ self.assertFalse("tracestate" in output)
+
+ def test_format_not_supported(self):
+ """If the traceparent does not adhere to the supported format, discard it and
+ create a new tracecontext.
+
+ RFC 4.3
+
+ If the version cannot be parsed, return an invalid trace header.
+ """
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-12345678901234567890123456789012-"
+ "1234567890123456-00-residue"
+ ],
+ "tracestate": ["foo=1,bar=2,foo=3"],
+ },
+ )
+ )
+ self.assertEqual(span.get_span_context(), trace.INVALID_SPAN_CONTEXT)
+
+ def test_propagate_invalid_context(self):
+ """Do not propagate invalid trace context."""
+ output: typing.Dict[str, str] = {}
+ ctx = trace.set_span_in_context(trace.INVALID_SPAN)
+ FORMAT.inject(output, context=ctx)
+ self.assertFalse("traceparent" in output)
+
+ def test_tracestate_empty_header(self):
+ """Test tracestate with an additional empty header (should be ignored)"""
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-12345678901234567890123456789012-1234567890123456-00"
+ ],
+ "tracestate": ["foo=1", ""],
+ },
+ )
+ )
+ self.assertEqual(span.get_span_context().trace_state["foo"], "1")
+
+ def test_tracestate_header_with_trailing_comma(self):
+ """Do not propagate invalid trace context."""
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-12345678901234567890123456789012-1234567890123456-00"
+ ],
+ "tracestate": ["foo=1,"],
+ },
+ )
+ )
+ self.assertEqual(span.get_span_context().trace_state["foo"], "1")
+
+ def test_tracestate_keys(self):
+ """Test for valid key patterns in the tracestate"""
+ tracestate_value = ",".join(
+ [
+ "1a-2f@foo=bar1",
+ "1a-_*/2b@foo=bar2",
+ "foo=bar3",
+ "foo-_*/bar=bar4",
+ ]
+ )
+ span = trace.get_current_span(
+ FORMAT.extract(
+ {
+ "traceparent": [
+ "00-12345678901234567890123456789012-"
+ "1234567890123456-00"
+ ],
+ "tracestate": [tracestate_value],
+ },
+ )
+ )
+ self.assertEqual(
+ span.get_span_context().trace_state["1a-2f@foo"], "bar1"
+ )
+ self.assertEqual(
+ span.get_span_context().trace_state["1a-_*/2b@foo"], "bar2"
+ )
+ self.assertEqual(span.get_span_context().trace_state["foo"], "bar3")
+ self.assertEqual(
+ span.get_span_context().trace_state["foo-_*/bar"], "bar4"
+ )
+
+ @patch("opentelemetry.trace.INVALID_SPAN_CONTEXT")
+ @patch("opentelemetry.trace.get_current_span")
+ def test_fields(self, mock_get_current_span, mock_invalid_span_context):
+
+ mock_get_current_span.configure_mock(
+ return_value=Mock(
+ **{
+ "get_span_context.return_value": Mock(
+ **{
+ "trace_id": 1,
+ "span_id": 2,
+ "trace_flags": 3,
+ "trace_state": TraceState([("a", "b")]),
+ }
+ )
+ }
+ )
+ )
+
+ mock_setter = Mock()
+
+ FORMAT.inject({}, setter=mock_setter)
+
+ inject_fields = set()
+
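+        # each entry in mock_calls is a (name, args, kwargs) tuple; the
+        # setter is called as set(carrier, key, value), so args[1] is the
+        # injected header name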
+ for mock_call in mock_setter.mock_calls:
+ inject_fields.add(mock_call[1][1])
+
+ self.assertEqual(inject_fields, FORMAT.fields)
+
+ def test_extract_no_trace_parent_to_explicit_ctx(self):
+ carrier = {"tracestate": ["foo=1"]}
+ orig_ctx = Context({"k1": "v1"})
+
+ ctx = FORMAT.extract(carrier, orig_ctx)
+ self.assertDictEqual(orig_ctx, ctx)
+
+ def test_extract_no_trace_parent_to_implicit_ctx(self):
+ carrier = {"tracestate": ["foo=1"]}
+
+ ctx = FORMAT.extract(carrier)
+ self.assertDictEqual(Context(), ctx)
+
+ def test_extract_invalid_trace_parent_to_explicit_ctx(self):
+ trace_parent_headers = [
+ "invalid",
+ "00-00000000000000000000000000000000-1234567890123456-00",
+ "00-12345678901234567890123456789012-0000000000000000-00",
+ "00-12345678901234567890123456789012-1234567890123456-00-residue",
+ ]
+ for trace_parent in trace_parent_headers:
+ with self.subTest(trace_parent=trace_parent):
+ carrier = {
+ "traceparent": [trace_parent],
+ "tracestate": ["foo=1"],
+ }
+ orig_ctx = Context({"k1": "v1"})
+
+ ctx = FORMAT.extract(carrier, orig_ctx)
+ self.assertDictEqual(orig_ctx, ctx)
+
+ def test_extract_invalid_trace_parent_to_implicit_ctx(self):
+ trace_parent_headers = [
+ "invalid",
+ "00-00000000000000000000000000000000-1234567890123456-00",
+ "00-12345678901234567890123456789012-0000000000000000-00",
+ "00-12345678901234567890123456789012-1234567890123456-00-residue",
+ ]
+ for trace_parent in trace_parent_headers:
+ with self.subTest(trace_parent=trace_parent):
+ carrier = {
+ "traceparent": [trace_parent],
+ "tracestate": ["foo=1"],
+ }
+
+ ctx = FORMAT.extract(carrier)
+ self.assertDictEqual(Context(), ctx)
diff --git a/opentelemetry-api/tests/trace/test_defaultspan.py b/opentelemetry-api/tests/trace/test_defaultspan.py
new file mode 100644
index 0000000000..fbd3c00774
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_defaultspan.py
@@ -0,0 +1,35 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry import trace
+
+
+class TestNonRecordingSpan(unittest.TestCase):
+ def test_ctor(self):
+ context = trace.SpanContext(
+ 1,
+ 1,
+ is_remote=False,
+ trace_flags=trace.DEFAULT_TRACE_OPTIONS,
+ trace_state=trace.DEFAULT_TRACE_STATE,
+ )
+ span = trace.NonRecordingSpan(context)
+ self.assertEqual(context, span.get_span_context())
+
+ def test_invalid_span(self):
+ self.assertIsNotNone(trace.INVALID_SPAN)
+ self.assertIsNotNone(trace.INVALID_SPAN.get_span_context())
+ self.assertFalse(trace.INVALID_SPAN.get_span_context().is_valid)
diff --git a/opentelemetry-api/tests/trace/test_globals.py b/opentelemetry-api/tests/trace/test_globals.py
new file mode 100644
index 0000000000..c2cc80db82
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_globals.py
@@ -0,0 +1,150 @@
+import unittest
+from unittest.mock import Mock, patch
+
+from opentelemetry import context, trace
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase, MockFunc
+from opentelemetry.test.globals_test import TraceGlobalsTest
+from opentelemetry.trace.status import Status, StatusCode
+
+
+class TestSpan(trace.NonRecordingSpan):
+ has_ended = False
+ recorded_exception = None
+ recorded_status = Status(status_code=StatusCode.UNSET)
+
+ def set_status(self, status, description=None):
+ self.recorded_status = status
+
+ def end(self, end_time=None):
+ self.has_ended = True
+
+ def is_recording(self):
+ return not self.has_ended
+
+ def record_exception(
+ self, exception, attributes=None, timestamp=None, escaped=False
+ ):
+ self.recorded_exception = exception
+
+
+class TestGlobals(TraceGlobalsTest, unittest.TestCase):
+ @staticmethod
+ @patch("opentelemetry.trace._TRACER_PROVIDER")
+ def test_get_tracer(mock_tracer_provider): # type: ignore
+ """trace.get_tracer should proxy to the global tracer provider."""
+ trace.get_tracer("foo", "var")
+ mock_tracer_provider.get_tracer.assert_called_with("foo", "var", None)
+ mock_provider = Mock()
+ trace.get_tracer("foo", "var", mock_provider)
+ mock_provider.get_tracer.assert_called_with("foo", "var", None)
+
+
+class TestGlobalsConcurrency(TraceGlobalsTest, ConcurrencyTestBase):
+ @patch("opentelemetry.trace.logger")
+ def test_set_tracer_provider_many_threads(self, mock_logger) -> None: # type: ignore
+ mock_logger.warning = MockFunc()
+
+ def do_concurrently() -> Mock:
+ # first get a proxy tracer
+ proxy_tracer = trace.ProxyTracerProvider().get_tracer("foo")
+
+ # try to set the global tracer provider
+ mock_tracer_provider = Mock(get_tracer=MockFunc())
+ trace.set_tracer_provider(mock_tracer_provider)
+
+ # start a span through the proxy which will call through to the mock provider
+ proxy_tracer.start_span("foo")
+
+ return mock_tracer_provider
+
+ num_threads = 100
+ mock_tracer_providers = self.run_with_many_threads(
+ do_concurrently,
+ num_threads=num_threads,
+ )
+
+ # despite trying to set tracer provider many times, only one of the
+ # mock_tracer_providers should have stuck and been called from
+ # proxy_tracer.start_span()
+ mock_tps_with_any_call = [
+ mock
+ for mock in mock_tracer_providers
+ if mock.get_tracer.call_count > 0
+ ]
+
+ self.assertEqual(len(mock_tps_with_any_call), 1)
+ self.assertEqual(
+ mock_tps_with_any_call[0].get_tracer.call_count, num_threads
+ )
+
+ # should have warned every time except for the successful set
+ self.assertEqual(mock_logger.warning.call_count, num_threads - 1)
+
+
+class TestTracer(unittest.TestCase):
+ def setUp(self):
+ self.tracer = trace.NoOpTracer()
+
+ def test_get_current_span(self):
+ """NoOpTracer's start_span will also
+ be retrievable via get_current_span
+ """
+ self.assertEqual(trace.get_current_span(), trace.INVALID_SPAN)
+ span = trace.NonRecordingSpan(trace.INVALID_SPAN_CONTEXT)
+ ctx = trace.set_span_in_context(span)
+ token = context.attach(ctx)
+ try:
+ self.assertIs(trace.get_current_span(), span)
+ finally:
+ context.detach(token)
+ self.assertEqual(trace.get_current_span(), trace.INVALID_SPAN)
+
+
+class TestUseTracer(unittest.TestCase):
+ def test_use_span(self):
+ self.assertEqual(trace.get_current_span(), trace.INVALID_SPAN)
+ span = trace.NonRecordingSpan(trace.INVALID_SPAN_CONTEXT)
+ with trace.use_span(span):
+ self.assertIs(trace.get_current_span(), span)
+ self.assertEqual(trace.get_current_span(), trace.INVALID_SPAN)
+
+ def test_use_span_end_on_exit(self):
+
+ test_span = TestSpan(trace.INVALID_SPAN_CONTEXT)
+
+ with trace.use_span(test_span):
+ pass
+ self.assertFalse(test_span.has_ended)
+
+ with trace.use_span(test_span, end_on_exit=True):
+ pass
+ self.assertTrue(test_span.has_ended)
+
+ def test_use_span_exception(self):
+ class TestUseSpanException(Exception):
+ pass
+
+ test_span = TestSpan(trace.INVALID_SPAN_CONTEXT)
+ exception = TestUseSpanException("test exception")
+ with self.assertRaises(TestUseSpanException):
+ with trace.use_span(test_span):
+ raise exception
+
+ self.assertEqual(test_span.recorded_exception, exception)
+
+ def test_use_span_set_status(self):
+ class TestUseSpanException(Exception):
+ pass
+
+ test_span = TestSpan(trace.INVALID_SPAN_CONTEXT)
+ with self.assertRaises(TestUseSpanException):
+ with trace.use_span(test_span):
+ raise TestUseSpanException("test error")
+
+ self.assertEqual(
+ test_span.recorded_status.status_code, StatusCode.ERROR
+ )
+ self.assertEqual(
+ test_span.recorded_status.description,
+ "TestUseSpanException: test error",
+ )
diff --git a/opentelemetry-api/tests/trace/test_immutablespancontext.py b/opentelemetry-api/tests/trace/test_immutablespancontext.py
new file mode 100644
index 0000000000..7e98470e13
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_immutablespancontext.py
@@ -0,0 +1,58 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry import trace
+from opentelemetry.trace import TraceFlags, TraceState
+
+
+class TestImmutableSpanContext(unittest.TestCase):
+ def test_ctor(self):
+ context = trace.SpanContext(
+ 1,
+ 1,
+ is_remote=False,
+ trace_flags=trace.DEFAULT_TRACE_OPTIONS,
+ trace_state=trace.DEFAULT_TRACE_STATE,
+ )
+
+ self.assertEqual(context.trace_id, 1)
+ self.assertEqual(context.span_id, 1)
+ self.assertEqual(context.is_remote, False)
+ self.assertEqual(context.trace_flags, trace.DEFAULT_TRACE_OPTIONS)
+ self.assertEqual(context.trace_state, trace.DEFAULT_TRACE_STATE)
+
+ def test_attempt_change_attributes(self):
+ context = trace.SpanContext(
+ 1,
+ 2,
+ is_remote=False,
+ trace_flags=trace.DEFAULT_TRACE_OPTIONS,
+ trace_state=trace.DEFAULT_TRACE_STATE,
+ )
+
+ # attempt to change the attribute values
+ context.trace_id = 2 # type: ignore
+ context.span_id = 3 # type: ignore
+ context.is_remote = True # type: ignore
+ context.trace_flags = TraceFlags(3) # type: ignore
+ context.trace_state = TraceState([("test", "test")]) # type: ignore
+
+ # check if attributes changed
+ self.assertEqual(context.trace_id, 1)
+ self.assertEqual(context.span_id, 2)
+ self.assertEqual(context.is_remote, False)
+ self.assertEqual(context.trace_flags, trace.DEFAULT_TRACE_OPTIONS)
+ self.assertEqual(context.trace_state, trace.DEFAULT_TRACE_STATE)
diff --git a/opentelemetry-api/tests/trace/test_proxy.py b/opentelemetry-api/tests/trace/test_proxy.py
new file mode 100644
index 0000000000..e48a2157ae
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_proxy.py
@@ -0,0 +1,103 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=W0212,W0222,W0221
+import typing
+import unittest
+from contextlib import contextmanager
+
+from opentelemetry import trace
+from opentelemetry.test.globals_test import TraceGlobalsTest
+from opentelemetry.trace.span import (
+ INVALID_SPAN_CONTEXT,
+ NonRecordingSpan,
+ Span,
+)
+
+
+class TestProvider(trace.NoOpTracerProvider):
+ def get_tracer(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> trace.Tracer:
+ return TestTracer()
+
+
+class TestTracer(trace.NoOpTracer):
+ def start_span(self, *args, **kwargs):
+ return TestSpan(INVALID_SPAN_CONTEXT)
+
+ @contextmanager
+ def start_as_current_span(self, *args, **kwargs): # type: ignore
+ with trace.use_span(self.start_span(*args, **kwargs)) as span: # type: ignore
+ yield span
+
+
+class TestSpan(NonRecordingSpan):
+ pass
+
+
+class TestProxy(TraceGlobalsTest, unittest.TestCase):
+ def test_proxy_tracer(self):
+ provider = trace.get_tracer_provider()
+ # proxy provider
+ self.assertIsInstance(provider, trace.ProxyTracerProvider)
+
+ # provider returns proxy tracer
+ tracer = provider.get_tracer("proxy-test")
+ self.assertIsInstance(tracer, trace.ProxyTracer)
+
+ with tracer.start_span("span1") as span:
+ self.assertIsInstance(span, trace.NonRecordingSpan)
+
+ with tracer.start_as_current_span("span2") as span:
+ self.assertIsInstance(span, trace.NonRecordingSpan)
+
+ # set a real provider
+ trace.set_tracer_provider(TestProvider())
+
+ # get_tracer_provider() now returns the real provider
+ self.assertIsInstance(trace.get_tracer_provider(), TestProvider)
+
+ # references to the old provider still work but return real tracer now
+ real_tracer = provider.get_tracer("proxy-test")
+ self.assertIsInstance(real_tracer, TestTracer)
+
+ # reference to old proxy tracer now delegates to a real tracer and
+ # creates real spans
+ with tracer.start_span("") as span:
+ self.assertIsInstance(span, TestSpan)
+
+ def test_late_config(self):
+ # get a tracer and instrument a function as we would at the
+ # root of a module
+ tracer = trace.get_tracer("test")
+
+ @tracer.start_as_current_span("span")
+ def my_function() -> Span:
+ return trace.get_current_span()
+
+ # call function before configuring tracing provider, should
+ # return INVALID_SPAN from the NoOpTracer
+ self.assertEqual(my_function(), trace.INVALID_SPAN)
+
+ # configure tracing provider
+ trace.set_tracer_provider(TestProvider())
+ # call function again, we should now be getting a TestSpan
+ self.assertIsInstance(my_function(), TestSpan)
diff --git a/opentelemetry-api/tests/trace/test_span_context.py b/opentelemetry-api/tests/trace/test_span_context.py
new file mode 100644
index 0000000000..55abb0f559
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_span_context.py
@@ -0,0 +1,89 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pickle
+import unittest
+
+from opentelemetry import trace
+
+
+class TestSpanContext(unittest.TestCase):
+ def test_span_context_pickle(self):
+ """
+        SpanContext needs to be picklable to support multiprocessing,
+        so a span in a newly spawned process can use it as its parent
+ """
+ sc = trace.SpanContext(
+ 1,
+ 2,
+ is_remote=False,
+ trace_flags=trace.DEFAULT_TRACE_OPTIONS,
+ trace_state=trace.DEFAULT_TRACE_STATE,
+ )
+ pickle_sc = pickle.loads(pickle.dumps(sc))
+ self.assertEqual(sc.trace_id, pickle_sc.trace_id)
+ self.assertEqual(sc.span_id, pickle_sc.span_id)
+
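+        # a trace id wider than 128 bits is out of range, so the resulting
+        # context is invalid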
+ invalid_sc = trace.SpanContext(
+ 9999999999999999999999999999999999999999999999999999999999999999999999999999,
+ 9,
+ is_remote=False,
+ trace_flags=trace.DEFAULT_TRACE_OPTIONS,
+ trace_state=trace.DEFAULT_TRACE_STATE,
+ )
+ self.assertFalse(invalid_sc.is_valid)
+
+ def test_trace_id_validity(self):
+ trace_id_max_value = int("f" * 32, 16)
+ span_id = 1
+
+ # valid trace IDs
+ sc = trace.SpanContext(trace_id_max_value, span_id, is_remote=False)
+ self.assertTrue(sc.is_valid)
+
+ sc = trace.SpanContext(1, span_id, is_remote=False)
+ self.assertTrue(sc.is_valid)
+
+ # invalid trace IDs
+ sc = trace.SpanContext(0, span_id, is_remote=False)
+ self.assertFalse(sc.is_valid)
+
+ sc = trace.SpanContext(-1, span_id, is_remote=False)
+ self.assertFalse(sc.is_valid)
+
+ sc = trace.SpanContext(
+ trace_id_max_value + 1, span_id, is_remote=False
+ )
+ self.assertFalse(sc.is_valid)
+
+ def test_span_id_validity(self):
+ span_id_max = int("f" * 16, 16)
+ trace_id = 1
+
+ # valid span IDs
+ sc = trace.SpanContext(trace_id, span_id_max, is_remote=False)
+ self.assertTrue(sc.is_valid)
+
+ sc = trace.SpanContext(trace_id, 1, is_remote=False)
+ self.assertTrue(sc.is_valid)
+
+ # invalid span IDs
+ sc = trace.SpanContext(trace_id, 0, is_remote=False)
+ self.assertFalse(sc.is_valid)
+
+ sc = trace.SpanContext(trace_id, -1, is_remote=False)
+ self.assertFalse(sc.is_valid)
+
+ sc = trace.SpanContext(trace_id, span_id_max + 1, is_remote=False)
+ self.assertFalse(sc.is_valid)
diff --git a/opentelemetry-api/tests/trace/test_status.py b/opentelemetry-api/tests/trace/test_status.py
new file mode 100644
index 0000000000..6388ae9804
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_status.py
@@ -0,0 +1,68 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from logging import WARNING
+
+from opentelemetry.trace.status import Status, StatusCode
+
+
+class TestStatus(unittest.TestCase):
+ def test_constructor(self):
+ status = Status()
+ self.assertIs(status.status_code, StatusCode.UNSET)
+ self.assertIsNone(status.description)
+
+ status = Status(StatusCode.ERROR, "unavailable")
+ self.assertIs(status.status_code, StatusCode.ERROR)
+ self.assertEqual(status.description, "unavailable")
+
+ def test_invalid_description(self):
+ with self.assertLogs(level=WARNING) as warning:
+ status = Status(status_code=StatusCode.ERROR, description={"test": "val"}) # type: ignore
+ self.assertIs(status.status_code, StatusCode.ERROR)
+ self.assertEqual(status.description, None)
+ self.assertIn(
+ "Invalid status description type, expected str",
+ warning.output[0], # type: ignore
+ )
+
+ def test_description_and_non_error_status(self):
+ with self.assertLogs(level=WARNING) as warning:
+ status = Status(
+ status_code=StatusCode.OK, description="status description"
+ )
+ self.assertIs(status.status_code, StatusCode.OK)
+ self.assertEqual(status.description, None)
+ self.assertIn(
+ "description should only be set when status_code is set to StatusCode.ERROR",
+ warning.output[0], # type: ignore
+ )
+
+ with self.assertLogs(level=WARNING) as warning:
+ status = Status(
+ status_code=StatusCode.UNSET, description="status description"
+ )
+ self.assertIs(status.status_code, StatusCode.UNSET)
+ self.assertEqual(status.description, None)
+ self.assertIn(
+ "description should only be set when status_code is set to StatusCode.ERROR",
+ warning.output[0], # type: ignore
+ )
+
+ status = Status(
+ status_code=StatusCode.ERROR, description="status description"
+ )
+ self.assertIs(status.status_code, StatusCode.ERROR)
+ self.assertEqual(status.description, "status description")
diff --git a/opentelemetry-api/tests/trace/test_tracer.py b/opentelemetry-api/tests/trace/test_tracer.py
new file mode 100644
index 0000000000..a7ad589ae6
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_tracer.py
@@ -0,0 +1,70 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from contextlib import contextmanager
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.trace import (
+ INVALID_SPAN,
+ NoOpTracer,
+ Span,
+ Tracer,
+ get_current_span,
+)
+
+
+class TestTracer(TestCase):
+ def setUp(self):
+ self.tracer = NoOpTracer()
+
+ def test_start_span(self):
+ with self.tracer.start_span("") as span:
+ self.assertIsInstance(span, Span)
+
+ def test_start_as_current_span_context_manager(self):
+ with self.tracer.start_as_current_span("") as span:
+ self.assertIsInstance(span, Span)
+
+ def test_start_as_current_span_decorator(self):
+
+ mock_call = Mock()
+
+ class MockTracer(Tracer):
+ def start_span(self, *args, **kwargs):
+ return INVALID_SPAN
+
+ @contextmanager
+ def start_as_current_span(self, *args, **kwargs): # type: ignore
+ mock_call()
+ yield INVALID_SPAN
+
+ mock_tracer = MockTracer()
+
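+        # start_as_current_span also works as a decorator: each call to the
+        # wrapped function enters and exits the span context manager once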
+ @mock_tracer.start_as_current_span("name")
+ def function(): # type: ignore
+ pass
+
+ function() # type: ignore
+ function() # type: ignore
+ function() # type: ignore
+
+ self.assertEqual(mock_call.call_count, 3)
+
+ def test_get_current_span(self):
+ with self.tracer.start_as_current_span("test") as span:
+ get_current_span().set_attribute("test", "test")
+ self.assertEqual(span, INVALID_SPAN)
+ self.assertFalse(hasattr("span", "attributes"))
diff --git a/opentelemetry-api/tests/trace/test_tracestate.py b/opentelemetry-api/tests/trace/test_tracestate.py
new file mode 100644
index 0000000000..625b260d54
--- /dev/null
+++ b/opentelemetry-api/tests/trace/test_tracestate.py
@@ -0,0 +1,114 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# pylint: disable=no-member
+
+import unittest
+
+from opentelemetry.trace.span import TraceState
+
+
+class TestTraceContextFormat(unittest.TestCase):
+ def test_empty_tracestate(self):
+ state = TraceState()
+ self.assertEqual(len(state), 0)
+ self.assertEqual(state.to_header(), "")
+
+ def test_tracestate_valid_pairs(self):
+ pairs = [("1a-2f@foo", "bar1"), ("foo-_*/bar", "bar4")]
+ state = TraceState(pairs)
+ self.assertEqual(len(state), 2)
+ self.assertIsNotNone(state.get("foo-_*/bar"))
+ self.assertEqual(state.get("foo-_*/bar"), "bar4")
+ self.assertEqual(state.to_header(), "1a-2f@foo=bar1,foo-_*/bar=bar4")
+ self.assertIsNone(state.get("random"))
+
+ def test_tracestate_add_valid(self):
+ state = TraceState()
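+        # TraceState is immutable; add() returns a new TraceState instead of
+        # mutating the original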
+ new_state = state.add("1a-2f@foo", "bar4")
+ self.assertEqual(len(new_state), 1)
+ self.assertEqual(new_state.get("1a-2f@foo"), "bar4")
+
+ def test_tracestate_add_invalid(self):
+ state = TraceState()
+ new_state = state.add("%%%nsasa", "val")
+ self.assertEqual(len(new_state), 0)
+ new_state = new_state.add("key", "====val====")
+ self.assertEqual(len(new_state), 0)
+ self.assertEqual(new_state.to_header(), "")
+
+ def test_tracestate_update_valid(self):
+ state = TraceState([("a", "1")])
+ new_state = state.update("a", "2")
+ self.assertEqual(new_state.get("a"), "2")
+ new_state = new_state.add("b", "3")
+ self.assertNotEqual(state, new_state)
+
+ def test_tracestate_update_invalid(self):
+ state = TraceState([("a", "1")])
+ new_state = state.update("a", "2=/")
+ self.assertNotEqual(new_state.get("a"), "2=/")
+ new_state = new_state.update("a", ",,2,,f")
+ self.assertNotEqual(new_state.get("a"), ",,2,,f")
+ self.assertEqual(new_state.get("a"), "1")
+
+ def test_tracestate_delete_preserved(self):
+ state = TraceState([("a", "1"), ("b", "2"), ("c", "3")])
+ new_state = state.delete("b")
+ self.assertIsNone(new_state.get("b"))
+ entries = list(new_state.items())
+ a_place = entries.index(("a", "1"))
+ c_place = entries.index(("c", "3"))
+ self.assertLessEqual(a_place, c_place)
+
+ def test_tracestate_from_header(self):
+ entries = [
+ "1a-2f@foo=bar1",
+ "1a-_*/2b@foo=bar2",
+ "foo=bar3",
+ "foo-_*/bar=bar4",
+ ]
+ header_list = [",".join(entries)]
+ state = TraceState.from_header(header_list)
+ self.assertEqual(state.to_header(), ",".join(entries))
+
+ def test_tracestate_order_changed(self):
+ entries = [
+ "1a-2f@foo=bar1",
+ "1a-_*/2b@foo=bar2",
+ "foo=bar3",
+ "foo-_*/bar=bar4",
+ ]
+ header_list = [",".join(entries)]
+ state = TraceState.from_header(header_list)
+ new_state = state.update("foo", "bar33")
+ entries = list(new_state.items()) # type: ignore
+ foo_place = entries.index(("foo", "bar33")) # type: ignore
+ prev_first_place = entries.index(("1a-2f@foo", "bar1")) # type: ignore
+ self.assertLessEqual(foo_place, prev_first_place)
+
+ def test_trace_contains(self):
+ entries = [
+ "1a-2f@foo=bar1",
+ "1a-_*/2b@foo=bar2",
+ "foo=bar3",
+ "foo-_*/bar=bar4",
+ ]
+ header_list = [",".join(entries)]
+ state = TraceState.from_header(header_list)
+
+ self.assertTrue("foo" in state)
+ self.assertFalse("bar" in state)
+ self.assertIsNone(state.get("bar"))
+ with self.assertRaises(KeyError):
+ state["bar"] # pylint:disable=W0104
diff --git a/opentelemetry-api/tests/util/test__importlib_metadata.py b/opentelemetry-api/tests/util/test__importlib_metadata.py
new file mode 100644
index 0000000000..92a4e7dd62
--- /dev/null
+++ b/opentelemetry-api/tests/util/test__importlib_metadata.py
@@ -0,0 +1,108 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from opentelemetry.metrics import MeterProvider
+from opentelemetry.util._importlib_metadata import EntryPoint, EntryPoints
+from opentelemetry.util._importlib_metadata import (
+ entry_points as importlib_metadata_entry_points,
+)
+
+
+class TestEntryPoints(TestCase):
+ def test_entry_points(self):
+
+ self.assertIsInstance(
+ next(
+ iter(
+ importlib_metadata_entry_points(
+ group="opentelemetry_meter_provider",
+ name="default_meter_provider",
+ )
+ )
+ ).load()(),
+ MeterProvider,
+ )
+
+ def test_uniform_behavior(self):
+ """
+ Test that entry_points behaves the same regardless of the Python
+ version.
+ """
+
+ entry_points = importlib_metadata_entry_points()
+
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_points = entry_points.select(group="opentelemetry_propagator")
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_points = entry_points.select(name="baggage")
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_point = next(iter(entry_points))
+ self.assertIsInstance(entry_point, EntryPoint)
+
+ self.assertEqual(entry_point.name, "baggage")
+ self.assertEqual(entry_point.group, "opentelemetry_propagator")
+ self.assertEqual(
+ entry_point.value,
+ "opentelemetry.baggage.propagation:W3CBaggagePropagator",
+ )
+
+ entry_points = importlib_metadata_entry_points(
+ group="opentelemetry_propagator"
+ )
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_points = entry_points.select(name="baggage")
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_point = next(iter(entry_points))
+ self.assertIsInstance(entry_point, EntryPoint)
+
+ self.assertEqual(entry_point.name, "baggage")
+ self.assertEqual(entry_point.group, "opentelemetry_propagator")
+ self.assertEqual(
+ entry_point.value,
+ "opentelemetry.baggage.propagation:W3CBaggagePropagator",
+ )
+
+ entry_points = importlib_metadata_entry_points(name="baggage")
+ self.assertIsInstance(entry_points, EntryPoints)
+
+ entry_point = next(iter(entry_points))
+ self.assertIsInstance(entry_point, EntryPoint)
+
+ self.assertEqual(entry_point.name, "baggage")
+ self.assertEqual(entry_point.group, "opentelemetry_propagator")
+ self.assertEqual(
+ entry_point.value,
+ "opentelemetry.baggage.propagation:W3CBaggagePropagator",
+ )
+
+ entry_points = importlib_metadata_entry_points(group="abc")
+ self.assertIsInstance(entry_points, EntryPoints)
+ self.assertEqual(len(entry_points), 0)
+
+ entry_points = importlib_metadata_entry_points(
+ group="opentelemetry_propagator", name="abc"
+ )
+ self.assertIsInstance(entry_points, EntryPoints)
+ self.assertEqual(len(entry_points), 0)
+
+ entry_points = importlib_metadata_entry_points(group="abc", name="abc")
+ self.assertIsInstance(entry_points, EntryPoints)
+ self.assertEqual(len(entry_points), 0)
diff --git a/opentelemetry-api/tests/util/test__providers.py b/opentelemetry-api/tests/util/test__providers.py
new file mode 100644
index 0000000000..f7b21ebacf
--- /dev/null
+++ b/opentelemetry-api/tests/util/test__providers.py
@@ -0,0 +1,56 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from importlib import reload
+from os import environ
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from opentelemetry.util import _providers
+
+
+class Test_Providers(TestCase):
+ @patch.dict(
+ environ,
+ { # type: ignore
+ "provider_environment_variable": "mock_provider_environment_variable"
+ },
+ )
+ @patch("opentelemetry.util._importlib_metadata.entry_points")
+ def test__providers(self, mock_entry_points):
+
+ reload(_providers)
+
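+        # make the patched entry_points() return a single mock entry point
+        # whose load() yields a provider factory returning the sentinel "a"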
+ mock_entry_points.configure_mock(
+ **{
+ "side_effect": [
+ [
+ Mock(
+ **{
+ "load.return_value": Mock(
+ **{"return_value": "a"}
+ )
+ }
+ ),
+ ],
+ ]
+ }
+ )
+
+ self.assertEqual(
+ _providers._load_provider(
+ "provider_environment_variable", "provider"
+ ),
+ "a",
+ )
diff --git a/opentelemetry-api/tests/util/test_once.py b/opentelemetry-api/tests/util/test_once.py
new file mode 100644
index 0000000000..ee94318d22
--- /dev/null
+++ b/opentelemetry-api/tests/util/test_once.py
@@ -0,0 +1,48 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase, MockFunc
+from opentelemetry.util._once import Once
+
+
+class TestOnce(ConcurrencyTestBase):
+ def test_once_single_thread(self):
+ once_func = MockFunc()
+ once = Once()
+
+ self.assertEqual(once_func.call_count, 0)
+
+ # first call should run
+ called = once.do_once(once_func)
+ self.assertTrue(called)
+ self.assertEqual(once_func.call_count, 1)
+
+ # subsequent calls do nothing
+ called = once.do_once(once_func)
+ self.assertFalse(called)
+ self.assertEqual(once_func.call_count, 1)
+
+ def test_once_many_threads(self):
+ once_func = MockFunc()
+ once = Once()
+
+ def run_concurrently() -> bool:
+ return once.do_once(once_func)
+
+ results = self.run_with_many_threads(run_concurrently, num_threads=100)
+
+ self.assertEqual(once_func.call_count, 1)
+
+ # check that only one of the threads got True
+ self.assertEqual(results.count(True), 1)
diff --git a/opentelemetry-api/tests/util/test_re.py b/opentelemetry-api/tests/util/test_re.py
new file mode 100644
index 0000000000..ea86f3e700
--- /dev/null
+++ b/opentelemetry-api/tests/util/test_re.py
@@ -0,0 +1,76 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+import unittest
+
+from opentelemetry.util.re import parse_env_headers
+
+
+class TestParseHeaders(unittest.TestCase):
+ def test_parse_env_headers(self):
+ inp = [
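+            # each case is (raw header string, expected pairs, warning expected?)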
+ # invalid header name
+ ("=value", [], True),
+ ("}key=value", [], True),
+ ("@key()=value", [], True),
+ ("/key=value", [], True),
+ # invalid header value
+ ("name=\\", [], True),
+ ('name=value"', [], True),
+ ("name=;value", [], True),
+ # different header values
+ ("name=", [("name", "")], False),
+ ("name===value=", [("name", "==value=")], False),
+ # url-encoded headers
+ ("key=value%20with%20space", [("key", "value with space")], False),
+ ("key%21=value", [("key!", "value")], False),
+ ("%20key%20=%20value%20", [("key", "value")], False),
+ # header name case normalization
+ ("Key=Value", [("key", "Value")], False),
+ # mix of valid and invalid headers
+ (
+ "name1=value1,invalidName, name2 = value2 , name3=value3==",
+ [
+ (
+ "name1",
+ "value1",
+ ),
+ ("name2", "value2"),
+ ("name3", "value3=="),
+ ],
+ True,
+ ),
+ (
+ "=name=valu3; key1; key2, content = application, red=\tvelvet; cake",
+ [("content", "application")],
+ True,
+ ),
+ ]
+ for case_ in inp:
+ headers, expected, warn = case_
+ if warn:
+ with self.assertLogs(level="WARNING") as cm:
+ self.assertEqual(
+ parse_env_headers(headers), dict(expected)
+ )
+                    self.assertIn(
+                        "Header format invalid! Header values in environment "
+                        "variables must be URL encoded per the OpenTelemetry "
+                        "Protocol Exporter specification:",
+                        cm.records[0].message,
+                    )
+ else:
+ self.assertEqual(parse_env_headers(headers), dict(expected))
diff --git a/opentelemetry-proto/LICENSE b/opentelemetry-proto/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/opentelemetry-proto/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/opentelemetry-proto/README.rst b/opentelemetry-proto/README.rst
new file mode 100644
index 0000000000..555fbd70dc
--- /dev/null
+++ b/opentelemetry-proto/README.rst
@@ -0,0 +1,40 @@
+OpenTelemetry Python Proto
+==========================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-proto.svg
+ :target: https://pypi.org/project/opentelemetry-proto/
+
+This library contains the generated code for the OpenTelemetry protobuf data model. The code in the current
+package was generated using the v0.17.0 release_ of opentelemetry-proto.
+
+.. _release: https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.17.0
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-proto
+
+Code Generation
+---------------
+
+These files were generated automatically from code in opentelemetry-proto_.
+To regenerate the code, run ``../scripts/proto_codegen.sh``.
+
+To build against a new release or specific commit of opentelemetry-proto_,
+update the ``PROTO_REPO_BRANCH_OR_COMMIT`` variable in
+``../scripts/proto_codegen.sh``. Then run the script and commit the changes
+as well as any fixes needed in the OTLP exporter.
+
+.. _opentelemetry-proto: https://github.com/open-telemetry/opentelemetry-proto
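+
+As a sketch of that workflow (the variable name comes from the script
+itself and ``v0.17.0`` is the release noted above; adjust both as needed)::
+
+    # inside ../scripts/proto_codegen.sh
+    PROTO_REPO_BRANCH_OR_COMMIT="v0.17.0"
+
+    # then, from this package directory, regenerate and review the diff
+    ../scripts/proto_codegen.sh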
+
+
+References
+----------
+
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
+* `OpenTelemetry Proto <https://github.com/open-telemetry/opentelemetry-proto>`_
+* `proto_codegen.sh script <https://github.com/open-telemetry/opentelemetry-python/blob/main/scripts/proto_codegen.sh>`_
diff --git a/opentelemetry-proto/pyproject.toml b/opentelemetry-proto/pyproject.toml
new file mode 100644
index 0000000000..b44b61b50e
--- /dev/null
+++ b/opentelemetry-proto/pyproject.toml
@@ -0,0 +1,47 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-proto"
+dynamic = ["version"]
+description = "OpenTelemetry Python Proto"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+dependencies = [
+ "protobuf>=3.19, < 5.0",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-proto"
+
+[tool.hatch.version]
+path = "src/opentelemetry/proto/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/opentelemetry-proto/src/opentelemetry/proto/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/collector/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.py
new file mode 100644
index 0000000000..5e6ae0ef92
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.py
@@ -0,0 +1,59 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/collector/logs/v1/logs_service.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.logs.v1 import logs_pb2 as opentelemetry_dot_proto_dot_logs_dot_v1_dot_logs__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n8opentelemetry/proto/collector/logs/v1/logs_service.proto\x12%opentelemetry.proto.collector.logs.v1\x1a&opentelemetry/proto/logs/v1/logs.proto\"\\\n\x18\x45xportLogsServiceRequest\x12@\n\rresource_logs\x18\x01 \x03(\x0b\x32).opentelemetry.proto.logs.v1.ResourceLogs\"u\n\x19\x45xportLogsServiceResponse\x12X\n\x0fpartial_success\x18\x01 \x01(\x0b\x32?.opentelemetry.proto.collector.logs.v1.ExportLogsPartialSuccess\"O\n\x18\x45xportLogsPartialSuccess\x12\x1c\n\x14rejected_log_records\x18\x01 \x01(\x03\x12\x15\n\rerror_message\x18\x02 \x01(\t2\x9d\x01\n\x0bLogsService\x12\x8d\x01\n\x06\x45xport\x12?.opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest\x1a@.opentelemetry.proto.collector.logs.v1.ExportLogsServiceResponse\"\x00\x42\x98\x01\n(io.opentelemetry.proto.collector.logs.v1B\x10LogsServiceProtoP\x01Z0go.opentelemetry.io/proto/otlp/collector/logs/v1\xaa\x02%OpenTelemetry.Proto.Collector.Logs.V1b\x06proto3')
+
+
+
+_EXPORTLOGSSERVICEREQUEST = DESCRIPTOR.message_types_by_name['ExportLogsServiceRequest']
+_EXPORTLOGSSERVICERESPONSE = DESCRIPTOR.message_types_by_name['ExportLogsServiceResponse']
+_EXPORTLOGSPARTIALSUCCESS = DESCRIPTOR.message_types_by_name['ExportLogsPartialSuccess']
+ExportLogsServiceRequest = _reflection.GeneratedProtocolMessageType('ExportLogsServiceRequest', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTLOGSSERVICEREQUEST,
+ '__module__' : 'opentelemetry.proto.collector.logs.v1.logs_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest)
+ })
+_sym_db.RegisterMessage(ExportLogsServiceRequest)
+
+ExportLogsServiceResponse = _reflection.GeneratedProtocolMessageType('ExportLogsServiceResponse', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTLOGSSERVICERESPONSE,
+ '__module__' : 'opentelemetry.proto.collector.logs.v1.logs_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.logs.v1.ExportLogsServiceResponse)
+ })
+_sym_db.RegisterMessage(ExportLogsServiceResponse)
+
+ExportLogsPartialSuccess = _reflection.GeneratedProtocolMessageType('ExportLogsPartialSuccess', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTLOGSPARTIALSUCCESS,
+ '__module__' : 'opentelemetry.proto.collector.logs.v1.logs_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.logs.v1.ExportLogsPartialSuccess)
+ })
+_sym_db.RegisterMessage(ExportLogsPartialSuccess)
+
+_LOGSSERVICE = DESCRIPTOR.services_by_name['LogsService']
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n(io.opentelemetry.proto.collector.logs.v1B\020LogsServiceProtoP\001Z0go.opentelemetry.io/proto/otlp/collector/logs/v1\252\002%OpenTelemetry.Proto.Collector.Logs.V1'
+ _EXPORTLOGSSERVICEREQUEST._serialized_start=139
+ _EXPORTLOGSSERVICEREQUEST._serialized_end=231
+ _EXPORTLOGSSERVICERESPONSE._serialized_start=233
+ _EXPORTLOGSSERVICERESPONSE._serialized_end=350
+ _EXPORTLOGSPARTIALSUCCESS._serialized_start=352
+ _EXPORTLOGSPARTIALSUCCESS._serialized_end=431
+ _LOGSSERVICE._serialized_start=434
+ _LOGSSERVICE._serialized_end=591
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.pyi
new file mode 100644
index 0000000000..cdf57e9fa1
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2.pyi
@@ -0,0 +1,91 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.message
+import opentelemetry.proto.logs.v1.logs_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class ExportLogsServiceRequest(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_LOGS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_logs(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.logs.v1.logs_pb2.ResourceLogs]:
+ """An array of ResourceLogs.
+ For data coming from a single resource this array will typically contain one
+ element. Intermediary nodes (such as OpenTelemetry Collector) that receive
+ data from multiple origins typically batch the data before forwarding further and
+ in that case this array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_logs : typing.Optional[typing.Iterable[opentelemetry.proto.logs.v1.logs_pb2.ResourceLogs]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_logs",b"resource_logs"]) -> None: ...
+global___ExportLogsServiceRequest = ExportLogsServiceRequest
+
+class ExportLogsServiceResponse(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ PARTIAL_SUCCESS_FIELD_NUMBER: builtins.int
+ @property
+ def partial_success(self) -> global___ExportLogsPartialSuccess:
+ """The details of a partially successful export request.
+
+ If the request is only partially accepted
+ (i.e. when the server accepts only parts of the data and rejects the rest)
+ the server MUST initialize the `partial_success` field and MUST
+ set the `rejected_log_records` field with the number of items it rejected.
+
+ Servers MAY also make use of the `partial_success` field to convey
+ warnings/suggestions to senders even when the request was fully accepted.
+ In such cases, the `rejected_log_records` field MUST have a value of `0` and
+ the `error_message` MUST be non-empty.
+
+ A `partial_success` message with an empty value (`rejected_log_records` = 0 and
+ `error_message` = "") is equivalent to it not being set/present. Senders
+ SHOULD interpret it the same way as in the full success case.
+ """
+ pass
+ def __init__(self,
+ *,
+ partial_success : typing.Optional[global___ExportLogsPartialSuccess] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> None: ...
+global___ExportLogsServiceResponse = ExportLogsServiceResponse
+
+class ExportLogsPartialSuccess(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ REJECTED_LOG_RECORDS_FIELD_NUMBER: builtins.int
+ ERROR_MESSAGE_FIELD_NUMBER: builtins.int
+ rejected_log_records: builtins.int = ...
+ """The number of rejected log records.
+
+ A `rejected_log_records` field holding a `0` value indicates that the
+ request was fully accepted.
+ """
+
+ error_message: typing.Text = ...
+ """A developer-facing human-readable message in English. It should be used
+ either to explain why the server rejected parts of the data during a partial
+ success or to convey warnings/suggestions during a full success. The message
+ should offer guidance on how users can address such issues.
+
+ error_message is an optional field. An error_message with an empty value
+ is equivalent to it not being set.
+ """
+
+ def __init__(self,
+ *,
+ rejected_log_records : builtins.int = ...,
+ error_message : typing.Text = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["error_message",b"error_message","rejected_log_records",b"rejected_log_records"]) -> None: ...
+global___ExportLogsPartialSuccess = ExportLogsPartialSuccess
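Note: per the `partial_success` contract in the stub above, an unset field and an all-default message mean the same thing. A hedged client-side check (the function name is illustrative):

    from opentelemetry.proto.collector.logs.v1 import logs_service_pb2

    def check_export_response(response: logs_service_pb2.ExportLogsServiceResponse) -> None:
        # Unset partial_success, or rejected_log_records == 0 with an empty
        # error_message, both mean the request was fully accepted.
        if not response.HasField("partial_success"):
            return
        ps = response.partial_success
        if ps.rejected_log_records:
            raise RuntimeError(
                f"{ps.rejected_log_records} log records rejected: {ps.error_message}"
            )
        if ps.error_message:
            print("server warning:", ps.error_message)  # accepted, with advice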
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2_grpc.py b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2_grpc.py
new file mode 100644
index 0000000000..4d55e57778
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/logs/v1/logs_service_pb2_grpc.py
@@ -0,0 +1,77 @@
+# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
+"""Client and server classes corresponding to protobuf-defined services."""
+import grpc
+
+from opentelemetry.proto.collector.logs.v1 import logs_service_pb2 as opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2
+
+
+class LogsServiceStub(object):
+ """Service that can be used to push logs between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case logs are sent/received to/from multiple Applications).
+ """
+
+ def __init__(self, channel):
+ """Constructor.
+
+ Args:
+ channel: A grpc.Channel.
+ """
+ self.Export = channel.unary_unary(
+ '/opentelemetry.proto.collector.logs.v1.LogsService/Export',
+ request_serializer=opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceRequest.SerializeToString,
+ response_deserializer=opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceResponse.FromString,
+ )
+
+
+class LogsServiceServicer(object):
+ """Service that can be used to push logs between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case logs are sent/received to/from multiple Applications).
+ """
+
+ def Export(self, request, context):
+ """For performance reasons, it is recommended to keep this RPC
+ alive for the entire life of the application.
+ """
+ context.set_code(grpc.StatusCode.UNIMPLEMENTED)
+ context.set_details('Method not implemented!')
+ raise NotImplementedError('Method not implemented!')
+
+
+def add_LogsServiceServicer_to_server(servicer, server):
+ rpc_method_handlers = {
+ 'Export': grpc.unary_unary_rpc_method_handler(
+ servicer.Export,
+ request_deserializer=opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceRequest.FromString,
+ response_serializer=opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceResponse.SerializeToString,
+ ),
+ }
+ generic_handler = grpc.method_handlers_generic_handler(
+ 'opentelemetry.proto.collector.logs.v1.LogsService', rpc_method_handlers)
+ server.add_generic_rpc_handlers((generic_handler,))
+
+
+ # This class is part of an EXPERIMENTAL API.
+class LogsService(object):
+ """Service that can be used to push logs between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case logs are sent/received to/from multiple Applications).
+ """
+
+ @staticmethod
+ def Export(request,
+ target,
+ options=(),
+ channel_credentials=None,
+ call_credentials=None,
+ insecure=False,
+ compression=None,
+ wait_for_ready=None,
+ timeout=None,
+ metadata=None):
+ return grpc.experimental.unary_unary(request, target, '/opentelemetry.proto.collector.logs.v1.LogsService/Export',
+ opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceRequest.SerializeToString,
+ opentelemetry_dot_proto_dot_collector_dot_logs_dot_v1_dot_logs__service__pb2.ExportLogsServiceResponse.FromString,
+ options, channel_credentials,
+ insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
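Note: a minimal client-side sketch of the stub defined above (the endpoint is illustrative; 4317 is the conventional OTLP/gRPC port, and the target must actually be serving LogsService):

    import grpc

    from opentelemetry.proto.collector.logs.v1 import (
        logs_service_pb2,
        logs_service_pb2_grpc,
    )

    channel = grpc.insecure_channel("localhost:4317")  # plaintext, local collector
    stub = logs_service_pb2_grpc.LogsServiceStub(channel)

    # An empty request is valid protobuf; a real exporter fills resource_logs first.
    response = stub.Export(logs_service_pb2.ExportLogsServiceRequest(), timeout=10)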
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.py
new file mode 100644
index 0000000000..1d9021d702
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.py
@@ -0,0 +1,59 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/collector/metrics/v1/metrics_service.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.metrics.v1 import metrics_pb2 as opentelemetry_dot_proto_dot_metrics_dot_v1_dot_metrics__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n>opentelemetry/proto/collector/metrics/v1/metrics_service.proto\x12(opentelemetry.proto.collector.metrics.v1\x1a,opentelemetry/proto/metrics/v1/metrics.proto\"h\n\x1b\x45xportMetricsServiceRequest\x12I\n\x10resource_metrics\x18\x01 \x03(\x0b\x32/.opentelemetry.proto.metrics.v1.ResourceMetrics\"~\n\x1c\x45xportMetricsServiceResponse\x12^\n\x0fpartial_success\x18\x01 \x01(\x0b\x32\x45.opentelemetry.proto.collector.metrics.v1.ExportMetricsPartialSuccess\"R\n\x1b\x45xportMetricsPartialSuccess\x12\x1c\n\x14rejected_data_points\x18\x01 \x01(\x03\x12\x15\n\rerror_message\x18\x02 \x01(\t2\xac\x01\n\x0eMetricsService\x12\x99\x01\n\x06\x45xport\x12\x45.opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceRequest\x1a\x46.opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceResponse\"\x00\x42\xa4\x01\n+io.opentelemetry.proto.collector.metrics.v1B\x13MetricsServiceProtoP\x01Z3go.opentelemetry.io/proto/otlp/collector/metrics/v1\xaa\x02(OpenTelemetry.Proto.Collector.Metrics.V1b\x06proto3')
+
+
+
+_EXPORTMETRICSSERVICEREQUEST = DESCRIPTOR.message_types_by_name['ExportMetricsServiceRequest']
+_EXPORTMETRICSSERVICERESPONSE = DESCRIPTOR.message_types_by_name['ExportMetricsServiceResponse']
+_EXPORTMETRICSPARTIALSUCCESS = DESCRIPTOR.message_types_by_name['ExportMetricsPartialSuccess']
+ExportMetricsServiceRequest = _reflection.GeneratedProtocolMessageType('ExportMetricsServiceRequest', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTMETRICSSERVICEREQUEST,
+ '__module__' : 'opentelemetry.proto.collector.metrics.v1.metrics_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceRequest)
+ })
+_sym_db.RegisterMessage(ExportMetricsServiceRequest)
+
+ExportMetricsServiceResponse = _reflection.GeneratedProtocolMessageType('ExportMetricsServiceResponse', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTMETRICSSERVICERESPONSE,
+ '__module__' : 'opentelemetry.proto.collector.metrics.v1.metrics_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceResponse)
+ })
+_sym_db.RegisterMessage(ExportMetricsServiceResponse)
+
+ExportMetricsPartialSuccess = _reflection.GeneratedProtocolMessageType('ExportMetricsPartialSuccess', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTMETRICSPARTIALSUCCESS,
+ '__module__' : 'opentelemetry.proto.collector.metrics.v1.metrics_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.metrics.v1.ExportMetricsPartialSuccess)
+ })
+_sym_db.RegisterMessage(ExportMetricsPartialSuccess)
+
+_METRICSSERVICE = DESCRIPTOR.services_by_name['MetricsService']
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n+io.opentelemetry.proto.collector.metrics.v1B\023MetricsServiceProtoP\001Z3go.opentelemetry.io/proto/otlp/collector/metrics/v1\252\002(OpenTelemetry.Proto.Collector.Metrics.V1'
+ _EXPORTMETRICSSERVICEREQUEST._serialized_start=154
+ _EXPORTMETRICSSERVICEREQUEST._serialized_end=258
+ _EXPORTMETRICSSERVICERESPONSE._serialized_start=260
+ _EXPORTMETRICSSERVICERESPONSE._serialized_end=386
+ _EXPORTMETRICSPARTIALSUCCESS._serialized_start=388
+ _EXPORTMETRICSPARTIALSUCCESS._serialized_end=470
+ _METRICSSERVICE._serialized_start=473
+ _METRICSSERVICE._serialized_end=645
+# @@protoc_insertion_point(module_scope)
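Note: the serializer/deserializer pair registered by the companion `*_pb2_grpc` module below is just `SerializeToString`/`FromString` on these classes. A small round-trip sketch (contents illustrative):

    from opentelemetry.proto.collector.metrics.v1 import metrics_service_pb2
    from opentelemetry.proto.metrics.v1 import metrics_pb2

    request = metrics_service_pb2.ExportMetricsServiceRequest(
        resource_metrics=[metrics_pb2.ResourceMetrics()]  # empty but valid
    )
    data = request.SerializeToString()

    # FromString is the inverse; generated messages compare by field values.
    assert metrics_service_pb2.ExportMetricsServiceRequest.FromString(data) == request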
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.pyi
new file mode 100644
index 0000000000..ffd750bdf2
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2.pyi
@@ -0,0 +1,91 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.message
+import opentelemetry.proto.metrics.v1.metrics_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class ExportMetricsServiceRequest(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_METRICS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_metrics(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.metrics.v1.metrics_pb2.ResourceMetrics]:
+ """An array of ResourceMetrics.
+ For data coming from a single resource this array will typically contain one
+ element. Intermediary nodes (such as the OpenTelemetry Collector) that receive
+ data from multiple origins typically batch the data before forwarding it, and
+ in that case this array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_metrics : typing.Optional[typing.Iterable[opentelemetry.proto.metrics.v1.metrics_pb2.ResourceMetrics]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_metrics",b"resource_metrics"]) -> None: ...
+global___ExportMetricsServiceRequest = ExportMetricsServiceRequest
+
+class ExportMetricsServiceResponse(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ PARTIAL_SUCCESS_FIELD_NUMBER: builtins.int
+ @property
+ def partial_success(self) -> global___ExportMetricsPartialSuccess:
+ """The details of a partially successful export request.
+
+ If the request is only partially accepted
+ (i.e. when the server accepts only parts of the data and rejects the rest)
+ the server MUST initialize the `partial_success` field and MUST
+ set the `rejected_data_points` field with the number of items it rejected.
+
+ Servers MAY also make use of the `partial_success` field to convey
+ warnings/suggestions to senders even when the request was fully accepted.
+ In such cases, the `rejected_data_points` field MUST have a value of `0` and
+ the `error_message` MUST be non-empty.
+
+ A `partial_success` message with an empty value (`rejected_data_points` = 0 and
+ `error_message` = "") is equivalent to it not being set/present. Senders
+ SHOULD interpret it the same way as in the full success case.
+ """
+ pass
+ def __init__(self,
+ *,
+ partial_success : typing.Optional[global___ExportMetricsPartialSuccess] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> None: ...
+global___ExportMetricsServiceResponse = ExportMetricsServiceResponse
+
+class ExportMetricsPartialSuccess(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ REJECTED_DATA_POINTS_FIELD_NUMBER: builtins.int
+ ERROR_MESSAGE_FIELD_NUMBER: builtins.int
+ rejected_data_points: builtins.int = ...
+ """The number of rejected data points.
+
+ A `rejected_data_points` field holding a `0` value indicates that the
+ request was fully accepted.
+ """
+
+ error_message: typing.Text = ...
+ """A developer-facing human-readable message in English. It should be used
+ either to explain why the server rejected parts of the data during a partial
+ success or to convey warnings/suggestions during a full success. The message
+ should offer guidance on how users can address such issues.
+
+ error_message is an optional field. An error_message with an empty value
+ is equivalent to it not being set.
+ """
+
+ def __init__(self,
+ *,
+ rejected_data_points : builtins.int = ...,
+ error_message : typing.Text = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["error_message",b"error_message","rejected_data_points",b"rejected_data_points"]) -> None: ...
+global___ExportMetricsPartialSuccess = ExportMetricsPartialSuccess
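Note: the server-side mirror of the partial_success rules above. A sketch of how a receiver might populate the response (the function and its arguments are illustrative):

    from opentelemetry.proto.collector.metrics.v1 import metrics_service_pb2

    def make_export_response(
        rejected: int, reason: str = ""
    ) -> metrics_service_pb2.ExportMetricsServiceResponse:
        response = metrics_service_pb2.ExportMetricsServiceResponse()
        if rejected:
            # Partial acceptance: both fields are required by the contract above.
            response.partial_success.rejected_data_points = rejected
            response.partial_success.error_message = reason or "data points dropped"
        # Leaving partial_success unset signals full success.
        return response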
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2_grpc.py b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2_grpc.py
new file mode 100644
index 0000000000..c181c44641
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/metrics/v1/metrics_service_pb2_grpc.py
@@ -0,0 +1,77 @@
+# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
+"""Client and server classes corresponding to protobuf-defined services."""
+import grpc
+
+from opentelemetry.proto.collector.metrics.v1 import metrics_service_pb2 as opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2
+
+
+class MetricsServiceStub(object):
+ """Service that can be used to push metrics between one Application
+ instrumented with OpenTelemetry and a collector, or between a collector and a
+ central collector.
+ """
+
+ def __init__(self, channel):
+ """Constructor.
+
+ Args:
+ channel: A grpc.Channel.
+ """
+ self.Export = channel.unary_unary(
+ '/opentelemetry.proto.collector.metrics.v1.MetricsService/Export',
+ request_serializer=opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceRequest.SerializeToString,
+ response_deserializer=opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceResponse.FromString,
+ )
+
+
+class MetricsServiceServicer(object):
+ """Service that can be used to push metrics between one Application
+ instrumented with OpenTelemetry and a collector, or between a collector and a
+ central collector.
+ """
+
+ def Export(self, request, context):
+ """For performance reasons, it is recommended to keep this RPC
+ alive for the entire life of the application.
+ """
+ context.set_code(grpc.StatusCode.UNIMPLEMENTED)
+ context.set_details('Method not implemented!')
+ raise NotImplementedError('Method not implemented!')
+
+
+def add_MetricsServiceServicer_to_server(servicer, server):
+ rpc_method_handlers = {
+ 'Export': grpc.unary_unary_rpc_method_handler(
+ servicer.Export,
+ request_deserializer=opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceRequest.FromString,
+ response_serializer=opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceResponse.SerializeToString,
+ ),
+ }
+ generic_handler = grpc.method_handlers_generic_handler(
+ 'opentelemetry.proto.collector.metrics.v1.MetricsService', rpc_method_handlers)
+ server.add_generic_rpc_handlers((generic_handler,))
+
+
+ # This class is part of an EXPERIMENTAL API.
+class MetricsService(object):
+ """Service that can be used to push metrics between one Application
+ instrumented with OpenTelemetry and a collector, or between a collector and a
+ central collector.
+ """
+
+ @staticmethod
+ def Export(request,
+ target,
+ options=(),
+ channel_credentials=None,
+ call_credentials=None,
+ insecure=False,
+ compression=None,
+ wait_for_ready=None,
+ timeout=None,
+ metadata=None):
+ return grpc.experimental.unary_unary(request, target, '/opentelemetry.proto.collector.metrics.v1.MetricsService/Export',
+ opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceRequest.SerializeToString,
+ opentelemetry_dot_proto_dot_collector_dot_metrics_dot_v1_dot_metrics__service__pb2.ExportMetricsServiceResponse.FromString,
+ options, channel_credentials,
+ insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
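Note: a minimal in-process collector built from the servicer above (port and worker count are illustrative):

    from concurrent import futures

    import grpc

    from opentelemetry.proto.collector.metrics.v1 import (
        metrics_service_pb2,
        metrics_service_pb2_grpc,
    )

    class StdoutCollector(metrics_service_pb2_grpc.MetricsServiceServicer):
        def Export(self, request, context):
            # Accept everything; a real collector would validate the data points.
            print("received", len(request.resource_metrics), "ResourceMetrics")
            return metrics_service_pb2.ExportMetricsServiceResponse()

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    metrics_service_pb2_grpc.add_MetricsServiceServicer_to_server(StdoutCollector(), server)
    server.add_insecure_port("[::]:4317")
    server.start()
    server.wait_for_termination()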
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/trace/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.py
new file mode 100644
index 0000000000..fff65da1b7
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.py
@@ -0,0 +1,59 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/collector/trace/v1/trace_service.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.trace.v1 import trace_pb2 as opentelemetry_dot_proto_dot_trace_dot_v1_dot_trace__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n:opentelemetry/proto/collector/trace/v1/trace_service.proto\x12&opentelemetry.proto.collector.trace.v1\x1a(opentelemetry/proto/trace/v1/trace.proto\"`\n\x19\x45xportTraceServiceRequest\x12\x43\n\x0eresource_spans\x18\x01 \x03(\x0b\x32+.opentelemetry.proto.trace.v1.ResourceSpans\"x\n\x1a\x45xportTraceServiceResponse\x12Z\n\x0fpartial_success\x18\x01 \x01(\x0b\x32\x41.opentelemetry.proto.collector.trace.v1.ExportTracePartialSuccess\"J\n\x19\x45xportTracePartialSuccess\x12\x16\n\x0erejected_spans\x18\x01 \x01(\x03\x12\x15\n\rerror_message\x18\x02 \x01(\t2\xa2\x01\n\x0cTraceService\x12\x91\x01\n\x06\x45xport\x12\x41.opentelemetry.proto.collector.trace.v1.ExportTraceServiceRequest\x1a\x42.opentelemetry.proto.collector.trace.v1.ExportTraceServiceResponse\"\x00\x42\x9c\x01\n)io.opentelemetry.proto.collector.trace.v1B\x11TraceServiceProtoP\x01Z1go.opentelemetry.io/proto/otlp/collector/trace/v1\xaa\x02&OpenTelemetry.Proto.Collector.Trace.V1b\x06proto3')
+
+
+
+_EXPORTTRACESERVICEREQUEST = DESCRIPTOR.message_types_by_name['ExportTraceServiceRequest']
+_EXPORTTRACESERVICERESPONSE = DESCRIPTOR.message_types_by_name['ExportTraceServiceResponse']
+_EXPORTTRACEPARTIALSUCCESS = DESCRIPTOR.message_types_by_name['ExportTracePartialSuccess']
+ExportTraceServiceRequest = _reflection.GeneratedProtocolMessageType('ExportTraceServiceRequest', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTTRACESERVICEREQUEST,
+ '__module__' : 'opentelemetry.proto.collector.trace.v1.trace_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.trace.v1.ExportTraceServiceRequest)
+ })
+_sym_db.RegisterMessage(ExportTraceServiceRequest)
+
+ExportTraceServiceResponse = _reflection.GeneratedProtocolMessageType('ExportTraceServiceResponse', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTTRACESERVICERESPONSE,
+ '__module__' : 'opentelemetry.proto.collector.trace.v1.trace_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.trace.v1.ExportTraceServiceResponse)
+ })
+_sym_db.RegisterMessage(ExportTraceServiceResponse)
+
+ExportTracePartialSuccess = _reflection.GeneratedProtocolMessageType('ExportTracePartialSuccess', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPORTTRACEPARTIALSUCCESS,
+ '__module__' : 'opentelemetry.proto.collector.trace.v1.trace_service_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.collector.trace.v1.ExportTracePartialSuccess)
+ })
+_sym_db.RegisterMessage(ExportTracePartialSuccess)
+
+_TRACESERVICE = DESCRIPTOR.services_by_name['TraceService']
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n)io.opentelemetry.proto.collector.trace.v1B\021TraceServiceProtoP\001Z1go.opentelemetry.io/proto/otlp/collector/trace/v1\252\002&OpenTelemetry.Proto.Collector.Trace.V1'
+ _EXPORTTRACESERVICEREQUEST._serialized_start=144
+ _EXPORTTRACESERVICEREQUEST._serialized_end=240
+ _EXPORTTRACESERVICERESPONSE._serialized_start=242
+ _EXPORTTRACESERVICERESPONSE._serialized_end=362
+ _EXPORTTRACEPARTIALSUCCESS._serialized_start=364
+ _EXPORTTRACEPARTIALSUCCESS._serialized_end=438
+ _TRACESERVICE._serialized_start=441
+ _TRACESERVICE._serialized_end=603
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.pyi
new file mode 100644
index 0000000000..4e2d064ee7
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2.pyi
@@ -0,0 +1,91 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.message
+import opentelemetry.proto.trace.v1.trace_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class ExportTraceServiceRequest(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_SPANS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_spans(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.trace.v1.trace_pb2.ResourceSpans]:
+ """An array of ResourceSpans.
+ For data coming from a single resource this array will typically contain one
+ element. Intermediary nodes (such as the OpenTelemetry Collector) that receive
+ data from multiple origins typically batch the data before forwarding it, and
+ in that case this array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_spans : typing.Optional[typing.Iterable[opentelemetry.proto.trace.v1.trace_pb2.ResourceSpans]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_spans",b"resource_spans"]) -> None: ...
+global___ExportTraceServiceRequest = ExportTraceServiceRequest
+
+class ExportTraceServiceResponse(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ PARTIAL_SUCCESS_FIELD_NUMBER: builtins.int
+ @property
+ def partial_success(self) -> global___ExportTracePartialSuccess:
+ """The details of a partially successful export request.
+
+ If the request is only partially accepted
+ (i.e. when the server accepts only parts of the data and rejects the rest)
+ the server MUST initialize the `partial_success` field and MUST
+ set the `rejected_spans` field with the number of items it rejected.
+
+ Servers MAY also make use of the `partial_success` field to convey
+ warnings/suggestions to senders even when the request was fully accepted.
+ In such cases, the `rejected_spans` field MUST have a value of `0` and
+ the `error_message` MUST be non-empty.
+
+ A `partial_success` message with an empty value (`rejected_spans` = 0 and
+ `error_message` = "") is equivalent to it not being set/present. Senders
+ SHOULD interpret it the same way as in the full success case.
+ """
+ pass
+ def __init__(self,
+ *,
+ partial_success : typing.Optional[global___ExportTracePartialSuccess] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["partial_success",b"partial_success"]) -> None: ...
+global___ExportTraceServiceResponse = ExportTraceServiceResponse
+
+class ExportTracePartialSuccess(google.protobuf.message.Message):
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ REJECTED_SPANS_FIELD_NUMBER: builtins.int
+ ERROR_MESSAGE_FIELD_NUMBER: builtins.int
+ rejected_spans: builtins.int = ...
+ """The number of rejected spans.
+
+ A `rejected_spans` field holding a `0` value indicates that the
+ request was fully accepted.
+ """
+
+ error_message: typing.Text = ...
+ """A developer-facing human-readable message in English. It should be used
+ either to explain why the server rejected parts of the data during a partial
+ success or to convey warnings/suggestions during a full success. The message
+ should offer guidance on how users can address such issues.
+
+ error_message is an optional field. An error_message with an empty value
+ is equivalent to it not being set.
+ """
+
+ def __init__(self,
+ *,
+ rejected_spans : builtins.int = ...,
+ error_message : typing.Text = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["error_message",b"error_message","rejected_spans",b"rejected_spans"]) -> None: ...
+global___ExportTracePartialSuccess = ExportTracePartialSuccess
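Note: the "empty equals unset" rule above means a client can test acceptance without calling HasField first, since reading an unset submessage yields its default. A sketch:

    from opentelemetry.proto.collector.trace.v1 import trace_service_pb2

    def fully_accepted(response: trace_service_pb2.ExportTraceServiceResponse) -> bool:
        # Reading partial_success never raises; when unset it returns a default
        # message with rejected_spans == 0, which the contract above treats as
        # full success (a non-empty error_message alone is only a warning).
        return response.partial_success.rejected_spans == 0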
diff --git a/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2_grpc.py b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2_grpc.py
new file mode 100644
index 0000000000..81dbbe59f3
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/collector/trace/v1/trace_service_pb2_grpc.py
@@ -0,0 +1,77 @@
+# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
+"""Client and server classes corresponding to protobuf-defined services."""
+import grpc
+
+from opentelemetry.proto.collector.trace.v1 import trace_service_pb2 as opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2
+
+
+class TraceServiceStub(object):
+ """Service that can be used to push spans between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case spans are sent/received to/from multiple Applications).
+ """
+
+ def __init__(self, channel):
+ """Constructor.
+
+ Args:
+ channel: A grpc.Channel.
+ """
+ self.Export = channel.unary_unary(
+ '/opentelemetry.proto.collector.trace.v1.TraceService/Export',
+ request_serializer=opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceRequest.SerializeToString,
+ response_deserializer=opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceResponse.FromString,
+ )
+
+
+class TraceServiceServicer(object):
+ """Service that can be used to push spans between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case spans are sent/received to/from multiple Applications).
+ """
+
+ def Export(self, request, context):
+ """For performance reasons, it is recommended to keep this RPC
+ alive for the entire life of the application.
+ """
+ context.set_code(grpc.StatusCode.UNIMPLEMENTED)
+ context.set_details('Method not implemented!')
+ raise NotImplementedError('Method not implemented!')
+
+
+def add_TraceServiceServicer_to_server(servicer, server):
+ rpc_method_handlers = {
+ 'Export': grpc.unary_unary_rpc_method_handler(
+ servicer.Export,
+ request_deserializer=opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceRequest.FromString,
+ response_serializer=opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceResponse.SerializeToString,
+ ),
+ }
+ generic_handler = grpc.method_handlers_generic_handler(
+ 'opentelemetry.proto.collector.trace.v1.TraceService', rpc_method_handlers)
+ server.add_generic_rpc_handlers((generic_handler,))
+
+
+ # This class is part of an EXPERIMENTAL API.
+class TraceService(object):
+ """Service that can be used to push spans between one Application instrumented with
+ OpenTelemetry and a collector, or between a collector and a central collector (in this
+ case spans are sent/received to/from multiple Applications).
+ """
+
+ @staticmethod
+ def Export(request,
+ target,
+ options=(),
+ channel_credentials=None,
+ call_credentials=None,
+ insecure=False,
+ compression=None,
+ wait_for_ready=None,
+ timeout=None,
+ metadata=None):
+ return grpc.experimental.unary_unary(request, target, '/opentelemetry.proto.collector.trace.v1.TraceService/Export',
+ opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceRequest.SerializeToString,
+ opentelemetry_dot_proto_dot_collector_dot_trace_dot_v1_dot_trace__service__pb2.ExportTraceServiceResponse.FromString,
+ options, channel_credentials,
+ insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
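Note: the static `TraceService.Export` wrapper above is a one-shot convenience that manages its own channel per call; since it is marked EXPERIMENTAL, treat this sketch as version-dependent (endpoint illustrative):

    from opentelemetry.proto.collector.trace.v1 import (
        trace_service_pb2,
        trace_service_pb2_grpc,
    )

    response = trace_service_pb2_grpc.TraceService.Export(
        trace_service_pb2.ExportTraceServiceRequest(),
        "localhost:4317",
        insecure=True,  # plaintext; pass channel_credentials for TLS instead
        timeout=5,
    )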
diff --git a/opentelemetry-proto/src/opentelemetry/proto/common/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/common/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/common/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/common/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.py
new file mode 100644
index 0000000000..bec37ab230
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.py
@@ -0,0 +1,75 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/common/v1/common.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*opentelemetry/proto/common/v1/common.proto\x12\x1dopentelemetry.proto.common.v1\"\x8c\x02\n\x08\x41nyValue\x12\x16\n\x0cstring_value\x18\x01 \x01(\tH\x00\x12\x14\n\nbool_value\x18\x02 \x01(\x08H\x00\x12\x13\n\tint_value\x18\x03 \x01(\x03H\x00\x12\x16\n\x0c\x64ouble_value\x18\x04 \x01(\x01H\x00\x12@\n\x0b\x61rray_value\x18\x05 \x01(\x0b\x32).opentelemetry.proto.common.v1.ArrayValueH\x00\x12\x43\n\x0ckvlist_value\x18\x06 \x01(\x0b\x32+.opentelemetry.proto.common.v1.KeyValueListH\x00\x12\x15\n\x0b\x62ytes_value\x18\x07 \x01(\x0cH\x00\x42\x07\n\x05value\"E\n\nArrayValue\x12\x37\n\x06values\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.AnyValue\"G\n\x0cKeyValueList\x12\x37\n\x06values\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\"O\n\x08KeyValue\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x36\n\x05value\x18\x02 \x01(\x0b\x32\'.opentelemetry.proto.common.v1.AnyValue\"\x94\x01\n\x14InstrumentationScope\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12;\n\nattributes\x18\x03 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x04 \x01(\rB{\n io.opentelemetry.proto.common.v1B\x0b\x43ommonProtoP\x01Z(go.opentelemetry.io/proto/otlp/common/v1\xaa\x02\x1dOpenTelemetry.Proto.Common.V1b\x06proto3')
+
+
+
+_ANYVALUE = DESCRIPTOR.message_types_by_name['AnyValue']
+_ARRAYVALUE = DESCRIPTOR.message_types_by_name['ArrayValue']
+_KEYVALUELIST = DESCRIPTOR.message_types_by_name['KeyValueList']
+_KEYVALUE = DESCRIPTOR.message_types_by_name['KeyValue']
+_INSTRUMENTATIONSCOPE = DESCRIPTOR.message_types_by_name['InstrumentationScope']
+AnyValue = _reflection.GeneratedProtocolMessageType('AnyValue', (_message.Message,), {
+ 'DESCRIPTOR' : _ANYVALUE,
+ '__module__' : 'opentelemetry.proto.common.v1.common_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.common.v1.AnyValue)
+ })
+_sym_db.RegisterMessage(AnyValue)
+
+ArrayValue = _reflection.GeneratedProtocolMessageType('ArrayValue', (_message.Message,), {
+ 'DESCRIPTOR' : _ARRAYVALUE,
+ '__module__' : 'opentelemetry.proto.common.v1.common_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.common.v1.ArrayValue)
+ })
+_sym_db.RegisterMessage(ArrayValue)
+
+KeyValueList = _reflection.GeneratedProtocolMessageType('KeyValueList', (_message.Message,), {
+ 'DESCRIPTOR' : _KEYVALUELIST,
+ '__module__' : 'opentelemetry.proto.common.v1.common_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.common.v1.KeyValueList)
+ })
+_sym_db.RegisterMessage(KeyValueList)
+
+KeyValue = _reflection.GeneratedProtocolMessageType('KeyValue', (_message.Message,), {
+ 'DESCRIPTOR' : _KEYVALUE,
+ '__module__' : 'opentelemetry.proto.common.v1.common_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.common.v1.KeyValue)
+ })
+_sym_db.RegisterMessage(KeyValue)
+
+InstrumentationScope = _reflection.GeneratedProtocolMessageType('InstrumentationScope', (_message.Message,), {
+ 'DESCRIPTOR' : _INSTRUMENTATIONSCOPE,
+ '__module__' : 'opentelemetry.proto.common.v1.common_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.common.v1.InstrumentationScope)
+ })
+_sym_db.RegisterMessage(InstrumentationScope)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n io.opentelemetry.proto.common.v1B\013CommonProtoP\001Z(go.opentelemetry.io/proto/otlp/common/v1\252\002\035OpenTelemetry.Proto.Common.V1'
+ _ANYVALUE._serialized_start=78
+ _ANYVALUE._serialized_end=346
+ _ARRAYVALUE._serialized_start=348
+ _ARRAYVALUE._serialized_end=417
+ _KEYVALUELIST._serialized_start=419
+ _KEYVALUELIST._serialized_end=490
+ _KEYVALUE._serialized_start=492
+ _KEYVALUE._serialized_end=571
+ _INSTRUMENTATIONSCOPE._serialized_start=574
+ _INSTRUMENTATIONSCOPE._serialized_end=722
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.pyi
new file mode 100644
index 0000000000..304feec5ab
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/common/v1/common_pb2.pyi
@@ -0,0 +1,140 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.message
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class AnyValue(google.protobuf.message.Message):
+ """AnyValue is used to represent any type of attribute value. AnyValue may contain a
+ primitive value such as a string or integer, or it may contain an arbitrary nested
+ object containing arrays, key-value lists and primitives.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ STRING_VALUE_FIELD_NUMBER: builtins.int
+ BOOL_VALUE_FIELD_NUMBER: builtins.int
+ INT_VALUE_FIELD_NUMBER: builtins.int
+ DOUBLE_VALUE_FIELD_NUMBER: builtins.int
+ ARRAY_VALUE_FIELD_NUMBER: builtins.int
+ KVLIST_VALUE_FIELD_NUMBER: builtins.int
+ BYTES_VALUE_FIELD_NUMBER: builtins.int
+ string_value: typing.Text = ...
+ bool_value: builtins.bool = ...
+ int_value: builtins.int = ...
+ double_value: builtins.float = ...
+ @property
+ def array_value(self) -> global___ArrayValue: ...
+ @property
+ def kvlist_value(self) -> global___KeyValueList: ...
+ bytes_value: builtins.bytes = ...
+ def __init__(self,
+ *,
+ string_value : typing.Text = ...,
+ bool_value : builtins.bool = ...,
+ int_value : builtins.int = ...,
+ double_value : builtins.float = ...,
+ array_value : typing.Optional[global___ArrayValue] = ...,
+ kvlist_value : typing.Optional[global___KeyValueList] = ...,
+ bytes_value : builtins.bytes = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["array_value",b"array_value","bool_value",b"bool_value","bytes_value",b"bytes_value","double_value",b"double_value","int_value",b"int_value","kvlist_value",b"kvlist_value","string_value",b"string_value","value",b"value"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["array_value",b"array_value","bool_value",b"bool_value","bytes_value",b"bytes_value","double_value",b"double_value","int_value",b"int_value","kvlist_value",b"kvlist_value","string_value",b"string_value","value",b"value"]) -> None: ...
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["value",b"value"]) -> typing.Optional[typing_extensions.Literal["string_value","bool_value","int_value","double_value","array_value","kvlist_value","bytes_value"]]: ...
+global___AnyValue = AnyValue
+
+class ArrayValue(google.protobuf.message.Message):
+ """ArrayValue is a list of AnyValue messages. We need ArrayValue as a message
+ since oneof in AnyValue does not allow repeated fields.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ VALUES_FIELD_NUMBER: builtins.int
+ @property
+ def values(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___AnyValue]:
+ """Array of values. The array may be empty (contain 0 elements)."""
+ pass
+ def __init__(self,
+ *,
+ values : typing.Optional[typing.Iterable[global___AnyValue]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["values",b"values"]) -> None: ...
+global___ArrayValue = ArrayValue
+
+class KeyValueList(google.protobuf.message.Message):
+ """KeyValueList is a list of KeyValue messages. We need KeyValueList as a message
+ since `oneof` in AnyValue does not allow repeated fields. Everywhere else where we need
+ a list of KeyValue messages (e.g. in Span) we use `repeated KeyValue` directly to
+ avoid unnecessary extra wrapping (which slows down the protocol). The 2 approaches
+ are semantically equivalent.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ VALUES_FIELD_NUMBER: builtins.int
+ @property
+ def values(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___KeyValue]:
+ """A collection of key/value pairs of key-value pairs. The list may be empty (may
+ contain 0 elements).
+ The keys MUST be unique (it is not allowed to have more than one
+ value with the same key).
+ """
+ pass
+ def __init__(self,
+ *,
+ values : typing.Optional[typing.Iterable[global___KeyValue]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["values",b"values"]) -> None: ...
+global___KeyValueList = KeyValueList
+
+class KeyValue(google.protobuf.message.Message):
+ """KeyValue is a key-value pair that is used to store Span attributes, Link
+ attributes, etc.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ KEY_FIELD_NUMBER: builtins.int
+ VALUE_FIELD_NUMBER: builtins.int
+ key: typing.Text = ...
+ @property
+ def value(self) -> global___AnyValue: ...
+ def __init__(self,
+ *,
+ key : typing.Text = ...,
+ value : typing.Optional[global___AnyValue] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["value",b"value"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["key",b"key","value",b"value"]) -> None: ...
+global___KeyValue = KeyValue
+
+class InstrumentationScope(google.protobuf.message.Message):
+ """InstrumentationScope is a message representing the instrumentation scope information
+ such as the fully qualified name and version.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ NAME_FIELD_NUMBER: builtins.int
+ VERSION_FIELD_NUMBER: builtins.int
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ name: typing.Text = ...
+ """An empty instrumentation scope name means the name is unknown."""
+
+ version: typing.Text = ...
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___KeyValue]:
+ """Additional attributes that describe the scope. [Optional].
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ def __init__(self,
+ *,
+ name : typing.Text = ...,
+ version : typing.Text = ...,
+ attributes : typing.Optional[typing.Iterable[global___KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","dropped_attributes_count",b"dropped_attributes_count","name",b"name","version",b"version"]) -> None: ...
+global___InstrumentationScope = InstrumentationScope
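Note: attribute lists are `KeyValue` messages wrapping the `AnyValue` oneof above. A sketch of a string attribute and a nested array attribute (keys and values illustrative):

    from opentelemetry.proto.common.v1 import common_pb2

    attrs = [
        common_pb2.KeyValue(
            key="service.name",
            value=common_pb2.AnyValue(string_value="checkout"),
        ),
        common_pb2.KeyValue(
            key="retry.delays_ms",
            value=common_pb2.AnyValue(
                array_value=common_pb2.ArrayValue(
                    values=[common_pb2.AnyValue(int_value=d) for d in (10, 50, 250)]
                )
            ),
        ),
    ]

    # WhichOneof reports which branch of the `value` oneof is populated.
    assert attrs[0].value.WhichOneof("value") == "string_value"
    assert attrs[1].value.WhichOneof("value") == "array_value"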
diff --git a/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.py
new file mode 100644
index 0000000000..90b7187155
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.py
@@ -0,0 +1,103 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/logs/v1/logs.proto
+"""Generated protocol buffer code."""
+from google.protobuf.internal import enum_type_wrapper
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.common.v1 import common_pb2 as opentelemetry_dot_proto_dot_common_dot_v1_dot_common__pb2
+from opentelemetry.proto.resource.v1 import resource_pb2 as opentelemetry_dot_proto_dot_resource_dot_v1_dot_resource__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n&opentelemetry/proto/logs/v1/logs.proto\x12\x1bopentelemetry.proto.logs.v1\x1a*opentelemetry/proto/common/v1/common.proto\x1a.opentelemetry/proto/resource/v1/resource.proto\"L\n\x08LogsData\x12@\n\rresource_logs\x18\x01 \x03(\x0b\x32).opentelemetry.proto.logs.v1.ResourceLogs\"\xa3\x01\n\x0cResourceLogs\x12;\n\x08resource\x18\x01 \x01(\x0b\x32).opentelemetry.proto.resource.v1.Resource\x12:\n\nscope_logs\x18\x02 \x03(\x0b\x32&.opentelemetry.proto.logs.v1.ScopeLogs\x12\x12\n\nschema_url\x18\x03 \x01(\tJ\x06\x08\xe8\x07\x10\xe9\x07\"\xa0\x01\n\tScopeLogs\x12\x42\n\x05scope\x18\x01 \x01(\x0b\x32\x33.opentelemetry.proto.common.v1.InstrumentationScope\x12;\n\x0blog_records\x18\x02 \x03(\x0b\x32&.opentelemetry.proto.logs.v1.LogRecord\x12\x12\n\nschema_url\x18\x03 \x01(\t\"\xef\x02\n\tLogRecord\x12\x16\n\x0etime_unix_nano\x18\x01 \x01(\x06\x12\x1f\n\x17observed_time_unix_nano\x18\x0b \x01(\x06\x12\x44\n\x0fseverity_number\x18\x02 \x01(\x0e\x32+.opentelemetry.proto.logs.v1.SeverityNumber\x12\x15\n\rseverity_text\x18\x03 \x01(\t\x12\x35\n\x04\x62ody\x18\x05 \x01(\x0b\x32\'.opentelemetry.proto.common.v1.AnyValue\x12;\n\nattributes\x18\x06 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x07 \x01(\r\x12\r\n\x05\x66lags\x18\x08 \x01(\x07\x12\x10\n\x08trace_id\x18\t \x01(\x0c\x12\x0f\n\x07span_id\x18\n \x01(\x0cJ\x04\x08\x04\x10\x05*\xc3\x05\n\x0eSeverityNumber\x12\x1f\n\x1bSEVERITY_NUMBER_UNSPECIFIED\x10\x00\x12\x19\n\x15SEVERITY_NUMBER_TRACE\x10\x01\x12\x1a\n\x16SEVERITY_NUMBER_TRACE2\x10\x02\x12\x1a\n\x16SEVERITY_NUMBER_TRACE3\x10\x03\x12\x1a\n\x16SEVERITY_NUMBER_TRACE4\x10\x04\x12\x19\n\x15SEVERITY_NUMBER_DEBUG\x10\x05\x12\x1a\n\x16SEVERITY_NUMBER_DEBUG2\x10\x06\x12\x1a\n\x16SEVERITY_NUMBER_DEBUG3\x10\x07\x12\x1a\n\x16SEVERITY_NUMBER_DEBUG4\x10\x08\x12\x18\n\x14SEVERITY_NUMBER_INFO\x10\t\x12\x19\n\x15SEVERITY_NUMBER_INFO2\x10\n\x12\x19\n\x15SEVERITY_NUMBER_INFO3\x10\x0b\x12\x19\n\x15SEVERITY_NUMBER_INFO4\x10\x0c\x12\x18\n\x14SEVERITY_NUMBER_WARN\x10\r\x12\x19\n\x15SEVERITY_NUMBER_WARN2\x10\x0e\x12\x19\n\x15SEVERITY_NUMBER_WARN3\x10\x0f\x12\x19\n\x15SEVERITY_NUMBER_WARN4\x10\x10\x12\x19\n\x15SEVERITY_NUMBER_ERROR\x10\x11\x12\x1a\n\x16SEVERITY_NUMBER_ERROR2\x10\x12\x12\x1a\n\x16SEVERITY_NUMBER_ERROR3\x10\x13\x12\x1a\n\x16SEVERITY_NUMBER_ERROR4\x10\x14\x12\x19\n\x15SEVERITY_NUMBER_FATAL\x10\x15\x12\x1a\n\x16SEVERITY_NUMBER_FATAL2\x10\x16\x12\x1a\n\x16SEVERITY_NUMBER_FATAL3\x10\x17\x12\x1a\n\x16SEVERITY_NUMBER_FATAL4\x10\x18*Y\n\x0eLogRecordFlags\x12\x1f\n\x1bLOG_RECORD_FLAGS_DO_NOT_USE\x10\x00\x12&\n!LOG_RECORD_FLAGS_TRACE_FLAGS_MASK\x10\xff\x01\x42s\n\x1eio.opentelemetry.proto.logs.v1B\tLogsProtoP\x01Z&go.opentelemetry.io/proto/otlp/logs/v1\xaa\x02\x1bOpenTelemetry.Proto.Logs.V1b\x06proto3')
+
+_SEVERITYNUMBER = DESCRIPTOR.enum_types_by_name['SeverityNumber']
+SeverityNumber = enum_type_wrapper.EnumTypeWrapper(_SEVERITYNUMBER)
+_LOGRECORDFLAGS = DESCRIPTOR.enum_types_by_name['LogRecordFlags']
+LogRecordFlags = enum_type_wrapper.EnumTypeWrapper(_LOGRECORDFLAGS)
+SEVERITY_NUMBER_UNSPECIFIED = 0
+SEVERITY_NUMBER_TRACE = 1
+SEVERITY_NUMBER_TRACE2 = 2
+SEVERITY_NUMBER_TRACE3 = 3
+SEVERITY_NUMBER_TRACE4 = 4
+SEVERITY_NUMBER_DEBUG = 5
+SEVERITY_NUMBER_DEBUG2 = 6
+SEVERITY_NUMBER_DEBUG3 = 7
+SEVERITY_NUMBER_DEBUG4 = 8
+SEVERITY_NUMBER_INFO = 9
+SEVERITY_NUMBER_INFO2 = 10
+SEVERITY_NUMBER_INFO3 = 11
+SEVERITY_NUMBER_INFO4 = 12
+SEVERITY_NUMBER_WARN = 13
+SEVERITY_NUMBER_WARN2 = 14
+SEVERITY_NUMBER_WARN3 = 15
+SEVERITY_NUMBER_WARN4 = 16
+SEVERITY_NUMBER_ERROR = 17
+SEVERITY_NUMBER_ERROR2 = 18
+SEVERITY_NUMBER_ERROR3 = 19
+SEVERITY_NUMBER_ERROR4 = 20
+SEVERITY_NUMBER_FATAL = 21
+SEVERITY_NUMBER_FATAL2 = 22
+SEVERITY_NUMBER_FATAL3 = 23
+SEVERITY_NUMBER_FATAL4 = 24
+LOG_RECORD_FLAGS_DO_NOT_USE = 0
+LOG_RECORD_FLAGS_TRACE_FLAGS_MASK = 255
+
+
+_LOGSDATA = DESCRIPTOR.message_types_by_name['LogsData']
+_RESOURCELOGS = DESCRIPTOR.message_types_by_name['ResourceLogs']
+_SCOPELOGS = DESCRIPTOR.message_types_by_name['ScopeLogs']
+_LOGRECORD = DESCRIPTOR.message_types_by_name['LogRecord']
+LogsData = _reflection.GeneratedProtocolMessageType('LogsData', (_message.Message,), {
+ 'DESCRIPTOR' : _LOGSDATA,
+ '__module__' : 'opentelemetry.proto.logs.v1.logs_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.logs.v1.LogsData)
+ })
+_sym_db.RegisterMessage(LogsData)
+
+ResourceLogs = _reflection.GeneratedProtocolMessageType('ResourceLogs', (_message.Message,), {
+ 'DESCRIPTOR' : _RESOURCELOGS,
+ '__module__' : 'opentelemetry.proto.logs.v1.logs_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.logs.v1.ResourceLogs)
+ })
+_sym_db.RegisterMessage(ResourceLogs)
+
+ScopeLogs = _reflection.GeneratedProtocolMessageType('ScopeLogs', (_message.Message,), {
+ 'DESCRIPTOR' : _SCOPELOGS,
+ '__module__' : 'opentelemetry.proto.logs.v1.logs_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.logs.v1.ScopeLogs)
+ })
+_sym_db.RegisterMessage(ScopeLogs)
+
+LogRecord = _reflection.GeneratedProtocolMessageType('LogRecord', (_message.Message,), {
+ 'DESCRIPTOR' : _LOGRECORD,
+ '__module__' : 'opentelemetry.proto.logs.v1.logs_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.logs.v1.LogRecord)
+ })
+_sym_db.RegisterMessage(LogRecord)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n\036io.opentelemetry.proto.logs.v1B\tLogsProtoP\001Z&go.opentelemetry.io/proto/otlp/logs/v1\252\002\033OpenTelemetry.Proto.Logs.V1'
+ _SEVERITYNUMBER._serialized_start=941
+ _SEVERITYNUMBER._serialized_end=1648
+ _LOGRECORDFLAGS._serialized_start=1650
+ _LOGRECORDFLAGS._serialized_end=1739
+ _LOGSDATA._serialized_start=163
+ _LOGSDATA._serialized_end=239
+ _RESOURCELOGS._serialized_start=242
+ _RESOURCELOGS._serialized_end=405
+ _SCOPELOGS._serialized_start=408
+ _SCOPELOGS._serialized_end=568
+ _LOGRECORD._serialized_start=571
+ _LOGRECORD._serialized_end=938
+# @@protoc_insertion_point(module_scope)
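Note: `SeverityNumber` and `LogRecordFlags` above are EnumTypeWrapper instances, so they support name/value lookups in both directions. A quick sketch:

    from opentelemetry.proto.logs.v1 import logs_pb2

    assert logs_pb2.SeverityNumber.Value("SEVERITY_NUMBER_WARN") == 13
    assert logs_pb2.SeverityNumber.Name(logs_pb2.SEVERITY_NUMBER_INFO) == "SEVERITY_NUMBER_INFO"
    assert logs_pb2.LOG_RECORD_FLAGS_TRACE_FLAGS_MASK == 0xFF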
diff --git a/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.pyi
new file mode 100644
index 0000000000..98b8974390
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/logs/v1/logs_pb2.pyi
@@ -0,0 +1,321 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.internal.enum_type_wrapper
+import google.protobuf.message
+import opentelemetry.proto.common.v1.common_pb2
+import opentelemetry.proto.resource.v1.resource_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class SeverityNumber(_SeverityNumber, metaclass=_SeverityNumberEnumTypeWrapper):
+ """Possible values for LogRecord.SeverityNumber."""
+ pass
+class _SeverityNumber:
+ V = typing.NewType('V', builtins.int)
+class _SeverityNumberEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_SeverityNumber.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ SEVERITY_NUMBER_UNSPECIFIED = SeverityNumber.V(0)
+ """UNSPECIFIED is the default SeverityNumber, it MUST NOT be used."""
+
+ SEVERITY_NUMBER_TRACE = SeverityNumber.V(1)
+ SEVERITY_NUMBER_TRACE2 = SeverityNumber.V(2)
+ SEVERITY_NUMBER_TRACE3 = SeverityNumber.V(3)
+ SEVERITY_NUMBER_TRACE4 = SeverityNumber.V(4)
+ SEVERITY_NUMBER_DEBUG = SeverityNumber.V(5)
+ SEVERITY_NUMBER_DEBUG2 = SeverityNumber.V(6)
+ SEVERITY_NUMBER_DEBUG3 = SeverityNumber.V(7)
+ SEVERITY_NUMBER_DEBUG4 = SeverityNumber.V(8)
+ SEVERITY_NUMBER_INFO = SeverityNumber.V(9)
+ SEVERITY_NUMBER_INFO2 = SeverityNumber.V(10)
+ SEVERITY_NUMBER_INFO3 = SeverityNumber.V(11)
+ SEVERITY_NUMBER_INFO4 = SeverityNumber.V(12)
+ SEVERITY_NUMBER_WARN = SeverityNumber.V(13)
+ SEVERITY_NUMBER_WARN2 = SeverityNumber.V(14)
+ SEVERITY_NUMBER_WARN3 = SeverityNumber.V(15)
+ SEVERITY_NUMBER_WARN4 = SeverityNumber.V(16)
+ SEVERITY_NUMBER_ERROR = SeverityNumber.V(17)
+ SEVERITY_NUMBER_ERROR2 = SeverityNumber.V(18)
+ SEVERITY_NUMBER_ERROR3 = SeverityNumber.V(19)
+ SEVERITY_NUMBER_ERROR4 = SeverityNumber.V(20)
+ SEVERITY_NUMBER_FATAL = SeverityNumber.V(21)
+ SEVERITY_NUMBER_FATAL2 = SeverityNumber.V(22)
+ SEVERITY_NUMBER_FATAL3 = SeverityNumber.V(23)
+ SEVERITY_NUMBER_FATAL4 = SeverityNumber.V(24)
+
+SEVERITY_NUMBER_UNSPECIFIED = SeverityNumber.V(0)
+"""UNSPECIFIED is the default SeverityNumber, it MUST NOT be used."""
+
+SEVERITY_NUMBER_TRACE = SeverityNumber.V(1)
+SEVERITY_NUMBER_TRACE2 = SeverityNumber.V(2)
+SEVERITY_NUMBER_TRACE3 = SeverityNumber.V(3)
+SEVERITY_NUMBER_TRACE4 = SeverityNumber.V(4)
+SEVERITY_NUMBER_DEBUG = SeverityNumber.V(5)
+SEVERITY_NUMBER_DEBUG2 = SeverityNumber.V(6)
+SEVERITY_NUMBER_DEBUG3 = SeverityNumber.V(7)
+SEVERITY_NUMBER_DEBUG4 = SeverityNumber.V(8)
+SEVERITY_NUMBER_INFO = SeverityNumber.V(9)
+SEVERITY_NUMBER_INFO2 = SeverityNumber.V(10)
+SEVERITY_NUMBER_INFO3 = SeverityNumber.V(11)
+SEVERITY_NUMBER_INFO4 = SeverityNumber.V(12)
+SEVERITY_NUMBER_WARN = SeverityNumber.V(13)
+SEVERITY_NUMBER_WARN2 = SeverityNumber.V(14)
+SEVERITY_NUMBER_WARN3 = SeverityNumber.V(15)
+SEVERITY_NUMBER_WARN4 = SeverityNumber.V(16)
+SEVERITY_NUMBER_ERROR = SeverityNumber.V(17)
+SEVERITY_NUMBER_ERROR2 = SeverityNumber.V(18)
+SEVERITY_NUMBER_ERROR3 = SeverityNumber.V(19)
+SEVERITY_NUMBER_ERROR4 = SeverityNumber.V(20)
+SEVERITY_NUMBER_FATAL = SeverityNumber.V(21)
+SEVERITY_NUMBER_FATAL2 = SeverityNumber.V(22)
+SEVERITY_NUMBER_FATAL3 = SeverityNumber.V(23)
+SEVERITY_NUMBER_FATAL4 = SeverityNumber.V(24)
+global___SeverityNumber = SeverityNumber
+
+
+class LogRecordFlags(_LogRecordFlags, metaclass=_LogRecordFlagsEnumTypeWrapper):
+ """LogRecordFlags is defined as a protobuf 'uint32' type and is to be used as
+ bit-fields. Each non-zero value defined in this enum is a bit-mask.
+ To extract the bit-field, for example, use an expression like:
+
+ (logRecord.flags & LOG_RECORD_FLAGS_TRACE_FLAGS_MASK)
+ """
+ pass
+class _LogRecordFlags:
+ V = typing.NewType('V', builtins.int)
+class _LogRecordFlagsEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_LogRecordFlags.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ LOG_RECORD_FLAGS_DO_NOT_USE = LogRecordFlags.V(0)
+ """The zero value for the enum. Should not be used for comparisons.
+ Instead use bitwise "and" with the appropriate mask as shown above.
+ """
+
+ LOG_RECORD_FLAGS_TRACE_FLAGS_MASK = LogRecordFlags.V(255)
+ """Bits 0-7 are used for trace flags."""
+
+
+LOG_RECORD_FLAGS_DO_NOT_USE = LogRecordFlags.V(0)
+"""The zero value for the enum. Should not be used for comparisons.
+Instead use bitwise "and" with the appropriate mask as shown above.
+"""
+
+LOG_RECORD_FLAGS_TRACE_FLAGS_MASK = LogRecordFlags.V(255)
+"""Bits 0-7 are used for trace flags."""
+
+global___LogRecordFlags = LogRecordFlags
+
+
+class LogsData(google.protobuf.message.Message):
+ """LogsData represents the logs data that can be stored in a persistent storage,
+ OR can be embedded by other protocols that transfer OTLP logs data but do not
+ implement the OTLP protocol.
+
+ The main difference between this message and the collector protocol is that
+ in this message there will not be any "control" or "metadata" specific to
+ the OTLP protocol.
+
+ When new fields are added into this message, the OTLP request MUST be updated
+ as well.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_LOGS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_logs(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ResourceLogs]:
+ """An array of ResourceLogs.
+ For data coming from a single resource this array will typically contain
+ one element. Intermediary nodes that receive data from multiple origins
+ typically batch the data before forwarding further and in that case this
+ array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_logs : typing.Optional[typing.Iterable[global___ResourceLogs]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_logs",b"resource_logs"]) -> None: ...
+global___LogsData = LogsData
+
+class ResourceLogs(google.protobuf.message.Message):
+ """A collection of ScopeLogs from a Resource."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_FIELD_NUMBER: builtins.int
+ SCOPE_LOGS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def resource(self) -> opentelemetry.proto.resource.v1.resource_pb2.Resource:
+ """The resource for the logs in this message.
+ If this field is not set then resource info is unknown.
+ """
+ pass
+ @property
+ def scope_logs(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ScopeLogs]:
+ """A list of ScopeLogs that originate from a resource."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to the data in the "resource" field. It does not apply
+ to the data in the "scope_logs" field which have their own schema_url field.
+ """
+
+ def __init__(self,
+ *,
+ resource : typing.Optional[opentelemetry.proto.resource.v1.resource_pb2.Resource] = ...,
+ scope_logs : typing.Optional[typing.Iterable[global___ScopeLogs]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["resource",b"resource"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource",b"resource","schema_url",b"schema_url","scope_logs",b"scope_logs"]) -> None: ...
+global___ResourceLogs = ResourceLogs
+
+class ScopeLogs(google.protobuf.message.Message):
+ """A collection of Logs produced by a Scope."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ SCOPE_FIELD_NUMBER: builtins.int
+ LOG_RECORDS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def scope(self) -> opentelemetry.proto.common.v1.common_pb2.InstrumentationScope:
+ """The instrumentation scope information for the logs in this message.
+ Semantically, when InstrumentationScope isn't set, it is equivalent to
+ an empty instrumentation scope name (unknown).
+ """
+ pass
+ @property
+ def log_records(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___LogRecord]:
+ """A list of log records."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to all logs in the "logs" field."""
+
+ def __init__(self,
+ *,
+ scope : typing.Optional[opentelemetry.proto.common.v1.common_pb2.InstrumentationScope] = ...,
+ log_records : typing.Optional[typing.Iterable[global___LogRecord]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["scope",b"scope"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["log_records",b"log_records","schema_url",b"schema_url","scope",b"scope"]) -> None: ...
+global___ScopeLogs = ScopeLogs
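+
+# Illustrative construction sketch (hand-written, not generated): the
+# LogsData -> ResourceLogs -> ScopeLogs nesting described above, assuming the
+# generated runtime modules logs_pb2 and resource_pb2 are importable.
+#
+#     from opentelemetry.proto.logs.v1 import logs_pb2
+#     from opentelemetry.proto.resource.v1 import resource_pb2
+#
+#     logs_data = logs_pb2.LogsData(resource_logs=[
+#         logs_pb2.ResourceLogs(
+#             resource=resource_pb2.Resource(),
+#             scope_logs=[logs_pb2.ScopeLogs(log_records=[])],
+#             schema_url="https://example.com/schema/1.0",  # hypothetical URL
+#         ),
+#     ])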
+
+class LogRecord(google.protobuf.message.Message):
+ """A log record according to OpenTelemetry Log Data Model:
+ https://github.com/open-telemetry/oteps/blob/main/text/logs/0097-log-data-model.md
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ OBSERVED_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ SEVERITY_NUMBER_FIELD_NUMBER: builtins.int
+ SEVERITY_TEXT_FIELD_NUMBER: builtins.int
+ BODY_FIELD_NUMBER: builtins.int
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ FLAGS_FIELD_NUMBER: builtins.int
+ TRACE_ID_FIELD_NUMBER: builtins.int
+ SPAN_ID_FIELD_NUMBER: builtins.int
+ time_unix_nano: builtins.int = ...
+ """time_unix_nano is the time when the event occurred.
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
+ Value of 0 indicates unknown or missing timestamp.
+ """
+
+ observed_time_unix_nano: builtins.int = ...
+ """Time when the event was observed by the collection system.
+ For events that originate in OpenTelemetry (e.g. using the OpenTelemetry Logging SDK),
+ this timestamp is typically set at generation time and is equal to time_unix_nano.
+ For events originating externally and collected by OpenTelemetry (e.g. using the
+ Collector), this is the time when OpenTelemetry's code observed the event, as
+ measured by the clock of the OpenTelemetry code. This field MUST be set once the event is
+ observed by OpenTelemetry.
+
+ When converting OpenTelemetry log data to formats that support only one timestamp, or
+ when OpenTelemetry log data is received by recipients that support only one timestamp
+ internally, the following logic is recommended:
+ - Use time_unix_nano if it is present, otherwise use observed_time_unix_nano.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
+ Value of 0 indicates unknown or missing timestamp.
+ """
+
+ severity_number: global___SeverityNumber.V = ...
+ """Numerical value of the severity, normalized to values described in Log Data Model.
+ [Optional].
+ """
+
+ severity_text: typing.Text = ...
+ """The severity text (also known as log level). The original string representation as
+ it is known at the source. [Optional].
+ """
+
+ @property
+ def body(self) -> opentelemetry.proto.common.v1.common_pb2.AnyValue:
+ """A value containing the body of the log record. Can be for example a human-readable
+ string message (including multi-line) describing the event in a free form or it can
+ be a structured data composed of arrays and maps of other values. [Optional].
+ """
+ pass
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """Additional attributes that describe the specific event occurrence. [Optional].
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ flags: builtins.int = ...
+ """Flags, a bit field. 8 least significant bits are the trace flags as
+ defined in W3C Trace Context specification. 24 most significant bits are reserved
+ and must be set to 0. Readers must not assume that 24 most significant bits
+ will be zero and must correctly mask the bits when reading 8-bit trace flag (use
+ flags & LOG_RECORD_FLAGS_TRACE_FLAGS_MASK). [Optional].
+ """
+
+ trace_id: builtins.bytes = ...
+ """A unique identifier for a trace. All logs from the same trace share
+ the same `trace_id`. The ID is a 16-byte array. An ID with all zeroes OR
+ of length other than 16 bytes is considered invalid (empty string in OTLP/JSON
+ is zero-length and thus is also invalid).
+
+ This field is optional.
+
+ The receivers SHOULD assume that the log record is not associated with a
+ trace if any of the following is true:
+ - the field is not present,
+ - the field contains an invalid value.
+ """
+
+ span_id: builtins.bytes = ...
+ """A unique identifier for a span within a trace, assigned when the span
+ is created. The ID is an 8-byte array. An ID with all zeroes OR of length
+ other than 8 bytes is considered invalid (empty string in OTLP/JSON
+ is zero-length and thus is also invalid).
+
+ This field is optional. If the sender specifies a valid span_id then it SHOULD also
+ specify a valid trace_id.
+
+ The receivers SHOULD assume that the log record is not associated with a
+ span if any of the following is true:
+ - the field is not present,
+ - the field contains an invalid value.
+ """
+
+ def __init__(self,
+ *,
+ time_unix_nano : builtins.int = ...,
+ observed_time_unix_nano : builtins.int = ...,
+ severity_number : global___SeverityNumber.V = ...,
+ severity_text : typing.Text = ...,
+ body : typing.Optional[opentelemetry.proto.common.v1.common_pb2.AnyValue] = ...,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ flags : builtins.int = ...,
+ trace_id : builtins.bytes = ...,
+ span_id : builtins.bytes = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["body",b"body"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","body",b"body","dropped_attributes_count",b"dropped_attributes_count","flags",b"flags","observed_time_unix_nano",b"observed_time_unix_nano","severity_number",b"severity_number","severity_text",b"severity_text","span_id",b"span_id","time_unix_nano",b"time_unix_nano","trace_id",b"trace_id"]) -> None: ...
+global___LogRecord = LogRecord
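+
+# Illustrative sketches (hand-written, not generated) of the rules above,
+# assuming a LogRecord instance from the generated logs_pb2 runtime module:
+#
+#     def effective_timestamp(rec):
+#         # Recommended single-timestamp logic: prefer time_unix_nano (0 means
+#         # unknown/missing), otherwise fall back to observed_time_unix_nano.
+#         return rec.time_unix_nano or rec.observed_time_unix_nano
+#
+#     def has_valid_trace_context(rec):
+#         # trace_id must be 16 non-zero bytes; span_id must be 8 non-zero bytes.
+#         return (len(rec.trace_id) == 16 and any(rec.trace_id)
+#                 and len(rec.span_id) == 8 and any(rec.span_id))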
diff --git a/opentelemetry-proto/src/opentelemetry/proto/metrics/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/metrics/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.py
new file mode 100644
index 0000000000..4b938c2146
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.py
@@ -0,0 +1,203 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/metrics/v1/metrics.proto
+"""Generated protocol buffer code."""
+from google.protobuf.internal import enum_type_wrapper
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.common.v1 import common_pb2 as opentelemetry_dot_proto_dot_common_dot_v1_dot_common__pb2
+from opentelemetry.proto.resource.v1 import resource_pb2 as opentelemetry_dot_proto_dot_resource_dot_v1_dot_resource__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n,opentelemetry/proto/metrics/v1/metrics.proto\x12\x1eopentelemetry.proto.metrics.v1\x1a*opentelemetry/proto/common/v1/common.proto\x1a.opentelemetry/proto/resource/v1/resource.proto\"X\n\x0bMetricsData\x12I\n\x10resource_metrics\x18\x01 \x03(\x0b\x32/.opentelemetry.proto.metrics.v1.ResourceMetrics\"\xaf\x01\n\x0fResourceMetrics\x12;\n\x08resource\x18\x01 \x01(\x0b\x32).opentelemetry.proto.resource.v1.Resource\x12\x43\n\rscope_metrics\x18\x02 \x03(\x0b\x32,.opentelemetry.proto.metrics.v1.ScopeMetrics\x12\x12\n\nschema_url\x18\x03 \x01(\tJ\x06\x08\xe8\x07\x10\xe9\x07\"\x9f\x01\n\x0cScopeMetrics\x12\x42\n\x05scope\x18\x01 \x01(\x0b\x32\x33.opentelemetry.proto.common.v1.InstrumentationScope\x12\x37\n\x07metrics\x18\x02 \x03(\x0b\x32&.opentelemetry.proto.metrics.v1.Metric\x12\x12\n\nschema_url\x18\x03 \x01(\t\"\x92\x03\n\x06Metric\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 \x01(\t\x12\x0c\n\x04unit\x18\x03 \x01(\t\x12\x36\n\x05gauge\x18\x05 \x01(\x0b\x32%.opentelemetry.proto.metrics.v1.GaugeH\x00\x12\x32\n\x03sum\x18\x07 \x01(\x0b\x32#.opentelemetry.proto.metrics.v1.SumH\x00\x12>\n\thistogram\x18\t \x01(\x0b\x32).opentelemetry.proto.metrics.v1.HistogramH\x00\x12U\n\x15\x65xponential_histogram\x18\n \x01(\x0b\x32\x34.opentelemetry.proto.metrics.v1.ExponentialHistogramH\x00\x12:\n\x07summary\x18\x0b \x01(\x0b\x32\'.opentelemetry.proto.metrics.v1.SummaryH\x00\x42\x06\n\x04\x64\x61taJ\x04\x08\x04\x10\x05J\x04\x08\x06\x10\x07J\x04\x08\x08\x10\t\"M\n\x05Gauge\x12\x44\n\x0b\x64\x61ta_points\x18\x01 \x03(\x0b\x32/.opentelemetry.proto.metrics.v1.NumberDataPoint\"\xba\x01\n\x03Sum\x12\x44\n\x0b\x64\x61ta_points\x18\x01 \x03(\x0b\x32/.opentelemetry.proto.metrics.v1.NumberDataPoint\x12W\n\x17\x61ggregation_temporality\x18\x02 \x01(\x0e\x32\x36.opentelemetry.proto.metrics.v1.AggregationTemporality\x12\x14\n\x0cis_monotonic\x18\x03 \x01(\x08\"\xad\x01\n\tHistogram\x12G\n\x0b\x64\x61ta_points\x18\x01 \x03(\x0b\x32\x32.opentelemetry.proto.metrics.v1.HistogramDataPoint\x12W\n\x17\x61ggregation_temporality\x18\x02 \x01(\x0e\x32\x36.opentelemetry.proto.metrics.v1.AggregationTemporality\"\xc3\x01\n\x14\x45xponentialHistogram\x12R\n\x0b\x64\x61ta_points\x18\x01 \x03(\x0b\x32=.opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint\x12W\n\x17\x61ggregation_temporality\x18\x02 \x01(\x0e\x32\x36.opentelemetry.proto.metrics.v1.AggregationTemporality\"P\n\x07Summary\x12\x45\n\x0b\x64\x61ta_points\x18\x01 \x03(\x0b\x32\x30.opentelemetry.proto.metrics.v1.SummaryDataPoint\"\x86\x02\n\x0fNumberDataPoint\x12;\n\nattributes\x18\x07 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12\x1c\n\x14start_time_unix_nano\x18\x02 \x01(\x06\x12\x16\n\x0etime_unix_nano\x18\x03 \x01(\x06\x12\x13\n\tas_double\x18\x04 \x01(\x01H\x00\x12\x10\n\x06\x61s_int\x18\x06 \x01(\x10H\x00\x12;\n\texemplars\x18\x05 \x03(\x0b\x32(.opentelemetry.proto.metrics.v1.Exemplar\x12\r\n\x05\x66lags\x18\x08 \x01(\rB\x07\n\x05valueJ\x04\x08\x01\x10\x02\"\xe6\x02\n\x12HistogramDataPoint\x12;\n\nattributes\x18\t \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12\x1c\n\x14start_time_unix_nano\x18\x02 \x01(\x06\x12\x16\n\x0etime_unix_nano\x18\x03 \x01(\x06\x12\r\n\x05\x63ount\x18\x04 \x01(\x06\x12\x10\n\x03sum\x18\x05 \x01(\x01H\x00\x88\x01\x01\x12\x15\n\rbucket_counts\x18\x06 \x03(\x06\x12\x17\n\x0f\x65xplicit_bounds\x18\x07 \x03(\x01\x12;\n\texemplars\x18\x08 \x03(\x0b\x32(.opentelemetry.proto.metrics.v1.Exemplar\x12\r\n\x05\x66lags\x18\n \x01(\r\x12\x10\n\x03min\x18\x0b \x01(\x01H\x01\x88\x01\x01\x12\x10\n\x03max\x18\x0c \x01(\x01H\x02\x88\x01\x01\x42\x06\n\x04_sumB\x06\n\x04_minB\x06\n\x04_maxJ\x04\x08\x01\x10\x02\"\xda\x04\n\x1d\x45xponentialHistogramDataPoint\x12;\n\nattributes\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12\x1c\n\x14start_time_unix_nano\x18\x02 \x01(\x06\x12\x16\n\x0etime_unix_nano\x18\x03 \x01(\x06\x12\r\n\x05\x63ount\x18\x04 \x01(\x06\x12\x10\n\x03sum\x18\x05 \x01(\x01H\x00\x88\x01\x01\x12\r\n\x05scale\x18\x06 \x01(\x11\x12\x12\n\nzero_count\x18\x07 \x01(\x06\x12W\n\x08positive\x18\x08 \x01(\x0b\x32\x45.opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint.Buckets\x12W\n\x08negative\x18\t \x01(\x0b\x32\x45.opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint.Buckets\x12\r\n\x05\x66lags\x18\n \x01(\r\x12;\n\texemplars\x18\x0b \x03(\x0b\x32(.opentelemetry.proto.metrics.v1.Exemplar\x12\x10\n\x03min\x18\x0c \x01(\x01H\x01\x88\x01\x01\x12\x10\n\x03max\x18\r \x01(\x01H\x02\x88\x01\x01\x12\x16\n\x0ezero_threshold\x18\x0e \x01(\x01\x1a\x30\n\x07\x42uckets\x12\x0e\n\x06offset\x18\x01 \x01(\x11\x12\x15\n\rbucket_counts\x18\x02 \x03(\x04\x42\x06\n\x04_sumB\x06\n\x04_minB\x06\n\x04_max\"\xc5\x02\n\x10SummaryDataPoint\x12;\n\nattributes\x18\x07 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12\x1c\n\x14start_time_unix_nano\x18\x02 \x01(\x06\x12\x16\n\x0etime_unix_nano\x18\x03 \x01(\x06\x12\r\n\x05\x63ount\x18\x04 \x01(\x06\x12\x0b\n\x03sum\x18\x05 \x01(\x01\x12Y\n\x0fquantile_values\x18\x06 \x03(\x0b\x32@.opentelemetry.proto.metrics.v1.SummaryDataPoint.ValueAtQuantile\x12\r\n\x05\x66lags\x18\x08 \x01(\r\x1a\x32\n\x0fValueAtQuantile\x12\x10\n\x08quantile\x18\x01 \x01(\x01\x12\r\n\x05value\x18\x02 \x01(\x01J\x04\x08\x01\x10\x02\"\xc1\x01\n\x08\x45xemplar\x12\x44\n\x13\x66iltered_attributes\x18\x07 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12\x16\n\x0etime_unix_nano\x18\x02 \x01(\x06\x12\x13\n\tas_double\x18\x03 \x01(\x01H\x00\x12\x10\n\x06\x61s_int\x18\x06 \x01(\x10H\x00\x12\x0f\n\x07span_id\x18\x04 \x01(\x0c\x12\x10\n\x08trace_id\x18\x05 \x01(\x0c\x42\x07\n\x05valueJ\x04\x08\x01\x10\x02*\x8c\x01\n\x16\x41ggregationTemporality\x12\'\n#AGGREGATION_TEMPORALITY_UNSPECIFIED\x10\x00\x12!\n\x1d\x41GGREGATION_TEMPORALITY_DELTA\x10\x01\x12&\n\"AGGREGATION_TEMPORALITY_CUMULATIVE\x10\x02*^\n\x0e\x44\x61taPointFlags\x12\x1f\n\x1b\x44\x41TA_POINT_FLAGS_DO_NOT_USE\x10\x00\x12+\n\'DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK\x10\x01\x42\x7f\n!io.opentelemetry.proto.metrics.v1B\x0cMetricsProtoP\x01Z)go.opentelemetry.io/proto/otlp/metrics/v1\xaa\x02\x1eOpenTelemetry.Proto.Metrics.V1b\x06proto3')
+
+_AGGREGATIONTEMPORALITY = DESCRIPTOR.enum_types_by_name['AggregationTemporality']
+AggregationTemporality = enum_type_wrapper.EnumTypeWrapper(_AGGREGATIONTEMPORALITY)
+_DATAPOINTFLAGS = DESCRIPTOR.enum_types_by_name['DataPointFlags']
+DataPointFlags = enum_type_wrapper.EnumTypeWrapper(_DATAPOINTFLAGS)
+AGGREGATION_TEMPORALITY_UNSPECIFIED = 0
+AGGREGATION_TEMPORALITY_DELTA = 1
+AGGREGATION_TEMPORALITY_CUMULATIVE = 2
+DATA_POINT_FLAGS_DO_NOT_USE = 0
+DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK = 1
+
+
+_METRICSDATA = DESCRIPTOR.message_types_by_name['MetricsData']
+_RESOURCEMETRICS = DESCRIPTOR.message_types_by_name['ResourceMetrics']
+_SCOPEMETRICS = DESCRIPTOR.message_types_by_name['ScopeMetrics']
+_METRIC = DESCRIPTOR.message_types_by_name['Metric']
+_GAUGE = DESCRIPTOR.message_types_by_name['Gauge']
+_SUM = DESCRIPTOR.message_types_by_name['Sum']
+_HISTOGRAM = DESCRIPTOR.message_types_by_name['Histogram']
+_EXPONENTIALHISTOGRAM = DESCRIPTOR.message_types_by_name['ExponentialHistogram']
+_SUMMARY = DESCRIPTOR.message_types_by_name['Summary']
+_NUMBERDATAPOINT = DESCRIPTOR.message_types_by_name['NumberDataPoint']
+_HISTOGRAMDATAPOINT = DESCRIPTOR.message_types_by_name['HistogramDataPoint']
+_EXPONENTIALHISTOGRAMDATAPOINT = DESCRIPTOR.message_types_by_name['ExponentialHistogramDataPoint']
+_EXPONENTIALHISTOGRAMDATAPOINT_BUCKETS = _EXPONENTIALHISTOGRAMDATAPOINT.nested_types_by_name['Buckets']
+_SUMMARYDATAPOINT = DESCRIPTOR.message_types_by_name['SummaryDataPoint']
+_SUMMARYDATAPOINT_VALUEATQUANTILE = _SUMMARYDATAPOINT.nested_types_by_name['ValueAtQuantile']
+_EXEMPLAR = DESCRIPTOR.message_types_by_name['Exemplar']
+MetricsData = _reflection.GeneratedProtocolMessageType('MetricsData', (_message.Message,), {
+ 'DESCRIPTOR' : _METRICSDATA,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.MetricsData)
+ })
+_sym_db.RegisterMessage(MetricsData)
+
+ResourceMetrics = _reflection.GeneratedProtocolMessageType('ResourceMetrics', (_message.Message,), {
+ 'DESCRIPTOR' : _RESOURCEMETRICS,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.ResourceMetrics)
+ })
+_sym_db.RegisterMessage(ResourceMetrics)
+
+ScopeMetrics = _reflection.GeneratedProtocolMessageType('ScopeMetrics', (_message.Message,), {
+ 'DESCRIPTOR' : _SCOPEMETRICS,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.ScopeMetrics)
+ })
+_sym_db.RegisterMessage(ScopeMetrics)
+
+Metric = _reflection.GeneratedProtocolMessageType('Metric', (_message.Message,), {
+ 'DESCRIPTOR' : _METRIC,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Metric)
+ })
+_sym_db.RegisterMessage(Metric)
+
+Gauge = _reflection.GeneratedProtocolMessageType('Gauge', (_message.Message,), {
+ 'DESCRIPTOR' : _GAUGE,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Gauge)
+ })
+_sym_db.RegisterMessage(Gauge)
+
+Sum = _reflection.GeneratedProtocolMessageType('Sum', (_message.Message,), {
+ 'DESCRIPTOR' : _SUM,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Sum)
+ })
+_sym_db.RegisterMessage(Sum)
+
+Histogram = _reflection.GeneratedProtocolMessageType('Histogram', (_message.Message,), {
+ 'DESCRIPTOR' : _HISTOGRAM,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Histogram)
+ })
+_sym_db.RegisterMessage(Histogram)
+
+ExponentialHistogram = _reflection.GeneratedProtocolMessageType('ExponentialHistogram', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPONENTIALHISTOGRAM,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.ExponentialHistogram)
+ })
+_sym_db.RegisterMessage(ExponentialHistogram)
+
+Summary = _reflection.GeneratedProtocolMessageType('Summary', (_message.Message,), {
+ 'DESCRIPTOR' : _SUMMARY,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Summary)
+ })
+_sym_db.RegisterMessage(Summary)
+
+NumberDataPoint = _reflection.GeneratedProtocolMessageType('NumberDataPoint', (_message.Message,), {
+ 'DESCRIPTOR' : _NUMBERDATAPOINT,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.NumberDataPoint)
+ })
+_sym_db.RegisterMessage(NumberDataPoint)
+
+HistogramDataPoint = _reflection.GeneratedProtocolMessageType('HistogramDataPoint', (_message.Message,), {
+ 'DESCRIPTOR' : _HISTOGRAMDATAPOINT,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.HistogramDataPoint)
+ })
+_sym_db.RegisterMessage(HistogramDataPoint)
+
+ExponentialHistogramDataPoint = _reflection.GeneratedProtocolMessageType('ExponentialHistogramDataPoint', (_message.Message,), {
+
+ 'Buckets' : _reflection.GeneratedProtocolMessageType('Buckets', (_message.Message,), {
+ 'DESCRIPTOR' : _EXPONENTIALHISTOGRAMDATAPOINT_BUCKETS,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint.Buckets)
+ })
+ ,
+ 'DESCRIPTOR' : _EXPONENTIALHISTOGRAMDATAPOINT,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint)
+ })
+_sym_db.RegisterMessage(ExponentialHistogramDataPoint)
+_sym_db.RegisterMessage(ExponentialHistogramDataPoint.Buckets)
+
+SummaryDataPoint = _reflection.GeneratedProtocolMessageType('SummaryDataPoint', (_message.Message,), {
+
+ 'ValueAtQuantile' : _reflection.GeneratedProtocolMessageType('ValueAtQuantile', (_message.Message,), {
+ 'DESCRIPTOR' : _SUMMARYDATAPOINT_VALUEATQUANTILE,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.SummaryDataPoint.ValueAtQuantile)
+ })
+ ,
+ 'DESCRIPTOR' : _SUMMARYDATAPOINT,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.SummaryDataPoint)
+ })
+_sym_db.RegisterMessage(SummaryDataPoint)
+_sym_db.RegisterMessage(SummaryDataPoint.ValueAtQuantile)
+
+Exemplar = _reflection.GeneratedProtocolMessageType('Exemplar', (_message.Message,), {
+ 'DESCRIPTOR' : _EXEMPLAR,
+ '__module__' : 'opentelemetry.proto.metrics.v1.metrics_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.metrics.v1.Exemplar)
+ })
+_sym_db.RegisterMessage(Exemplar)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n!io.opentelemetry.proto.metrics.v1B\014MetricsProtoP\001Z)go.opentelemetry.io/proto/otlp/metrics/v1\252\002\036OpenTelemetry.Proto.Metrics.V1'
+ _AGGREGATIONTEMPORALITY._serialized_start=3487
+ _AGGREGATIONTEMPORALITY._serialized_end=3627
+ _DATAPOINTFLAGS._serialized_start=3629
+ _DATAPOINTFLAGS._serialized_end=3723
+ _METRICSDATA._serialized_start=172
+ _METRICSDATA._serialized_end=260
+ _RESOURCEMETRICS._serialized_start=263
+ _RESOURCEMETRICS._serialized_end=438
+ _SCOPEMETRICS._serialized_start=441
+ _SCOPEMETRICS._serialized_end=600
+ _METRIC._serialized_start=603
+ _METRIC._serialized_end=1005
+ _GAUGE._serialized_start=1007
+ _GAUGE._serialized_end=1084
+ _SUM._serialized_start=1087
+ _SUM._serialized_end=1273
+ _HISTOGRAM._serialized_start=1276
+ _HISTOGRAM._serialized_end=1449
+ _EXPONENTIALHISTOGRAM._serialized_start=1452
+ _EXPONENTIALHISTOGRAM._serialized_end=1647
+ _SUMMARY._serialized_start=1649
+ _SUMMARY._serialized_end=1729
+ _NUMBERDATAPOINT._serialized_start=1732
+ _NUMBERDATAPOINT._serialized_end=1994
+ _HISTOGRAMDATAPOINT._serialized_start=1997
+ _HISTOGRAMDATAPOINT._serialized_end=2355
+ _EXPONENTIALHISTOGRAMDATAPOINT._serialized_start=2358
+ _EXPONENTIALHISTOGRAMDATAPOINT._serialized_end=2960
+ _EXPONENTIALHISTOGRAMDATAPOINT_BUCKETS._serialized_start=2888
+ _EXPONENTIALHISTOGRAMDATAPOINT_BUCKETS._serialized_end=2936
+ _SUMMARYDATAPOINT._serialized_start=2963
+ _SUMMARYDATAPOINT._serialized_end=3288
+ _SUMMARYDATAPOINT_VALUEATQUANTILE._serialized_start=3232
+ _SUMMARYDATAPOINT_VALUEATQUANTILE._serialized_end=3282
+ _EXEMPLAR._serialized_start=3291
+ _EXEMPLAR._serialized_end=3484
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.pyi
new file mode 100644
index 0000000000..ccbbb35cfb
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/metrics/v1/metrics_pb2.pyi
@@ -0,0 +1,1079 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.internal.enum_type_wrapper
+import google.protobuf.message
+import opentelemetry.proto.common.v1.common_pb2
+import opentelemetry.proto.resource.v1.resource_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class AggregationTemporality(_AggregationTemporality, metaclass=_AggregationTemporalityEnumTypeWrapper):
+ """AggregationTemporality defines how a metric aggregator reports aggregated
+ values. It describes how those values relate to the time interval over
+ which they are aggregated.
+ """
+ pass
+class _AggregationTemporality:
+ V = typing.NewType('V', builtins.int)
+class _AggregationTemporalityEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_AggregationTemporality.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ AGGREGATION_TEMPORALITY_UNSPECIFIED = AggregationTemporality.V(0)
+ """UNSPECIFIED is the default AggregationTemporality, it MUST not be used."""
+
+ AGGREGATION_TEMPORALITY_DELTA = AggregationTemporality.V(1)
+ """DELTA is an AggregationTemporality for a metric aggregator which reports
+ changes since last report time. Successive metrics contain aggregation of
+ values from continuous and non-overlapping intervals.
+
+ The values for a DELTA metric are based only on the time interval
+ associated with one measurement cycle. There is no dependency on
+ previous measurements, unlike CUMULATIVE metrics.
+
+ For example, consider a system measuring the number of requests that
+ it receives and reports the sum of these requests every second as a
+ DELTA metric:
+
+ 1. The system starts receiving at time=t_0.
+ 2. A request is received, the system measures 1 request.
+ 3. A request is received, the system measures 1 request.
+ 4. A request is received, the system measures 1 request.
+ 5. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+1 with a value of 3.
+ 6. A request is received, the system measures 1 request.
+ 7. A request is received, the system measures 1 request.
+ 8. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0+1 to
+ t_0+2 with a value of 2.
+ """
+
+ AGGREGATION_TEMPORALITY_CUMULATIVE = AggregationTemporality.V(2)
+ """CUMULATIVE is an AggregationTemporality for a metric aggregator which
+ reports changes since a fixed start time. This means that current values
+ of a CUMULATIVE metric depend on all previous measurements since the
+ start time. Because of this, the sender is required to retain this state
+ in some form. If this state is lost or invalidated, the CUMULATIVE metric
+ values MUST be reset and a new fixed start time following the last
+ reported measurement time sent MUST be used.
+
+ For example, consider a system measuring the number of requests that
+ it receives and reports the sum of these requests every second as a
+ CUMULATIVE metric:
+
+ 1. The system starts receiving at time=t_0.
+ 2. A request is received, the system measures 1 request.
+ 3. A request is received, the system measures 1 request.
+ 4. A request is received, the system measures 1 request.
+ 5. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+1 with a value of 3.
+ 6. A request is received, the system measures 1 request.
+ 7. A request is received, the system measures 1 request.
+ 8. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+2 with a value of 5.
+ 9. The system experiences a fault and loses state.
+ 10. The system recovers and resumes receiving at time=t_1.
+ 11. A request is received, the system measures 1 request.
+ 12. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_1 to
+ t_1+1 with a value of 1.
+
+ Note: Even though it is valid to use CUMULATIVE when reporting changes
+ since the last report time, doing so is not recommended. It may cause
+ problems for systems that do not use start_time to determine when the
+ aggregation value was reset (e.g. Prometheus).
+ """
+
+
+AGGREGATION_TEMPORALITY_UNSPECIFIED = AggregationTemporality.V(0)
+"""UNSPECIFIED is the default AggregationTemporality, it MUST not be used."""
+
+AGGREGATION_TEMPORALITY_DELTA = AggregationTemporality.V(1)
+"""DELTA is an AggregationTemporality for a metric aggregator which reports
+changes since last report time. Successive metrics contain aggregation of
+values from continuous and non-overlapping intervals.
+
+The values for a DELTA metric are based only on the time interval
+associated with one measurement cycle. There is no dependency on
+previous measurements, unlike CUMULATIVE metrics.
+
+For example, consider a system measuring the number of requests that
+it receives and reports the sum of these requests every second as a
+DELTA metric:
+
+ 1. The system starts receiving at time=t_0.
+ 2. A request is received, the system measures 1 request.
+ 3. A request is received, the system measures 1 request.
+ 4. A request is received, the system measures 1 request.
+ 5. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+1 with a value of 3.
+ 6. A request is received, the system measures 1 request.
+ 7. A request is received, the system measures 1 request.
+ 8. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0+1 to
+ t_0+2 with a value of 2.
+"""
+
+AGGREGATION_TEMPORALITY_CUMULATIVE = AggregationTemporality.V(2)
+"""CUMULATIVE is an AggregationTemporality for a metric aggregator which
+reports changes since a fixed start time. This means that current values
+of a CUMULATIVE metric depend on all previous measurements since the
+start time. Because of this, the sender is required to retain this state
+in some form. If this state is lost or invalidated, the CUMULATIVE metric
+values MUST be reset and a new fixed start time following the last
+reported measurement time sent MUST be used.
+
+For example, consider a system measuring the number of requests that
+it receives and reports the sum of these requests every second as a
+CUMULATIVE metric:
+
+ 1. The system starts receiving at time=t_0.
+ 2. A request is received, the system measures 1 request.
+ 3. A request is received, the system measures 1 request.
+ 4. A request is received, the system measures 1 request.
+ 5. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+1 with a value of 3.
+ 6. A request is received, the system measures 1 request.
+ 7. A request is received, the system measures 1 request.
+ 8. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_0 to
+ t_0+2 with a value of 5.
+ 9. The system experiences a fault and loses state.
+ 10. The system recovers and resumes receiving at time=t_1.
+ 11. A request is received, the system measures 1 request.
+ 12. The 1 second collection cycle ends. A metric is exported for the
+ number of requests received over the interval of time t_1 to
+ t_1+1 with a value of 1.
+
+Note: Even though it is valid to use CUMULATIVE when reporting changes
+since the last report time, doing so is not recommended. It may cause
+problems for systems that do not use start_time to determine when the
+aggregation value was reset (e.g. Prometheus).
+"""
+
+global___AggregationTemporality = AggregationTemporality
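+
+# Small worked illustration (hand-written, not generated) of the examples
+# above: the same request stream reported under the two temporalities.
+#
+#     deltas = [3, 2]       # per-cycle counts, as in steps 5 and 8 (DELTA)
+#     cumulative = [3, 5]   # running totals since t_0 (CUMULATIVE)
+#     assert cumulative == [sum(deltas[:i + 1]) for i in range(len(deltas))]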
+
+
+class DataPointFlags(_DataPointFlags, metaclass=_DataPointFlagsEnumTypeWrapper):
+ """DataPointFlags is defined as a protobuf 'uint32' type and is to be used as a
+ bit-field representing 32 distinct boolean flags. Each flag defined in this
+ enum is a bit-mask. To test the presence of a single flag in the flags of
+ a data point, for example, use an expression like:
+
+ (point.flags & DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK) == DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK
+ """
+ pass
+class _DataPointFlags:
+ V = typing.NewType('V', builtins.int)
+class _DataPointFlagsEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_DataPointFlags.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ DATA_POINT_FLAGS_DO_NOT_USE = DataPointFlags.V(0)
+ """The zero value for the enum. Should not be used for comparisons.
+ Instead use bitwise "and" with the appropriate mask as shown above.
+ """
+
+ DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK = DataPointFlags.V(1)
+ """This DataPoint is valid but has no recorded value. This value
+ SHOULD be used to reflect explicitly missing data in a series, as
+ for an equivalent to the Prometheus "staleness marker".
+ """
+
+
+DATA_POINT_FLAGS_DO_NOT_USE = DataPointFlags.V(0)
+"""The zero value for the enum. Should not be used for comparisons.
+Instead use bitwise "and" with the appropriate mask as shown above.
+"""
+
+DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK = DataPointFlags.V(1)
+"""This DataPoint is valid but has no recorded value. This value
+SHOULD be used to reflect explicitly missing data in a series, as
+for an equivalent to the Prometheus "staleness marker".
+"""
+
+global___DataPointFlags = DataPointFlags
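+
+# Illustrative sketch (hand-written, not generated) of the flag test shown
+# above, using the generated metrics_pb2 runtime module from this change:
+#
+#     from opentelemetry.proto.metrics.v1 import metrics_pb2
+#
+#     point = metrics_pb2.NumberDataPoint(
+#         flags=metrics_pb2.DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK)
+#     no_value = (point.flags & metrics_pb2.DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK) \
+#         == metrics_pb2.DATA_POINT_FLAGS_NO_RECORDED_VALUE_MASK
+#     assert no_value  # point is valid but carries no recorded value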
+
+
+class MetricsData(google.protobuf.message.Message):
+ """MetricsData represents the metrics data that can be stored in a persistent
+ storage, OR can be embedded by other protocols that transfer OTLP metrics
+ data but do not implement the OTLP protocol.
+
+ The main difference between this message and collector protocol is that
+ in this message there will not be any "control" or "metadata" specific to
+ OTLP protocol.
+
+ When new fields are added into this message, the OTLP request MUST be updated
+ as well.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_METRICS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_metrics(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ResourceMetrics]:
+ """An array of ResourceMetrics.
+ For data coming from a single resource this array will typically contain
+ one element. Intermediary nodes that receive data from multiple origins
+ typically batch the data before forwarding further and in that case this
+ array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_metrics : typing.Optional[typing.Iterable[global___ResourceMetrics]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_metrics",b"resource_metrics"]) -> None: ...
+global___MetricsData = MetricsData
+
+class ResourceMetrics(google.protobuf.message.Message):
+ """A collection of ScopeMetrics from a Resource."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_FIELD_NUMBER: builtins.int
+ SCOPE_METRICS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def resource(self) -> opentelemetry.proto.resource.v1.resource_pb2.Resource:
+ """The resource for the metrics in this message.
+ If this field is not set then no resource info is known.
+ """
+ pass
+ @property
+ def scope_metrics(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ScopeMetrics]:
+ """A list of metrics that originate from a resource."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to the data in the "resource" field. It does not apply
+ to the data in the "scope_metrics" field which have their own schema_url field.
+ """
+
+ def __init__(self,
+ *,
+ resource : typing.Optional[opentelemetry.proto.resource.v1.resource_pb2.Resource] = ...,
+ scope_metrics : typing.Optional[typing.Iterable[global___ScopeMetrics]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["resource",b"resource"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource",b"resource","schema_url",b"schema_url","scope_metrics",b"scope_metrics"]) -> None: ...
+global___ResourceMetrics = ResourceMetrics
+
+class ScopeMetrics(google.protobuf.message.Message):
+ """A collection of Metrics produced by an Scope."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ SCOPE_FIELD_NUMBER: builtins.int
+ METRICS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def scope(self) -> opentelemetry.proto.common.v1.common_pb2.InstrumentationScope:
+ """The instrumentation scope information for the metrics in this message.
+ Semantically, when InstrumentationScope isn't set, it is equivalent to
+ an empty instrumentation scope name (unknown).
+ """
+ pass
+ @property
+ def metrics(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Metric]:
+ """A list of metrics that originate from an instrumentation library."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to all metrics in the "metrics" field."""
+
+ def __init__(self,
+ *,
+ scope : typing.Optional[opentelemetry.proto.common.v1.common_pb2.InstrumentationScope] = ...,
+ metrics : typing.Optional[typing.Iterable[global___Metric]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["scope",b"scope"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["metrics",b"metrics","schema_url",b"schema_url","scope",b"scope"]) -> None: ...
+global___ScopeMetrics = ScopeMetrics
+
+class Metric(google.protobuf.message.Message):
+ """Defines a Metric which has one or more timeseries. The following is a
+ brief summary of the Metric data model. For more details, see:
+
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md
+
+
+ The data model and relation between entities is shown in the
+ diagram below. Here, "DataPoint" is the term used to refer to any
+ one of the specific data point value types, and "points" is the term used
+ to refer to any one of the lists of points contained in the Metric.
+
+ - Metric is composed of metadata and data.
+ - The metadata part contains a name, description, and unit.
+ - Data is one of the possible types (Sum, Gauge, Histogram, Summary).
+ - DataPoint contains timestamps, attributes, and one of the possible value type
+ fields.
+
+ Metric
+ +------------+
+ |name |
+ |description |
+ |unit | +------------------------------------+
+ |data |---> |Gauge, Sum, Histogram, Summary, ... |
+ +------------+ +------------------------------------+
+
+ Data [One of Gauge, Sum, Histogram, Summary, ...]
+ +-----------+
+ |... | // Metadata about the Data.
+ |points |--+
+ +-----------+ |
+ | +---------------------------+
+ | |DataPoint 1 |
+ v |+------+------+ +------+ |
+ +-----+ ||label |label |...|label | |
+ | 1 |-->||value1|value2|...|valueN| |
+ +-----+ |+------+------+ +------+ |
+ | . | |+-----+ |
+ | . | ||value| |
+ | . | |+-----+ |
+ | . | +---------------------------+
+ | . | .
+ | . | .
+ | . | .
+ | . | +---------------------------+
+ | . | |DataPoint M |
+ +-----+ |+------+------+ +------+ |
+ | M |-->||label |label |...|label | |
+ +-----+ ||value1|value2|...|valueN| |
+ |+------+------+ +------+ |
+ |+-----+ |
+ ||value| |
+ |+-----+ |
+ +---------------------------+
+
+ Each distinct type of DataPoint represents the output of a specific
+ aggregation function, the result of applying the DataPoint's
+ associated function to one or more measurements.
+
+ All DataPoint types have three common fields:
+ - Attributes includes key-value pairs associated with the data point
+ - TimeUnixNano is required, set to the end time of the aggregation
+ - StartTimeUnixNano is optional, but strongly encouraged for DataPoints
+ having an AggregationTemporality field, as discussed below.
+
+ Both TimeUnixNano and StartTimeUnixNano values are expressed as
+ UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
+
+ # TimeUnixNano
+
+ This field is required, having consistent interpretation across
+ DataPoint types. TimeUnixNano is the moment corresponding to when
+ the data point's aggregate value was captured.
+
+ Data points with the 0 value for TimeUnixNano SHOULD be rejected
+ by consumers.
+
+ # StartTimeUnixNano
+
+ StartTimeUnixNano in general allows detecting when a sequence of
+ observations is unbroken. This field indicates to consumers the
+ start time for points with cumulative and delta
+ AggregationTemporality, and it should be included whenever possible
+ to support correct rate calculation. Although it may be omitted
+ when the start time is truly unknown, setting StartTimeUnixNano is
+ strongly encouraged.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ NAME_FIELD_NUMBER: builtins.int
+ DESCRIPTION_FIELD_NUMBER: builtins.int
+ UNIT_FIELD_NUMBER: builtins.int
+ GAUGE_FIELD_NUMBER: builtins.int
+ SUM_FIELD_NUMBER: builtins.int
+ HISTOGRAM_FIELD_NUMBER: builtins.int
+ EXPONENTIAL_HISTOGRAM_FIELD_NUMBER: builtins.int
+ SUMMARY_FIELD_NUMBER: builtins.int
+ name: typing.Text = ...
+ """name of the metric, including its DNS name prefix. It must be unique."""
+
+ description: typing.Text = ...
+ """description of the metric, which can be used in documentation."""
+
+ unit: typing.Text = ...
+ """unit in which the metric value is reported. Follows the format
+ described by http://unitsofmeasure.org/ucum.html.
+ """
+
+ @property
+ def gauge(self) -> global___Gauge: ...
+ @property
+ def sum(self) -> global___Sum: ...
+ @property
+ def histogram(self) -> global___Histogram: ...
+ @property
+ def exponential_histogram(self) -> global___ExponentialHistogram: ...
+ @property
+ def summary(self) -> global___Summary: ...
+ def __init__(self,
+ *,
+ name : typing.Text = ...,
+ description : typing.Text = ...,
+ unit : typing.Text = ...,
+ gauge : typing.Optional[global___Gauge] = ...,
+ sum : typing.Optional[global___Sum] = ...,
+ histogram : typing.Optional[global___Histogram] = ...,
+ exponential_histogram : typing.Optional[global___ExponentialHistogram] = ...,
+ summary : typing.Optional[global___Summary] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["data",b"data","exponential_histogram",b"exponential_histogram","gauge",b"gauge","histogram",b"histogram","sum",b"sum","summary",b"summary"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["data",b"data","description",b"description","exponential_histogram",b"exponential_histogram","gauge",b"gauge","histogram",b"histogram","name",b"name","sum",b"sum","summary",b"summary","unit",b"unit"]) -> None: ...
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["data",b"data"]) -> typing.Optional[typing_extensions.Literal["gauge","sum","histogram","exponential_histogram","summary"]]: ...
+global___Metric = Metric
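+
+# Illustrative sketch (hand-written, not generated) of the "data" oneof
+# described above, using the generated metrics_pb2 runtime module:
+#
+#     from opentelemetry.proto.metrics.v1 import metrics_pb2
+#
+#     metric = metrics_pb2.Metric(
+#         name="request_count",  # hypothetical metric name
+#         unit="1",
+#         sum=metrics_pb2.Sum(
+#             aggregation_temporality=metrics_pb2.AGGREGATION_TEMPORALITY_CUMULATIVE,
+#             is_monotonic=True,
+#         ),
+#     )
+#     assert metric.WhichOneof("data") == "sum"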
+
+class Gauge(google.protobuf.message.Message):
+ """Gauge represents the type of a scalar metric that always exports the
+ "current value" for every data point. It should be used for an "unknown"
+ aggregation.
+
+ A Gauge does not support different aggregation temporalities. Given the
+ aggregation is unknown, points cannot be combined using the same
+ aggregation, regardless of aggregation temporalities. Therefore,
+ AggregationTemporality is not included. Consequently, this also means
+ "StartTimeUnixNano" is ignored for all data points.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ DATA_POINTS_FIELD_NUMBER: builtins.int
+ @property
+ def data_points(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___NumberDataPoint]: ...
+ def __init__(self,
+ *,
+ data_points : typing.Optional[typing.Iterable[global___NumberDataPoint]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["data_points",b"data_points"]) -> None: ...
+global___Gauge = Gauge
+
+class Sum(google.protobuf.message.Message):
+ """Sum represents the type of a scalar metric that is calculated as a sum of all
+ reported measurements over a time interval.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ DATA_POINTS_FIELD_NUMBER: builtins.int
+ AGGREGATION_TEMPORALITY_FIELD_NUMBER: builtins.int
+ IS_MONOTONIC_FIELD_NUMBER: builtins.int
+ @property
+ def data_points(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___NumberDataPoint]: ...
+ aggregation_temporality: global___AggregationTemporality.V = ...
+ """aggregation_temporality describes if the aggregator reports delta changes
+ since last report time, or cumulative changes since a fixed start time.
+ """
+
+ is_monotonic: builtins.bool = ...
+ """If "true" means that the sum is monotonic."""
+
+ def __init__(self,
+ *,
+ data_points : typing.Optional[typing.Iterable[global___NumberDataPoint]] = ...,
+ aggregation_temporality : global___AggregationTemporality.V = ...,
+ is_monotonic : builtins.bool = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["aggregation_temporality",b"aggregation_temporality","data_points",b"data_points","is_monotonic",b"is_monotonic"]) -> None: ...
+global___Sum = Sum
+
+class Histogram(google.protobuf.message.Message):
+ """Histogram represents the type of a metric that is calculated by aggregating
+ as a Histogram of all reported measurements over a time interval.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ DATA_POINTS_FIELD_NUMBER: builtins.int
+ AGGREGATION_TEMPORALITY_FIELD_NUMBER: builtins.int
+ @property
+ def data_points(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___HistogramDataPoint]: ...
+ aggregation_temporality: global___AggregationTemporality.V = ...
+ """aggregation_temporality describes if the aggregator reports delta changes
+ since last report time, or cumulative changes since a fixed start time.
+ """
+
+ def __init__(self,
+ *,
+ data_points : typing.Optional[typing.Iterable[global___HistogramDataPoint]] = ...,
+ aggregation_temporality : global___AggregationTemporality.V = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["aggregation_temporality",b"aggregation_temporality","data_points",b"data_points"]) -> None: ...
+global___Histogram = Histogram
+
+class ExponentialHistogram(google.protobuf.message.Message):
+ """ExponentialHistogram represents the type of a metric that is calculated by aggregating
+ as a ExponentialHistogram of all reported double measurements over a time interval.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ DATA_POINTS_FIELD_NUMBER: builtins.int
+ AGGREGATION_TEMPORALITY_FIELD_NUMBER: builtins.int
+ @property
+ def data_points(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ExponentialHistogramDataPoint]: ...
+ aggregation_temporality: global___AggregationTemporality.V = ...
+ """aggregation_temporality describes if the aggregator reports delta changes
+ since last report time, or cumulative changes since a fixed start time.
+ """
+
+ def __init__(self,
+ *,
+ data_points : typing.Optional[typing.Iterable[global___ExponentialHistogramDataPoint]] = ...,
+ aggregation_temporality : global___AggregationTemporality.V = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["aggregation_temporality",b"aggregation_temporality","data_points",b"data_points"]) -> None: ...
+global___ExponentialHistogram = ExponentialHistogram
+
+class Summary(google.protobuf.message.Message):
+ """Summary metric data are used to convey quantile summaries,
+ a Prometheus (see: https://prometheus.io/docs/concepts/metric_types/#summary)
+ and OpenMetrics (see: https://github.com/OpenObservability/OpenMetrics/blob/4dbf6075567ab43296eed941037c12951faafb92/protos/prometheus.proto#L45)
+ data type. These data points cannot always be merged in a meaningful way.
+ While they can be useful in some applications, histogram data points are
+ recommended for new applications.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ DATA_POINTS_FIELD_NUMBER: builtins.int
+ @property
+ def data_points(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___SummaryDataPoint]: ...
+ def __init__(self,
+ *,
+ data_points : typing.Optional[typing.Iterable[global___SummaryDataPoint]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["data_points",b"data_points"]) -> None: ...
+global___Summary = Summary
+
+class NumberDataPoint(google.protobuf.message.Message):
+ """NumberDataPoint is a single data point in a timeseries that describes the
+ time-varying scalar value of a metric.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ START_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ AS_DOUBLE_FIELD_NUMBER: builtins.int
+ AS_INT_FIELD_NUMBER: builtins.int
+ EXEMPLARS_FIELD_NUMBER: builtins.int
+ FLAGS_FIELD_NUMBER: builtins.int
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """The set of key/value pairs that uniquely identify the timeseries from
+ where this point belongs. The list may be empty (may contain 0 elements).
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ start_time_unix_nano: builtins.int = ...
+ """StartTimeUnixNano is optional but strongly encouraged, see the
+ detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ time_unix_nano: builtins.int = ...
+ """TimeUnixNano is required, see the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ as_double: builtins.float = ...
+ as_int: builtins.int = ...
+ @property
+ def exemplars(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Exemplar]:
+ """(Optional) List of exemplars collected from
+ measurements that were used to form the data point
+ """
+ pass
+ flags: builtins.int = ...
+ """Flags that apply to this specific data point. See DataPointFlags
+ for the available flags and their meaning.
+ """
+
+ def __init__(self,
+ *,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ start_time_unix_nano : builtins.int = ...,
+ time_unix_nano : builtins.int = ...,
+ as_double : builtins.float = ...,
+ as_int : builtins.int = ...,
+ exemplars : typing.Optional[typing.Iterable[global___Exemplar]] = ...,
+ flags : builtins.int = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["as_double",b"as_double","as_int",b"as_int","value",b"value"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["as_double",b"as_double","as_int",b"as_int","attributes",b"attributes","exemplars",b"exemplars","flags",b"flags","start_time_unix_nano",b"start_time_unix_nano","time_unix_nano",b"time_unix_nano","value",b"value"]) -> None: ...
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["value",b"value"]) -> typing.Optional[typing_extensions.Literal["as_double","as_int"]]: ...
+global___NumberDataPoint = NumberDataPoint
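+
+# Illustrative sketch (hand-written, not generated) of the as_double/as_int
+# "value" oneof, using the generated metrics_pb2 runtime module:
+#
+#     from opentelemetry.proto.metrics.v1 import metrics_pb2
+#
+#     point = metrics_pb2.NumberDataPoint(as_int=42)
+#     assert point.WhichOneof("value") == "as_int"
+#     point.as_double = 42.0  # setting the other member switches the oneof
+#     assert point.WhichOneof("value") == "as_double"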
+
+class HistogramDataPoint(google.protobuf.message.Message):
+ """HistogramDataPoint is a single data point in a timeseries that describes the
+ time-varying values of a Histogram. A Histogram contains summary statistics
+ for a population of values; it may optionally contain the distribution of
+ those values across a set of buckets.
+
+ If the histogram contains the distribution of values, then both
+ "explicit_bounds" and "bucket_counts" fields must be defined.
+ If the histogram does not contain the distribution of values, then both
+ "explicit_bounds" and "bucket_counts" must be omitted and only "count" and
+ "sum" are known.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ START_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ COUNT_FIELD_NUMBER: builtins.int
+ SUM_FIELD_NUMBER: builtins.int
+ BUCKET_COUNTS_FIELD_NUMBER: builtins.int
+ EXPLICIT_BOUNDS_FIELD_NUMBER: builtins.int
+ EXEMPLARS_FIELD_NUMBER: builtins.int
+ FLAGS_FIELD_NUMBER: builtins.int
+ MIN_FIELD_NUMBER: builtins.int
+ MAX_FIELD_NUMBER: builtins.int
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """The set of key/value pairs that uniquely identify the timeseries from
+ where this point belongs. The list may be empty (may contain 0 elements).
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ start_time_unix_nano: builtins.int = ...
+ """StartTimeUnixNano is optional but strongly encouraged, see the
+ detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ time_unix_nano: builtins.int = ...
+ """TimeUnixNano is required, see the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ count: builtins.int = ...
+ """count is the number of values in the population. Must be non-negative. This
+ value must be equal to the sum of the "count" fields in buckets if a
+ histogram is provided.
+ """
+
+ sum: builtins.float = ...
+ """sum of the values in the population. If count is zero then this field
+ must be zero.
+
+ Note: Sum should only be filled out when measuring non-negative discrete
+ events, and is assumed to be monotonic over the values of these events.
+ Negative events *can* be recorded, but sum should not be filled out when
+ doing so. This is specifically to enforce compatibility w/ OpenMetrics,
+ see: https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#histogram
+ """
+
+ @property
+ def bucket_counts(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.int]:
+ """bucket_counts is an optional field contains the count values of histogram
+ for each bucket.
+
+ The sum of the bucket_counts must equal the value in the count field.
+
+ The number of elements in bucket_counts array must be by one greater than
+ the number of elements in explicit_bounds array.
+ """
+ pass
+ @property
+ def explicit_bounds(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.float]:
+ """explicit_bounds specifies buckets with explicitly defined bounds for values.
+
+ The boundaries for bucket at index i are:
+
+ (-infinity, explicit_bounds[i]] for i == 0
+ (explicit_bounds[i-1], explicit_bounds[i]] for 0 < i < size(explicit_bounds)
+ (explicit_bounds[i-1], +infinity) for i == size(explicit_bounds)
+
+ The values in the explicit_bounds array must be strictly increasing.
+
+ Histogram buckets are inclusive of their upper boundary, except the last
+ bucket where the boundary is at infinity. This format is intentionally
+ compatible with the OpenMetrics histogram definition.
+ """
+ pass
+ @property
+ def exemplars(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Exemplar]:
+ """(Optional) List of exemplars collected from
+ measurements that were used to form the data point.
+ """
+ pass
+ flags: builtins.int = ...
+ """Flags that apply to this specific data point. See DataPointFlags
+ for the available flags and their meaning.
+ """
+
+ min: builtins.float = ...
+ """min is the minimum value over (start_time, end_time]."""
+
+ max: builtins.float = ...
+ """max is the maximum value over (start_time, end_time]."""
+
+ def __init__(self,
+ *,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ start_time_unix_nano : builtins.int = ...,
+ time_unix_nano : builtins.int = ...,
+ count : builtins.int = ...,
+ sum : builtins.float = ...,
+ bucket_counts : typing.Optional[typing.Iterable[builtins.int]] = ...,
+ explicit_bounds : typing.Optional[typing.Iterable[builtins.float]] = ...,
+ exemplars : typing.Optional[typing.Iterable[global___Exemplar]] = ...,
+ flags : builtins.int = ...,
+ min : builtins.float = ...,
+ max : builtins.float = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["_max",b"_max","_min",b"_min","_sum",b"_sum","max",b"max","min",b"min","sum",b"sum"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["_max",b"_max","_min",b"_min","_sum",b"_sum","attributes",b"attributes","bucket_counts",b"bucket_counts","count",b"count","exemplars",b"exemplars","explicit_bounds",b"explicit_bounds","flags",b"flags","max",b"max","min",b"min","start_time_unix_nano",b"start_time_unix_nano","sum",b"sum","time_unix_nano",b"time_unix_nano"]) -> None: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_max",b"_max"]) -> typing.Optional[typing_extensions.Literal["max"]]: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_min",b"_min"]) -> typing.Optional[typing_extensions.Literal["min"]]: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_sum",b"_sum"]) -> typing.Optional[typing_extensions.Literal["sum"]]: ...
+global___HistogramDataPoint = HistogramDataPoint
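+
+# A minimal usage sketch (illustrative, not part of the generated stub): a
+# HistogramDataPoint that honors the invariants documented above, assuming the
+# generated runtime module metrics_pb2 sits next to this stub.
+#
+# from opentelemetry.proto.metrics.v1.metrics_pb2 import HistogramDataPoint
+#
+# point = HistogramDataPoint(
+#     time_unix_nano=1_700_000_000_000_000_000,
+#     count=7,                      # equals sum(bucket_counts)
+#     sum=21.5,
+#     explicit_bounds=[1.0, 5.0],   # buckets: (-inf, 1], (1, 5], (5, +inf)
+#     bucket_counts=[2, 4, 1],      # one more element than explicit_bounds
+# )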
+
+class ExponentialHistogramDataPoint(google.protobuf.message.Message):
+ """ExponentialHistogramDataPoint is a single data point in a timeseries that describes the
+ time-varying values of a ExponentialHistogram of double values. A ExponentialHistogram contains
+ summary statistics for a population of values, it may optionally contain the
+ distribution of those values across a set of buckets.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ class Buckets(google.protobuf.message.Message):
+ """Buckets are a set of bucket counts, encoded in a contiguous array
+ of counts.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ OFFSET_FIELD_NUMBER: builtins.int
+ BUCKET_COUNTS_FIELD_NUMBER: builtins.int
+ offset: builtins.int = ...
+ """Offset is the bucket index of the first entry in the bucket_counts array.
+
+ Note: This uses a varint encoding as a simple form of compression.
+ """
+
+ @property
+ def bucket_counts(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.int]:
+ """bucket_counts is an array of count values, where bucket_counts[i] carries
+ the count of the bucket at index (offset+i). bucket_counts[i] is the count
+ of values greater than base^(offset+i) and less than or equal to
+ base^(offset+i+1).
+
+ Note: By contrast, the explicit HistogramDataPoint uses
+ fixed64. This field is expected to have many buckets,
+ especially zeros, so uint64 has been selected to ensure
+ varint encoding.
+ """
+ pass
+ def __init__(self,
+ *,
+ offset : builtins.int = ...,
+ bucket_counts : typing.Optional[typing.Iterable[builtins.int]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["bucket_counts",b"bucket_counts","offset",b"offset"]) -> None: ...
+
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ START_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ COUNT_FIELD_NUMBER: builtins.int
+ SUM_FIELD_NUMBER: builtins.int
+ SCALE_FIELD_NUMBER: builtins.int
+ ZERO_COUNT_FIELD_NUMBER: builtins.int
+ POSITIVE_FIELD_NUMBER: builtins.int
+ NEGATIVE_FIELD_NUMBER: builtins.int
+ FLAGS_FIELD_NUMBER: builtins.int
+ EXEMPLARS_FIELD_NUMBER: builtins.int
+ MIN_FIELD_NUMBER: builtins.int
+ MAX_FIELD_NUMBER: builtins.int
+ ZERO_THRESHOLD_FIELD_NUMBER: builtins.int
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """The set of key/value pairs that uniquely identify the timeseries from
+ where this point belongs. The list may be empty (may contain 0 elements).
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ start_time_unix_nano: builtins.int = ...
+ """StartTimeUnixNano is optional but strongly encouraged, see the
+ the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ time_unix_nano: builtins.int = ...
+ """TimeUnixNano is required, see the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ count: builtins.int = ...
+ """count is the number of values in the population. Must be
+ non-negative. This value must be equal to the sum of the "bucket_counts"
+ values in the positive and negative Buckets plus the "zero_count" field.
+ """
+
+ sum: builtins.float = ...
+ """sum of the values in the population. If count is zero then this field
+ must be zero.
+
+ Note: Sum should only be filled out when measuring non-negative discrete
+ events, and is assumed to be monotonic over the values of these events.
+ Negative events *can* be recorded, but sum should not be filled out when
+ doing so. This is specifically to enforce compatibility w/ OpenMetrics,
+ see: https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#histogram
+ """
+
+ scale: builtins.int = ...
+ """scale describes the resolution of the histogram. Boundaries are
+ located at powers of the base, where:
+
+ base = (2^(2^-scale))
+
+ The histogram bucket identified by `index`, a signed integer,
+ contains values that are greater than (base^index) and
+ less than or equal to (base^(index+1)).
+
+ The positive and negative ranges of the histogram are expressed
+ separately. Negative values are mapped by their absolute value
+ into the negative range using the same scale as the positive range.
+
+ scale is not restricted by the protocol, as the permissible
+ values depend on the range of the data.
+ """
+
+ zero_count: builtins.int = ...
+ """zero_count is the count of values that are either exactly zero or
+ within the region considered zero by the instrumentation at the
+ tolerated degree of precision. This bucket stores values that
+ cannot be expressed using the standard exponential formula as
+ well as values that have been rounded to zero.
+
+ Implementations MAY consider the zero bucket to have probability
+ mass equal to (zero_count / count).
+ """
+
+ @property
+ def positive(self) -> global___ExponentialHistogramDataPoint.Buckets:
+ """positive carries the positive range of exponential bucket counts."""
+ pass
+ @property
+ def negative(self) -> global___ExponentialHistogramDataPoint.Buckets:
+ """negative carries the negative range of exponential bucket counts."""
+ pass
+ flags: builtins.int = ...
+ """Flags that apply to this specific data point. See DataPointFlags
+ for the available flags and their meaning.
+ """
+
+ @property
+ def exemplars(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Exemplar]:
+ """(Optional) List of exemplars collected from
+ measurements that were used to form the data point.
+ """
+ pass
+ min: builtins.float = ...
+ """min is the minimum value over (start_time, end_time]."""
+
+ max: builtins.float = ...
+ """max is the maximum value over (start_time, end_time]."""
+
+ zero_threshold: builtins.float = ...
+ """ZeroThreshold may be optionally set to convey the width of the zero
+ region. Where the zero region is defined as the closed interval
+ [-ZeroThreshold, ZeroThreshold].
+ When ZeroThreshold is 0, zero count bucket stores values that cannot be
+ expressed using the standard exponential formula as well as values that
+ have been rounded to zero.
+ """
+
+ def __init__(self,
+ *,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ start_time_unix_nano : builtins.int = ...,
+ time_unix_nano : builtins.int = ...,
+ count : builtins.int = ...,
+ sum : builtins.float = ...,
+ scale : builtins.int = ...,
+ zero_count : builtins.int = ...,
+ positive : typing.Optional[global___ExponentialHistogramDataPoint.Buckets] = ...,
+ negative : typing.Optional[global___ExponentialHistogramDataPoint.Buckets] = ...,
+ flags : builtins.int = ...,
+ exemplars : typing.Optional[typing.Iterable[global___Exemplar]] = ...,
+ min : builtins.float = ...,
+ max : builtins.float = ...,
+ zero_threshold : builtins.float = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["_max",b"_max","_min",b"_min","_sum",b"_sum","max",b"max","min",b"min","negative",b"negative","positive",b"positive","sum",b"sum"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["_max",b"_max","_min",b"_min","_sum",b"_sum","attributes",b"attributes","count",b"count","exemplars",b"exemplars","flags",b"flags","max",b"max","min",b"min","negative",b"negative","positive",b"positive","scale",b"scale","start_time_unix_nano",b"start_time_unix_nano","sum",b"sum","time_unix_nano",b"time_unix_nano","zero_count",b"zero_count","zero_threshold",b"zero_threshold"]) -> None: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_max",b"_max"]) -> typing.Optional[typing_extensions.Literal["max"]]: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_min",b"_min"]) -> typing.Optional[typing_extensions.Literal["min"]]: ...
+ @typing.overload
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["_sum",b"_sum"]) -> typing.Optional[typing_extensions.Literal["sum"]]: ...
+global___ExponentialHistogramDataPoint = ExponentialHistogramDataPoint
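+
+# A hedged sketch of the bucket math documented on `scale` above: the base is
+# 2**(2**-scale) and bucket `index` covers (base**index, base**(index+1)].
+# Plain floats are used, so values at bucket boundaries are subject to
+# floating-point rounding.
+#
+# import math
+#
+# def bucket_index(value: float, scale: int) -> int:
+#     base = 2.0 ** (2.0 ** -scale)
+#     return math.ceil(math.log(value, base)) - 1
+#
+# bucket_index(6.0, scale=0)  # -> 2: with base=2, bucket index 2 covers (4, 8]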
+
+class SummaryDataPoint(google.protobuf.message.Message):
+ """SummaryDataPoint is a single data point in a timeseries that describes the
+ time-varying values of a Summary metric.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ class ValueAtQuantile(google.protobuf.message.Message):
+ """Represents the value at a given quantile of a distribution.
+
+ To record Min and Max values, the following conventions are used:
+ - The 1.0 quantile is equivalent to the maximum value observed.
+ - The 0.0 quantile is equivalent to the minimum value observed.
+
+ See the following issue for more context:
+ https://github.com/open-telemetry/opentelemetry-proto/issues/125
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ QUANTILE_FIELD_NUMBER: builtins.int
+ VALUE_FIELD_NUMBER: builtins.int
+ quantile: builtins.float = ...
+ """The quantile of a distribution. Must be in the interval
+ [0.0, 1.0].
+ """
+
+ value: builtins.float = ...
+ """The value at the given quantile of a distribution.
+
+ Quantile values must NOT be negative.
+ """
+
+ def __init__(self,
+ *,
+ quantile : builtins.float = ...,
+ value : builtins.float = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["quantile",b"quantile","value",b"value"]) -> None: ...
+
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ START_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ COUNT_FIELD_NUMBER: builtins.int
+ SUM_FIELD_NUMBER: builtins.int
+ QUANTILE_VALUES_FIELD_NUMBER: builtins.int
+ FLAGS_FIELD_NUMBER: builtins.int
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """The set of key/value pairs that uniquely identify the timeseries from
+ where this point belongs. The list may be empty (may contain 0 elements).
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ start_time_unix_nano: builtins.int = ...
+ """StartTimeUnixNano is optional but strongly encouraged, see the
+ the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ time_unix_nano: builtins.int = ...
+ """TimeUnixNano is required, see the detailed comments above Metric.
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ count: builtins.int = ...
+ """count is the number of values in the population. Must be non-negative."""
+
+ sum: builtins.float = ...
+ """sum of the values in the population. If count is zero then this field
+ must be zero.
+
+ Note: Sum should only be filled out when measuring non-negative discrete
+ events, and is assumed to be monotonic over the values of these events.
+ Negative events *can* be recorded, but sum should not be filled out when
+ doing so. This is specifically to enforce compatibility w/ OpenMetrics,
+ see: https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#summary
+ """
+
+ @property
+ def quantile_values(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___SummaryDataPoint.ValueAtQuantile]:
+ """(Optional) list of values at different quantiles of the distribution calculated
+ from the current snapshot. The quantiles must be strictly increasing.
+ """
+ pass
+ flags: builtins.int = ...
+ """Flags that apply to this specific data point. See DataPointFlags
+ for the available flags and their meaning.
+ """
+
+ def __init__(self,
+ *,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ start_time_unix_nano : builtins.int = ...,
+ time_unix_nano : builtins.int = ...,
+ count : builtins.int = ...,
+ sum : builtins.float = ...,
+ quantile_values : typing.Optional[typing.Iterable[global___SummaryDataPoint.ValueAtQuantile]] = ...,
+ flags : builtins.int = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","count",b"count","flags",b"flags","quantile_values",b"quantile_values","start_time_unix_nano",b"start_time_unix_nano","sum",b"sum","time_unix_nano",b"time_unix_nano"]) -> None: ...
+global___SummaryDataPoint = SummaryDataPoint
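+
+# A minimal, illustrative construction of a SummaryDataPoint (not part of the
+# generated stub), assuming the runtime metrics_pb2 module is importable. Per
+# the ValueAtQuantile comments above, quantiles 0.0 and 1.0 carry min and max.
+#
+# from opentelemetry.proto.metrics.v1.metrics_pb2 import SummaryDataPoint
+#
+# point = SummaryDataPoint(
+#     time_unix_nano=1_700_000_000_000_000_000,
+#     count=100,
+#     sum=250.0,
+#     quantile_values=[
+#         SummaryDataPoint.ValueAtQuantile(quantile=0.0, value=0.1),  # min
+#         SummaryDataPoint.ValueAtQuantile(quantile=0.5, value=2.4),  # median
+#         SummaryDataPoint.ValueAtQuantile(quantile=1.0, value=9.8),  # max
+#     ],
+# )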
+
+class Exemplar(google.protobuf.message.Message):
+ """A representation of an exemplar, which is a sample input measurement.
+ Exemplars also hold information about the environment when the measurement
+ was recorded, for example the span and trace ID of the active span when the
+ exemplar was recorded.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ FILTERED_ATTRIBUTES_FIELD_NUMBER: builtins.int
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ AS_DOUBLE_FIELD_NUMBER: builtins.int
+ AS_INT_FIELD_NUMBER: builtins.int
+ SPAN_ID_FIELD_NUMBER: builtins.int
+ TRACE_ID_FIELD_NUMBER: builtins.int
+ @property
+ def filtered_attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """The set of key/value pairs that were filtered out by the aggregator, but
+ recorded alongside the original measurement. Only key/value pairs that were
+ filtered out by the aggregator should be included.
+ """
+ pass
+ time_unix_nano: builtins.int = ...
+ """time_unix_nano is the exact time when this exemplar was recorded
+
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
+ 1970.
+ """
+
+ as_double: builtins.float = ...
+ as_int: builtins.int = ...
+ span_id: builtins.bytes = ...
+ """(Optional) Span ID of the exemplar trace.
+ span_id may be missing if the measurement is not recorded inside a trace
+ or if the trace is not sampled.
+ """
+
+ trace_id: builtins.bytes = ...
+ """(Optional) Trace ID of the exemplar trace.
+ trace_id may be missing if the measurement is not recorded inside a trace
+ or if the trace is not sampled.
+ """
+
+ def __init__(self,
+ *,
+ filtered_attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ time_unix_nano : builtins.int = ...,
+ as_double : builtins.float = ...,
+ as_int : builtins.int = ...,
+ span_id : builtins.bytes = ...,
+ trace_id : builtins.bytes = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["as_double",b"as_double","as_int",b"as_int","value",b"value"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["as_double",b"as_double","as_int",b"as_int","filtered_attributes",b"filtered_attributes","span_id",b"span_id","time_unix_nano",b"time_unix_nano","trace_id",b"trace_id","value",b"value"]) -> None: ...
+ def WhichOneof(self, oneof_group: typing_extensions.Literal["value",b"value"]) -> typing.Optional[typing_extensions.Literal["as_double","as_int"]]: ...
+global___Exemplar = Exemplar
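+
+# A short sketch of the `value` oneof on Exemplar (illustrative, assuming the
+# runtime metrics_pb2 module is importable): setting as_int clears as_double
+# and vice versa, and WhichOneof reports which member is currently set.
+#
+# from opentelemetry.proto.metrics.v1.metrics_pb2 import Exemplar
+#
+# exemplar = Exemplar(time_unix_nano=1_700_000_000_000_000_000, as_int=42)
+# exemplar.WhichOneof("value")   # -> "as_int"
+# exemplar.as_double = 4.2       # reassigning switches the oneof member
+# exemplar.WhichOneof("value")   # -> "as_double"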
diff --git a/opentelemetry-proto/src/opentelemetry/proto/resource/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/resource/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/resource/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/resource/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.py
new file mode 100644
index 0000000000..728e9114dc
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.py
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/resource/v1/resource.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.common.v1 import common_pb2 as opentelemetry_dot_proto_dot_common_dot_v1_dot_common__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n.opentelemetry/proto/resource/v1/resource.proto\x12\x1fopentelemetry.proto.resource.v1\x1a*opentelemetry/proto/common/v1/common.proto\"i\n\x08Resource\x12;\n\nattributes\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x02 \x01(\rB\x83\x01\n\"io.opentelemetry.proto.resource.v1B\rResourceProtoP\x01Z*go.opentelemetry.io/proto/otlp/resource/v1\xaa\x02\x1fOpenTelemetry.Proto.Resource.V1b\x06proto3')
+
+
+
+_RESOURCE = DESCRIPTOR.message_types_by_name['Resource']
+Resource = _reflection.GeneratedProtocolMessageType('Resource', (_message.Message,), {
+ 'DESCRIPTOR' : _RESOURCE,
+ '__module__' : 'opentelemetry.proto.resource.v1.resource_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.resource.v1.Resource)
+ })
+_sym_db.RegisterMessage(Resource)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n\"io.opentelemetry.proto.resource.v1B\rResourceProtoP\001Z*go.opentelemetry.io/proto/otlp/resource/v1\252\002\037OpenTelemetry.Proto.Resource.V1'
+ _RESOURCE._serialized_start=127
+ _RESOURCE._serialized_end=232
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.pyi
new file mode 100644
index 0000000000..f660c7f229
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/resource/v1/resource_pb2.pyi
@@ -0,0 +1,38 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.message
+import opentelemetry.proto.common.v1.common_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class Resource(google.protobuf.message.Message):
+ """Resource information."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """Set of attributes that describe the resource.
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ """dropped_attributes_count is the number of dropped attributes. If the value is 0, then
+ no attributes were dropped.
+ """
+
+ def __init__(self,
+ *,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","dropped_attributes_count",b"dropped_attributes_count"]) -> None: ...
+global___Resource = Resource
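+
+# Illustrative only: attaching attributes to a Resource via the KeyValue and
+# AnyValue messages from common_pb2, assuming the generated runtime modules
+# are importable; the keys shown here are arbitrary examples.
+#
+# from opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue
+# from opentelemetry.proto.resource.v1.resource_pb2 import Resource
+#
+# resource = Resource(
+#     attributes=[
+#         KeyValue(key="service.name", value=AnyValue(string_value="checkout")),
+#         KeyValue(key="process.pid", value=AnyValue(int_value=1234)),
+#     ],
+# )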
diff --git a/opentelemetry-proto/src/opentelemetry/proto/trace/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/trace/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/trace/v1/__init__.py b/opentelemetry-proto/src/opentelemetry/proto/trace/v1/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.py b/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.py
new file mode 100644
index 0000000000..6e80acce51
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: opentelemetry/proto/trace/v1/trace.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+from opentelemetry.proto.common.v1 import common_pb2 as opentelemetry_dot_proto_dot_common_dot_v1_dot_common__pb2
+from opentelemetry.proto.resource.v1 import resource_pb2 as opentelemetry_dot_proto_dot_resource_dot_v1_dot_resource__pb2
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(opentelemetry/proto/trace/v1/trace.proto\x12\x1copentelemetry.proto.trace.v1\x1a*opentelemetry/proto/common/v1/common.proto\x1a.opentelemetry/proto/resource/v1/resource.proto\"Q\n\nTracesData\x12\x43\n\x0eresource_spans\x18\x01 \x03(\x0b\x32+.opentelemetry.proto.trace.v1.ResourceSpans\"\xa7\x01\n\rResourceSpans\x12;\n\x08resource\x18\x01 \x01(\x0b\x32).opentelemetry.proto.resource.v1.Resource\x12=\n\x0bscope_spans\x18\x02 \x03(\x0b\x32(.opentelemetry.proto.trace.v1.ScopeSpans\x12\x12\n\nschema_url\x18\x03 \x01(\tJ\x06\x08\xe8\x07\x10\xe9\x07\"\x97\x01\n\nScopeSpans\x12\x42\n\x05scope\x18\x01 \x01(\x0b\x32\x33.opentelemetry.proto.common.v1.InstrumentationScope\x12\x31\n\x05spans\x18\x02 \x03(\x0b\x32\".opentelemetry.proto.trace.v1.Span\x12\x12\n\nschema_url\x18\x03 \x01(\t\"\xe6\x07\n\x04Span\x12\x10\n\x08trace_id\x18\x01 \x01(\x0c\x12\x0f\n\x07span_id\x18\x02 \x01(\x0c\x12\x13\n\x0btrace_state\x18\x03 \x01(\t\x12\x16\n\x0eparent_span_id\x18\x04 \x01(\x0c\x12\x0c\n\x04name\x18\x05 \x01(\t\x12\x39\n\x04kind\x18\x06 \x01(\x0e\x32+.opentelemetry.proto.trace.v1.Span.SpanKind\x12\x1c\n\x14start_time_unix_nano\x18\x07 \x01(\x06\x12\x1a\n\x12\x65nd_time_unix_nano\x18\x08 \x01(\x06\x12;\n\nattributes\x18\t \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\n \x01(\r\x12\x38\n\x06\x65vents\x18\x0b \x03(\x0b\x32(.opentelemetry.proto.trace.v1.Span.Event\x12\x1c\n\x14\x64ropped_events_count\x18\x0c \x01(\r\x12\x36\n\x05links\x18\r \x03(\x0b\x32\'.opentelemetry.proto.trace.v1.Span.Link\x12\x1b\n\x13\x64ropped_links_count\x18\x0e \x01(\r\x12\x34\n\x06status\x18\x0f \x01(\x0b\x32$.opentelemetry.proto.trace.v1.Status\x1a\x8c\x01\n\x05\x45vent\x12\x16\n\x0etime_unix_nano\x18\x01 \x01(\x06\x12\x0c\n\x04name\x18\x02 \x01(\t\x12;\n\nattributes\x18\x03 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x04 \x01(\r\x1a\x9d\x01\n\x04Link\x12\x10\n\x08trace_id\x18\x01 \x01(\x0c\x12\x0f\n\x07span_id\x18\x02 \x01(\x0c\x12\x13\n\x0btrace_state\x18\x03 \x01(\t\x12;\n\nattributes\x18\x04 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x05 \x01(\r\"\x99\x01\n\x08SpanKind\x12\x19\n\x15SPAN_KIND_UNSPECIFIED\x10\x00\x12\x16\n\x12SPAN_KIND_INTERNAL\x10\x01\x12\x14\n\x10SPAN_KIND_SERVER\x10\x02\x12\x14\n\x10SPAN_KIND_CLIENT\x10\x03\x12\x16\n\x12SPAN_KIND_PRODUCER\x10\x04\x12\x16\n\x12SPAN_KIND_CONSUMER\x10\x05\"\xae\x01\n\x06Status\x12\x0f\n\x07message\x18\x02 \x01(\t\x12=\n\x04\x63ode\x18\x03 \x01(\x0e\x32/.opentelemetry.proto.trace.v1.Status.StatusCode\"N\n\nStatusCode\x12\x15\n\x11STATUS_CODE_UNSET\x10\x00\x12\x12\n\x0eSTATUS_CODE_OK\x10\x01\x12\x15\n\x11STATUS_CODE_ERROR\x10\x02J\x04\x08\x01\x10\x02\x42w\n\x1fio.opentelemetry.proto.trace.v1B\nTraceProtoP\x01Z\'go.opentelemetry.io/proto/otlp/trace/v1\xaa\x02\x1cOpenTelemetry.Proto.Trace.V1b\x06proto3')
+
+
+
+_TRACESDATA = DESCRIPTOR.message_types_by_name['TracesData']
+_RESOURCESPANS = DESCRIPTOR.message_types_by_name['ResourceSpans']
+_SCOPESPANS = DESCRIPTOR.message_types_by_name['ScopeSpans']
+_SPAN = DESCRIPTOR.message_types_by_name['Span']
+_SPAN_EVENT = _SPAN.nested_types_by_name['Event']
+_SPAN_LINK = _SPAN.nested_types_by_name['Link']
+_STATUS = DESCRIPTOR.message_types_by_name['Status']
+_SPAN_SPANKIND = _SPAN.enum_types_by_name['SpanKind']
+_STATUS_STATUSCODE = _STATUS.enum_types_by_name['StatusCode']
+TracesData = _reflection.GeneratedProtocolMessageType('TracesData', (_message.Message,), {
+ 'DESCRIPTOR' : _TRACESDATA,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.TracesData)
+ })
+_sym_db.RegisterMessage(TracesData)
+
+ResourceSpans = _reflection.GeneratedProtocolMessageType('ResourceSpans', (_message.Message,), {
+ 'DESCRIPTOR' : _RESOURCESPANS,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.ResourceSpans)
+ })
+_sym_db.RegisterMessage(ResourceSpans)
+
+ScopeSpans = _reflection.GeneratedProtocolMessageType('ScopeSpans', (_message.Message,), {
+ 'DESCRIPTOR' : _SCOPESPANS,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.ScopeSpans)
+ })
+_sym_db.RegisterMessage(ScopeSpans)
+
+Span = _reflection.GeneratedProtocolMessageType('Span', (_message.Message,), {
+
+ 'Event' : _reflection.GeneratedProtocolMessageType('Event', (_message.Message,), {
+ 'DESCRIPTOR' : _SPAN_EVENT,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.Span.Event)
+ })
+ ,
+
+ 'Link' : _reflection.GeneratedProtocolMessageType('Link', (_message.Message,), {
+ 'DESCRIPTOR' : _SPAN_LINK,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.Span.Link)
+ })
+ ,
+ 'DESCRIPTOR' : _SPAN,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.Span)
+ })
+_sym_db.RegisterMessage(Span)
+_sym_db.RegisterMessage(Span.Event)
+_sym_db.RegisterMessage(Span.Link)
+
+Status = _reflection.GeneratedProtocolMessageType('Status', (_message.Message,), {
+ 'DESCRIPTOR' : _STATUS,
+ '__module__' : 'opentelemetry.proto.trace.v1.trace_pb2'
+ # @@protoc_insertion_point(class_scope:opentelemetry.proto.trace.v1.Status)
+ })
+_sym_db.RegisterMessage(Status)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ DESCRIPTOR._serialized_options = b'\n\037io.opentelemetry.proto.trace.v1B\nTraceProtoP\001Z\'go.opentelemetry.io/proto/otlp/trace/v1\252\002\034OpenTelemetry.Proto.Trace.V1'
+ _TRACESDATA._serialized_start=166
+ _TRACESDATA._serialized_end=247
+ _RESOURCESPANS._serialized_start=250
+ _RESOURCESPANS._serialized_end=417
+ _SCOPESPANS._serialized_start=420
+ _SCOPESPANS._serialized_end=571
+ _SPAN._serialized_start=574
+ _SPAN._serialized_end=1572
+ _SPAN_EVENT._serialized_start=1116
+ _SPAN_EVENT._serialized_end=1256
+ _SPAN_LINK._serialized_start=1259
+ _SPAN_LINK._serialized_end=1416
+ _SPAN_SPANKIND._serialized_start=1419
+ _SPAN_SPANKIND._serialized_end=1572
+ _STATUS._serialized_start=1575
+ _STATUS._serialized_end=1749
+ _STATUS_STATUSCODE._serialized_start=1665
+ _STATUS_STATUSCODE._serialized_end=1743
+# @@protoc_insertion_point(module_scope)
diff --git a/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.pyi b/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.pyi
new file mode 100644
index 0000000000..52052ff7e9
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/trace/v1/trace_pb2.pyi
@@ -0,0 +1,474 @@
+"""
+@generated by mypy-protobuf. Do not edit manually!
+isort:skip_file
+"""
+import builtins
+import google.protobuf.descriptor
+import google.protobuf.internal.containers
+import google.protobuf.internal.enum_type_wrapper
+import google.protobuf.message
+import opentelemetry.proto.common.v1.common_pb2
+import opentelemetry.proto.resource.v1.resource_pb2
+import typing
+import typing_extensions
+
+DESCRIPTOR: google.protobuf.descriptor.FileDescriptor = ...
+
+class TracesData(google.protobuf.message.Message):
+ """TracesData represents the traces data that can be stored in a persistent storage,
+ OR can be embedded by other protocols that transfer OTLP traces data but do
+ not implement the OTLP protocol.
+
+ The main difference between this message and the collector protocol is that
+ in this message there will not be any "control" or "metadata" specific to
+ the OTLP protocol.
+
+ When new fields are added into this message, the OTLP request MUST be updated
+ as well.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_SPANS_FIELD_NUMBER: builtins.int
+ @property
+ def resource_spans(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ResourceSpans]:
+ """An array of ResourceSpans.
+ For data coming from a single resource this array will typically contain
+ one element. Intermediary nodes that receive data from multiple origins
+ typically batch the data before forwarding further and in that case this
+ array will contain multiple elements.
+ """
+ pass
+ def __init__(self,
+ *,
+ resource_spans : typing.Optional[typing.Iterable[global___ResourceSpans]] = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource_spans",b"resource_spans"]) -> None: ...
+global___TracesData = TracesData
+
+class ResourceSpans(google.protobuf.message.Message):
+ """A collection of ScopeSpans from a Resource."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ RESOURCE_FIELD_NUMBER: builtins.int
+ SCOPE_SPANS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def resource(self) -> opentelemetry.proto.resource.v1.resource_pb2.Resource:
+ """The resource for the spans in this message.
+ If this field is not set then no resource info is known.
+ """
+ pass
+ @property
+ def scope_spans(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ScopeSpans]:
+ """A list of ScopeSpans that originate from a resource."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to the data in the "resource" field. It does not apply
+ to the data in the "scope_spans" field which have their own schema_url field.
+ """
+
+ def __init__(self,
+ *,
+ resource : typing.Optional[opentelemetry.proto.resource.v1.resource_pb2.Resource] = ...,
+ scope_spans : typing.Optional[typing.Iterable[global___ScopeSpans]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["resource",b"resource"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["resource",b"resource","schema_url",b"schema_url","scope_spans",b"scope_spans"]) -> None: ...
+global___ResourceSpans = ResourceSpans
+
+class ScopeSpans(google.protobuf.message.Message):
+ """A collection of Spans produced by an InstrumentationScope."""
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ SCOPE_FIELD_NUMBER: builtins.int
+ SPANS_FIELD_NUMBER: builtins.int
+ SCHEMA_URL_FIELD_NUMBER: builtins.int
+ @property
+ def scope(self) -> opentelemetry.proto.common.v1.common_pb2.InstrumentationScope:
+ """The instrumentation scope information for the spans in this message.
+ Semantically, when InstrumentationScope isn't set, it is equivalent to
+ an empty instrumentation scope name (unknown).
+ """
+ pass
+ @property
+ def spans(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Span]:
+ """A list of Spans that originate from an instrumentation scope."""
+ pass
+ schema_url: typing.Text = ...
+ """This schema_url applies to all spans and span events in the "spans" field."""
+
+ def __init__(self,
+ *,
+ scope : typing.Optional[opentelemetry.proto.common.v1.common_pb2.InstrumentationScope] = ...,
+ spans : typing.Optional[typing.Iterable[global___Span]] = ...,
+ schema_url : typing.Text = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["scope",b"scope"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["schema_url",b"schema_url","scope",b"scope","spans",b"spans"]) -> None: ...
+global___ScopeSpans = ScopeSpans
+
+class Span(google.protobuf.message.Message):
+ """A Span represents a single operation performed by a single component of the system.
+
+ The next available field id is 17.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ class SpanKind(_SpanKind, metaclass=_SpanKindEnumTypeWrapper):
+ """SpanKind is the type of span. Can be used to specify additional relationships between spans
+ in addition to a parent/child relationship.
+ """
+ pass
+ class _SpanKind:
+ V = typing.NewType('V', builtins.int)
+ class _SpanKindEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_SpanKind.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ SPAN_KIND_UNSPECIFIED = Span.SpanKind.V(0)
+ """Unspecified. Do NOT use as default.
+ Implementations MAY assume SpanKind to be INTERNAL when receiving UNSPECIFIED.
+ """
+
+ SPAN_KIND_INTERNAL = Span.SpanKind.V(1)
+ """Indicates that the span represents an internal operation within an application,
+ as opposed to an operation happening at the boundaries. Default value.
+ """
+
+ SPAN_KIND_SERVER = Span.SpanKind.V(2)
+ """Indicates that the span covers server-side handling of an RPC or other
+ remote network request.
+ """
+
+ SPAN_KIND_CLIENT = Span.SpanKind.V(3)
+ """Indicates that the span describes a request to some remote service."""
+
+ SPAN_KIND_PRODUCER = Span.SpanKind.V(4)
+ """Indicates that the span describes a producer sending a message to a broker.
+ Unlike CLIENT and SERVER, there is often no direct critical path latency relationship
+ between producer and consumer spans. A PRODUCER span ends when the message is
+ accepted by the broker, while the logical processing of the message might span
+ a much longer time.
+ """
+
+ SPAN_KIND_CONSUMER = Span.SpanKind.V(5)
+ """Indicates that the span describes consumer receiving a message from a broker.
+ Like the PRODUCER kind, there is often no direct critical path latency relationship
+ between producer and consumer spans.
+ """
+
+
+ SPAN_KIND_UNSPECIFIED = Span.SpanKind.V(0)
+ """Unspecified. Do NOT use as default.
+ Implementations MAY assume SpanKind to be INTERNAL when receiving UNSPECIFIED.
+ """
+
+ SPAN_KIND_INTERNAL = Span.SpanKind.V(1)
+ """Indicates that the span represents an internal operation within an application,
+ as opposed to an operation happening at the boundaries. Default value.
+ """
+
+ SPAN_KIND_SERVER = Span.SpanKind.V(2)
+ """Indicates that the span covers server-side handling of an RPC or other
+ remote network request.
+ """
+
+ SPAN_KIND_CLIENT = Span.SpanKind.V(3)
+ """Indicates that the span describes a request to some remote service."""
+
+ SPAN_KIND_PRODUCER = Span.SpanKind.V(4)
+ """Indicates that the span describes a producer sending a message to a broker.
+ Unlike CLIENT and SERVER, there is often no direct critical path latency relationship
+ between producer and consumer spans. A PRODUCER span ends when the message is
+ accepted by the broker, while the logical processing of the message might span
+ a much longer time.
+ """
+
+ SPAN_KIND_CONSUMER = Span.SpanKind.V(5)
+ """Indicates that the span describes consumer receiving a message from a broker.
+ Like the PRODUCER kind, there is often no direct critical path latency relationship
+ between producer and consumer spans.
+ """
+
+
+ class Event(google.protobuf.message.Message):
+ """Event is a time-stamped annotation of the span, consisting of user-supplied
+ text description and key-value pairs.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ NAME_FIELD_NUMBER: builtins.int
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ time_unix_nano: builtins.int = ...
+ """time_unix_nano is the time the event occurred."""
+
+ name: typing.Text = ...
+ """name of the event.
+ This field is semantically required to be set to a non-empty string.
+ """
+
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """attributes is a collection of attribute key/value pairs on the event.
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ """dropped_attributes_count is the number of dropped attributes. If the value is 0,
+ then no attributes were dropped.
+ """
+
+ def __init__(self,
+ *,
+ time_unix_nano : builtins.int = ...,
+ name : typing.Text = ...,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","dropped_attributes_count",b"dropped_attributes_count","name",b"name","time_unix_nano",b"time_unix_nano"]) -> None: ...
+
+ class Link(google.protobuf.message.Message):
+ """A pointer from the current span to another span in the same trace or in a
+ different trace. For example, this can be used in batching operations,
+ where a single batch handler processes multiple requests from different
+ traces or when the handler receives a request from a different project.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ TRACE_ID_FIELD_NUMBER: builtins.int
+ SPAN_ID_FIELD_NUMBER: builtins.int
+ TRACE_STATE_FIELD_NUMBER: builtins.int
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ trace_id: builtins.bytes = ...
+ """A unique identifier of a trace that this linked span is part of. The ID is a
+ 16-byte array.
+ """
+
+ span_id: builtins.bytes = ...
+ """A unique identifier for the linked span. The ID is an 8-byte array."""
+
+ trace_state: typing.Text = ...
+ """The trace_state associated with the link."""
+
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """attributes is a collection of attribute key/value pairs on the link.
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ """dropped_attributes_count is the number of dropped attributes. If the value is 0,
+ then no attributes were dropped.
+ """
+
+ def __init__(self,
+ *,
+ trace_id : builtins.bytes = ...,
+ span_id : builtins.bytes = ...,
+ trace_state : typing.Text = ...,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","dropped_attributes_count",b"dropped_attributes_count","span_id",b"span_id","trace_id",b"trace_id","trace_state",b"trace_state"]) -> None: ...
+
+ TRACE_ID_FIELD_NUMBER: builtins.int
+ SPAN_ID_FIELD_NUMBER: builtins.int
+ TRACE_STATE_FIELD_NUMBER: builtins.int
+ PARENT_SPAN_ID_FIELD_NUMBER: builtins.int
+ NAME_FIELD_NUMBER: builtins.int
+ KIND_FIELD_NUMBER: builtins.int
+ START_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ END_TIME_UNIX_NANO_FIELD_NUMBER: builtins.int
+ ATTRIBUTES_FIELD_NUMBER: builtins.int
+ DROPPED_ATTRIBUTES_COUNT_FIELD_NUMBER: builtins.int
+ EVENTS_FIELD_NUMBER: builtins.int
+ DROPPED_EVENTS_COUNT_FIELD_NUMBER: builtins.int
+ LINKS_FIELD_NUMBER: builtins.int
+ DROPPED_LINKS_COUNT_FIELD_NUMBER: builtins.int
+ STATUS_FIELD_NUMBER: builtins.int
+ trace_id: builtins.bytes = ...
+ """A unique identifier for a trace. All spans from the same trace share
+ the same `trace_id`. The ID is a 16-byte array. An ID with all zeroes OR
+ of length other than 16 bytes is considered invalid (empty string in OTLP/JSON
+ is zero-length and thus is also invalid).
+
+ This field is required.
+ """
+
+ span_id: builtins.bytes = ...
+ """A unique identifier for a span within a trace, assigned when the span
+ is created. The ID is an 8-byte array. An ID with all zeroes OR of length
+ other than 8 bytes is considered invalid (empty string in OTLP/JSON
+ is zero-length and thus is also invalid).
+
+ This field is required.
+ """
+
+ trace_state: typing.Text = ...
+ """trace_state conveys information about request position in multiple distributed tracing graphs.
+ It is a trace_state in w3c-trace-context format: https://www.w3.org/TR/trace-context/#tracestate-header
+ See also https://github.com/w3c/distributed-tracing for more details about this field.
+ """
+
+ parent_span_id: builtins.bytes = ...
+ """The `span_id` of this span's parent span. If this is a root span, then this
+ field must be empty. The ID is an 8-byte array.
+ """
+
+ name: typing.Text = ...
+ """A description of the span's operation.
+
+ For example, the name can be a qualified method name or a file name
+ and a line number where the operation is called. A best practice is to use
+ the same display name at the same call point in an application.
+ This makes it easier to correlate spans in different traces.
+
+ This field is semantically required to be set to a non-empty string.
+ An empty value is equivalent to an unknown span name.
+
+ This field is required.
+ """
+
+ kind: global___Span.SpanKind.V = ...
+ """Distinguishes between spans generated in a particular context. For example,
+ two spans with the same name may be distinguished using `CLIENT` (caller)
+ and `SERVER` (callee) to identify queueing latency associated with the span.
+ """
+
+ start_time_unix_nano: builtins.int = ...
+ """start_time_unix_nano is the start time of the span. On the client side, this is the time
+ kept by the local machine where the span execution starts. On the server side, this
+ is the time when the server's application handler starts running.
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
+
+ This field is semantically required and it is expected that end_time >= start_time.
+ """
+
+ end_time_unix_nano: builtins.int = ...
+ """end_time_unix_nano is the end time of the span. On the client side, this is the time
+ kept by the local machine where the span execution ends. On the server side, this
+ is the time when the server application handler stops running.
+ Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
+
+ This field is semantically required and it is expected that end_time >= start_time.
+ """
+
+ @property
+ def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[opentelemetry.proto.common.v1.common_pb2.KeyValue]:
+ """attributes is a collection of key/value pairs. Note, global attributes
+ like server name can be set using the resource API. Examples of attributes:
+
+ "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
+ "/http/server_latency": 300
+ "example.com/myattribute": true
+ "example.com/score": 10.239
+
+ The OpenTelemetry API specification further restricts the allowed value types:
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute
+ Attribute keys MUST be unique (it is not allowed to have more than one
+ attribute with the same key).
+ """
+ pass
+ dropped_attributes_count: builtins.int = ...
+ """dropped_attributes_count is the number of attributes that were discarded. Attributes
+ can be discarded because their keys are too long or because there are too many
+ attributes. If this value is 0, then no attributes were dropped.
+ """
+
+ @property
+ def events(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Span.Event]:
+ """events is a collection of Event items."""
+ pass
+ dropped_events_count: builtins.int = ...
+ """dropped_events_count is the number of dropped events. If the value is 0, then no
+ events were dropped.
+ """
+
+ @property
+ def links(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Span.Link]:
+ """links is a collection of Links, which are references from this span to a span
+ in the same or different trace.
+ """
+ pass
+ dropped_links_count: builtins.int = ...
+ """dropped_links_count is the number of dropped links after the maximum size was
+ enforced. If this value is 0, then no links were dropped.
+ """
+
+ @property
+ def status(self) -> global___Status:
+ """An optional final status for this span. Semantically when Status isn't set, it means
+ span's status code is unset, i.e. assume STATUS_CODE_UNSET (code = 0).
+ """
+ pass
+ def __init__(self,
+ *,
+ trace_id : builtins.bytes = ...,
+ span_id : builtins.bytes = ...,
+ trace_state : typing.Text = ...,
+ parent_span_id : builtins.bytes = ...,
+ name : typing.Text = ...,
+ kind : global___Span.SpanKind.V = ...,
+ start_time_unix_nano : builtins.int = ...,
+ end_time_unix_nano : builtins.int = ...,
+ attributes : typing.Optional[typing.Iterable[opentelemetry.proto.common.v1.common_pb2.KeyValue]] = ...,
+ dropped_attributes_count : builtins.int = ...,
+ events : typing.Optional[typing.Iterable[global___Span.Event]] = ...,
+ dropped_events_count : builtins.int = ...,
+ links : typing.Optional[typing.Iterable[global___Span.Link]] = ...,
+ dropped_links_count : builtins.int = ...,
+ status : typing.Optional[global___Status] = ...,
+ ) -> None: ...
+ def HasField(self, field_name: typing_extensions.Literal["status",b"status"]) -> builtins.bool: ...
+ def ClearField(self, field_name: typing_extensions.Literal["attributes",b"attributes","dropped_attributes_count",b"dropped_attributes_count","dropped_events_count",b"dropped_events_count","dropped_links_count",b"dropped_links_count","end_time_unix_nano",b"end_time_unix_nano","events",b"events","kind",b"kind","links",b"links","name",b"name","parent_span_id",b"parent_span_id","span_id",b"span_id","start_time_unix_nano",b"start_time_unix_nano","status",b"status","trace_id",b"trace_id","trace_state",b"trace_state"]) -> None: ...
+global___Span = Span
+
+class Status(google.protobuf.message.Message):
+ """The Status type defines a logical error model that is suitable for different
+ programming environments, including REST APIs and RPC APIs.
+ """
+ DESCRIPTOR: google.protobuf.descriptor.Descriptor = ...
+ class StatusCode(_StatusCode, metaclass=_StatusCodeEnumTypeWrapper):
+ """For the semantics of status codes see
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status
+ """
+ pass
+ class _StatusCode:
+ V = typing.NewType('V', builtins.int)
+ class _StatusCodeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_StatusCode.V], builtins.type):
+ DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor = ...
+ STATUS_CODE_UNSET = Status.StatusCode.V(0)
+ """The default status."""
+
+ STATUS_CODE_OK = Status.StatusCode.V(1)
+ """The Span has been validated by an Application developer or Operator to
+ have completed successfully.
+ """
+
+ STATUS_CODE_ERROR = Status.StatusCode.V(2)
+ """The Span contains an error."""
+
+
+ STATUS_CODE_UNSET = Status.StatusCode.V(0)
+ """The default status."""
+
+ STATUS_CODE_OK = Status.StatusCode.V(1)
+ """The Span has been validated by an Application developer or Operator to
+ have completed successfully.
+ """
+
+ STATUS_CODE_ERROR = Status.StatusCode.V(2)
+ """The Span contains an error."""
+
+
+ MESSAGE_FIELD_NUMBER: builtins.int
+ CODE_FIELD_NUMBER: builtins.int
+ message: typing.Text = ...
+ """A developer-facing human readable error message."""
+
+ code: global___Status.StatusCode.V = ...
+ """The status code."""
+
+ def __init__(self,
+ *,
+ message : typing.Text = ...,
+ code : global___Status.StatusCode.V = ...,
+ ) -> None: ...
+ def ClearField(self, field_name: typing_extensions.Literal["code",b"code","message",b"message"]) -> None: ...
+global___Status = Status
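+
+# A hedged end-to-end sketch (not part of the generated stub): a Span that
+# satisfies the ID-length rules documented above, assuming the generated
+# runtime module trace_pb2 sits next to this stub.
+#
+# import os
+# from opentelemetry.proto.trace.v1.trace_pb2 import Span, Status
+#
+# span = Span(
+#     trace_id=os.urandom(16),   # 16-byte, non-zero trace ID
+#     span_id=os.urandom(8),     # 8-byte, non-zero span ID
+#     name="GET /checkout",
+#     kind=Span.SPAN_KIND_SERVER,
+#     start_time_unix_nano=1_700_000_000_000_000_000,
+#     end_time_unix_nano=1_700_000_000_500_000_000,  # end >= start
+#     status=Status(code=Status.STATUS_CODE_OK),
+# )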
diff --git a/opentelemetry-proto/src/opentelemetry/proto/version.py b/opentelemetry-proto/src/opentelemetry/proto/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/opentelemetry-proto/src/opentelemetry/proto/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/opentelemetry-proto/tests/__init__.py b/opentelemetry-proto/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-proto/tests/test_proto.py b/opentelemetry-proto/tests/test_proto.py
new file mode 100644
index 0000000000..9670be4627
--- /dev/null
+++ b/opentelemetry-proto/tests/test_proto.py
@@ -0,0 +1,24 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+from importlib.util import find_spec
+from unittest import TestCase
+
+
+class TestInstrumentor(TestCase):
+ def test_proto(self):
+
+ if find_spec("opentelemetry.proto") is None:
+ self.fail("opentelemetry-proto not installed")
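+
+    def test_resource_round_trip(self):
+        # A minimal round-trip sketch: serialize a Resource and parse it back.
+        # Assumes the generated runtime modules added above are importable;
+        # the attribute used here is purely illustrative.
+        from opentelemetry.proto.common.v1.common_pb2 import AnyValue, KeyValue
+        from opentelemetry.proto.resource.v1.resource_pb2 import Resource
+
+        resource = Resource(
+            attributes=[
+                KeyValue(key="service.name", value=AnyValue(string_value="demo"))
+            ]
+        )
+        self.assertEqual(Resource.FromString(resource.SerializeToString()), resource)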
diff --git a/opentelemetry-python b/opentelemetry-python
new file mode 160000
index 0000000000..975733c714
--- /dev/null
+++ b/opentelemetry-python
@@ -0,0 +1 @@
+Subproject commit 975733c71473cddddd0859c6fcbd2b02405f7e12
diff --git a/opentelemetry-sdk/LICENSE b/opentelemetry-sdk/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/opentelemetry-sdk/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/opentelemetry-sdk/README.rst b/opentelemetry-sdk/README.rst
new file mode 100644
index 0000000000..e2bc0f6a72
--- /dev/null
+++ b/opentelemetry-sdk/README.rst
@@ -0,0 +1,19 @@
+OpenTelemetry Python SDK
+============================================================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-sdk.svg
+ :target: https://pypi.org/project/opentelemetry-sdk/
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-sdk
+
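+Usage
+-----
+
+A minimal sketch of setting up the SDK with the bundled console span
+exporter (illustrative; see the project documentation for full guidance):
+
+::
+
+    from opentelemetry import trace
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.export import (
+        BatchSpanProcessor,
+        ConsoleSpanExporter,
+    )
+
+    trace.set_tracer_provider(TracerProvider())
+    trace.get_tracer_provider().add_span_processor(
+        BatchSpanProcessor(ConsoleSpanExporter())
+    )
+
+    tracer = trace.get_tracer(__name__)
+    with tracer.start_as_current_span("example-span"):
+        print("hello")
+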
+References
+----------
+
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/opentelemetry-sdk/pyproject.toml b/opentelemetry-sdk/pyproject.toml
new file mode 100644
index 0000000000..925eadb2a0
--- /dev/null
+++ b/opentelemetry-sdk/pyproject.toml
@@ -0,0 +1,86 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-sdk"
+dynamic = ["version"]
+description = "OpenTelemetry Python SDK"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-api == 1.23.0.dev",
+ "opentelemetry-semantic-conventions == 0.44b0.dev",
+ "typing-extensions >= 3.7.4",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_environment_variables]
+sdk = "opentelemetry.sdk.environment_variables"
+
+[project.entry-points.opentelemetry_id_generator]
+random = "opentelemetry.sdk.trace.id_generator:RandomIdGenerator"
+
+[project.entry-points.opentelemetry_traces_sampler]
+always_on = "opentelemetry.sdk.trace.sampling:_AlwaysOn"
+always_off = "opentelemetry.sdk.trace.sampling:_AlwaysOff"
+parentbased_always_on = "opentelemetry.sdk.trace.sampling:_ParentBasedAlwaysOn"
+parentbased_always_off = "opentelemetry.sdk.trace.sampling:_ParentBasedAlwaysOff"
+traceidratio = "opentelemetry.sdk.trace.sampling:TraceIdRatioBased"
+parentbased_traceidratio = "opentelemetry.sdk.trace.sampling:ParentBasedTraceIdRatio"
+
+[project.entry-points.opentelemetry_logger_provider]
+sdk_logger_provider = "opentelemetry.sdk._logs:LoggerProvider"
+
+[project.entry-points.opentelemetry_logs_exporter]
+console = "opentelemetry.sdk._logs.export:ConsoleLogExporter"
+
+[project.entry-points.opentelemetry_meter_provider]
+sdk_meter_provider = "opentelemetry.sdk.metrics:MeterProvider"
+
+[project.entry-points.opentelemetry_metrics_exporter]
+console = "opentelemetry.sdk.metrics.export:ConsoleMetricExporter"
+
+[project.entry-points.opentelemetry_tracer_provider]
+sdk_tracer_provider = "opentelemetry.sdk.trace:TracerProvider"
+
+[project.entry-points.opentelemetry_traces_exporter]
+console = "opentelemetry.sdk.trace.export:ConsoleSpanExporter"
+
+[project.entry-points.opentelemetry_resource_detector]
+otel = "opentelemetry.sdk.resources:OTELResourceDetector"
+process = "opentelemetry.sdk.resources:ProcessResourceDetector"
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-sdk"
+
+[tool.hatch.version]
+path = "src/opentelemetry/sdk/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/__init__.pyi b/opentelemetry-sdk/src/opentelemetry/sdk/__init__.pyi
new file mode 100644
index 0000000000..e57edc0f58
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/__init__.pyi
@@ -0,0 +1,18 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The OpenTelemetry SDK package is an implementation of the OpenTelemetry
+API.
+"""
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py
new file mode 100644
index 0000000000..33c5147a59
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_configuration/__init__.py
@@ -0,0 +1,422 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+"""
+OpenTelemetry SDK Configurator for Easy Instrumentation with Distros
+"""
+
+import logging
+import os
+from abc import ABC, abstractmethod
+from os import environ
+from typing import Callable, Dict, List, Optional, Sequence, Tuple, Type, Union
+
+from typing_extensions import Literal
+
+from opentelemetry._logs import set_logger_provider
+from opentelemetry.environment_variables import (
+ OTEL_LOGS_EXPORTER,
+ OTEL_METRICS_EXPORTER,
+ OTEL_PYTHON_ID_GENERATOR,
+ OTEL_TRACES_EXPORTER,
+)
+from opentelemetry.metrics import set_meter_provider
+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
+from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, LogExporter
+from opentelemetry.sdk.environment_variables import (
+ _OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED,
+ OTEL_EXPORTER_OTLP_LOGS_PROTOCOL,
+ OTEL_EXPORTER_OTLP_METRICS_PROTOCOL,
+ OTEL_EXPORTER_OTLP_PROTOCOL,
+ OTEL_EXPORTER_OTLP_TRACES_PROTOCOL,
+ OTEL_TRACES_SAMPLER,
+ OTEL_TRACES_SAMPLER_ARG,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ MetricExporter,
+ MetricReader,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter
+from opentelemetry.sdk.trace.id_generator import IdGenerator
+from opentelemetry.sdk.trace.sampling import Sampler
+from opentelemetry.semconv.resource import ResourceAttributes
+from opentelemetry.trace import set_tracer_provider
+from opentelemetry.util._importlib_metadata import entry_points
+
+_EXPORTER_OTLP = "otlp"
+_EXPORTER_OTLP_PROTO_GRPC = "otlp_proto_grpc"
+_EXPORTER_OTLP_PROTO_HTTP = "otlp_proto_http"
+
+_EXPORTER_BY_OTLP_PROTOCOL = {
+ "grpc": _EXPORTER_OTLP_PROTO_GRPC,
+ "http/protobuf": _EXPORTER_OTLP_PROTO_HTTP,
+}
+
+_EXPORTER_ENV_BY_SIGNAL_TYPE = {
+ "traces": OTEL_TRACES_EXPORTER,
+ "metrics": OTEL_METRICS_EXPORTER,
+ "logs": OTEL_LOGS_EXPORTER,
+}
+
+_PROTOCOL_ENV_BY_SIGNAL_TYPE = {
+ "traces": OTEL_EXPORTER_OTLP_TRACES_PROTOCOL,
+ "metrics": OTEL_EXPORTER_OTLP_METRICS_PROTOCOL,
+ "logs": OTEL_EXPORTER_OTLP_LOGS_PROTOCOL,
+}
+
+_RANDOM_ID_GENERATOR = "random"
+_DEFAULT_ID_GENERATOR = _RANDOM_ID_GENERATOR
+
+_OTEL_SAMPLER_ENTRY_POINT_GROUP = "opentelemetry_traces_sampler"
+
+_logger = logging.getLogger(__name__)
+
+
+def _import_config_components(
+ selected_components: List[str], entry_point_name: str
+) -> Sequence[Tuple[str, object]]:
+
+ component_implementations = []
+
+ for selected_component in selected_components:
+ try:
+ component_implementations.append(
+ (
+ selected_component,
+ next(
+ iter(
+ entry_points(
+ group=entry_point_name, name=selected_component
+ )
+ )
+ ).load(),
+ )
+ )
+ except KeyError:
+
+ raise RuntimeError(
+ f"Requested entry point '{entry_point_name}' not found"
+ )
+
+ except StopIteration:
+
+ raise RuntimeError(
+ f"Requested component '{selected_component}' not found in "
+ f"entry point '{entry_point_name}'"
+ )
+
+ return component_implementations
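+
+# Illustrative example (assuming the "console" entry point shipped with
+# this package is installed):
+#
+#     _import_config_components(["console"], "opentelemetry_traces_exporter")
+#     # -> [("console", ConsoleSpanExporter)]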
+
+
+def _get_sampler() -> Optional[str]:
+ return environ.get(OTEL_TRACES_SAMPLER, None)
+
+
+def _get_id_generator() -> str:
+ return environ.get(OTEL_PYTHON_ID_GENERATOR, _DEFAULT_ID_GENERATOR)
+
+
+def _get_exporter_entry_point(
+ exporter_name: str, signal_type: Literal["traces", "metrics", "logs"]
+):
+ if exporter_name not in (
+ _EXPORTER_OTLP,
+ _EXPORTER_OTLP_PROTO_GRPC,
+ _EXPORTER_OTLP_PROTO_HTTP,
+ ):
+ return exporter_name
+
+ # Checking env vars for OTLP protocol (grpc/http).
+ otlp_protocol = environ.get(
+ _PROTOCOL_ENV_BY_SIGNAL_TYPE[signal_type]
+ ) or environ.get(OTEL_EXPORTER_OTLP_PROTOCOL)
+
+ if not otlp_protocol:
+ if exporter_name == _EXPORTER_OTLP:
+ return _EXPORTER_OTLP_PROTO_GRPC
+ return exporter_name
+
+ otlp_protocol = otlp_protocol.strip()
+
+ if exporter_name == _EXPORTER_OTLP:
+ if otlp_protocol not in _EXPORTER_BY_OTLP_PROTOCOL:
+ # Invalid value was set by the env var
+ raise RuntimeError(
+ f"Unsupported OTLP protocol '{otlp_protocol}' is configured"
+ )
+
+ return _EXPORTER_BY_OTLP_PROTOCOL[otlp_protocol]
+
+ # grpc/http already specified by exporter_name, only add a warning in case
+ # of a conflict.
+ exporter_name_by_env = _EXPORTER_BY_OTLP_PROTOCOL.get(otlp_protocol)
+ if exporter_name_by_env and exporter_name != exporter_name_by_env:
+ _logger.warning(
+ "Conflicting values for %s OTLP exporter protocol, using '%s'",
+ signal_type,
+ exporter_name,
+ )
+
+ return exporter_name
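+
+# Resolution examples for the function above (illustrative):
+# - OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf:
+#       _get_exporter_entry_point("otlp", "traces") -> "otlp_proto_http"
+# - no protocol configured:
+#       _get_exporter_entry_point("otlp", "traces") -> "otlp_proto_grpc"
+# - conflicting settings (exporter "otlp_proto_http" with protocol "grpc")
+#   keep the explicit exporter name and log a warning.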
+
+
+def _get_exporter_names(
+ signal_type: Literal["traces", "metrics", "logs"]
+) -> Sequence[str]:
+ names = environ.get(_EXPORTER_ENV_BY_SIGNAL_TYPE.get(signal_type, ""))
+
+ if not names or names.lower().strip() == "none":
+ return []
+
+ return [
+ _get_exporter_entry_point(_exporter.strip(), signal_type)
+ for _exporter in names.split(",")
+ ]
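+
+# For example (illustrative): OTEL_TRACES_EXPORTER="console,otlp" yields
+# ["console", "otlp_proto_grpc"] when no OTLP protocol is configured,
+# while OTEL_TRACES_EXPORTER="none" yields [].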
+
+
+def _init_tracing(
+ exporters: Dict[str, Type[SpanExporter]],
+ id_generator: IdGenerator = None,
+ sampler: Sampler = None,
+ resource: Resource = None,
+):
+ provider = TracerProvider(
+ id_generator=id_generator,
+ sampler=sampler,
+ resource=resource,
+ )
+ set_tracer_provider(provider)
+
+ for _, exporter_class in exporters.items():
+ exporter_args = {}
+ provider.add_span_processor(
+ BatchSpanProcessor(exporter_class(**exporter_args))
+ )
+
+
+def _init_metrics(
+ exporters_or_readers: Dict[
+ str, Union[Type[MetricExporter], Type[MetricReader]]
+ ],
+ resource: Resource = None,
+):
+ metric_readers = []
+
+ for _, exporter_or_reader_class in exporters_or_readers.items():
+ exporter_args = {}
+
+ if issubclass(exporter_or_reader_class, MetricReader):
+ metric_readers.append(exporter_or_reader_class(**exporter_args))
+ else:
+ metric_readers.append(
+ PeriodicExportingMetricReader(
+ exporter_or_reader_class(**exporter_args)
+ )
+ )
+
+ provider = MeterProvider(resource=resource, metric_readers=metric_readers)
+ set_meter_provider(provider)
+
+
+def _init_logging(
+ exporters: Dict[str, Type[LogExporter]],
+ resource: Resource = None,
+):
+ provider = LoggerProvider(resource=resource)
+ set_logger_provider(provider)
+
+ for _, exporter_class in exporters.items():
+ exporter_args = {}
+ provider.add_log_record_processor(
+ BatchLogRecordProcessor(exporter_class(**exporter_args))
+ )
+
+ handler = LoggingHandler(level=logging.NOTSET, logger_provider=provider)
+
+ logging.getLogger().addHandler(handler)
+
+
+def _import_exporters(
+ trace_exporter_names: Sequence[str],
+ metric_exporter_names: Sequence[str],
+ log_exporter_names: Sequence[str],
+) -> Tuple[
+ Dict[str, Type[SpanExporter]],
+ Dict[str, Union[Type[MetricExporter], Type[MetricReader]]],
+ Dict[str, Type[LogExporter]],
+]:
+ trace_exporters = {}
+ metric_exporters = {}
+ log_exporters = {}
+
+ for (exporter_name, exporter_impl,) in _import_config_components(
+ trace_exporter_names, "opentelemetry_traces_exporter"
+ ):
+ if issubclass(exporter_impl, SpanExporter):
+ trace_exporters[exporter_name] = exporter_impl
+ else:
+ raise RuntimeError(f"{exporter_name} is not a trace exporter")
+
+ for (exporter_name, exporter_impl,) in _import_config_components(
+ metric_exporter_names, "opentelemetry_metrics_exporter"
+ ):
+ # The metric exporter components may be push MetricExporter or pull exporters which
+ # subclass MetricReader directly
+ if issubclass(exporter_impl, (MetricExporter, MetricReader)):
+ metric_exporters[exporter_name] = exporter_impl
+ else:
+ raise RuntimeError(f"{exporter_name} is not a metric exporter")
+
+ for (exporter_name, exporter_impl,) in _import_config_components(
+ log_exporter_names, "opentelemetry_logs_exporter"
+ ):
+ if issubclass(exporter_impl, LogExporter):
+ log_exporters[exporter_name] = exporter_impl
+ else:
+ raise RuntimeError(f"{exporter_name} is not a log exporter")
+
+ return trace_exporters, metric_exporters, log_exporters
+
+
+def _import_sampler_factory(sampler_name: str) -> Callable[[str], Sampler]:
+ _, sampler_impl = _import_config_components(
+ [sampler_name.strip()], _OTEL_SAMPLER_ENTRY_POINT_GROUP
+ )[0]
+ return sampler_impl
+
+
+def _import_sampler(sampler_name: str) -> Optional[Sampler]:
+ if not sampler_name:
+ return None
+ try:
+ sampler_factory = _import_sampler_factory(sampler_name)
+ arg = None
+ if sampler_name in ("traceidratio", "parentbased_traceidratio"):
+ try:
+ rate = float(os.getenv(OTEL_TRACES_SAMPLER_ARG))
+ except (ValueError, TypeError):
+ _logger.warning(
+ "Could not convert TRACES_SAMPLER_ARG to float. Using default value 1.0."
+ )
+ rate = 1.0
+ arg = rate
+ else:
+ arg = os.getenv(OTEL_TRACES_SAMPLER_ARG)
+
+ sampler = sampler_factory(arg)
+ if not isinstance(sampler, Sampler):
+ message = f"Sampler factory, {sampler_factory}, produced output, {sampler}, which is not a Sampler."
+ _logger.warning(message)
+ raise ValueError(message)
+ return sampler
+ except Exception as exc: # pylint: disable=broad-except
+ _logger.warning(
+ "Using default sampler. Failed to initialize sampler, %s: %s",
+ sampler_name,
+ exc,
+ )
+ return None
+
+
+def _import_id_generator(id_generator_name: str) -> IdGenerator:
+ id_generator_name, id_generator_impl = _import_config_components(
+ [id_generator_name.strip()], "opentelemetry_id_generator"
+ )[0]
+
+ if issubclass(id_generator_impl, IdGenerator):
+ return id_generator_impl()
+
+ raise RuntimeError(f"{id_generator_name} is not an IdGenerator")
+
+
+def _initialize_components(auto_instrumentation_version):
+ trace_exporters, metric_exporters, log_exporters = _import_exporters(
+ _get_exporter_names("traces"),
+ _get_exporter_names("metrics"),
+ _get_exporter_names("logs"),
+ )
+ sampler_name = _get_sampler()
+ sampler = _import_sampler(sampler_name)
+ id_generator_name = _get_id_generator()
+ id_generator = _import_id_generator(id_generator_name)
+ # if env var OTEL_RESOURCE_ATTRIBUTES is given, it will read the service_name
+ # from the env variable else defaults to "unknown_service"
+ auto_resource = {}
+ # populate version if using auto-instrumentation
+ if auto_instrumentation_version:
+ auto_resource[
+ ResourceAttributes.TELEMETRY_AUTO_VERSION
+ ] = auto_instrumentation_version
+ resource = Resource.create(auto_resource)
+
+ _init_tracing(
+ exporters=trace_exporters,
+ id_generator=id_generator,
+ sampler=sampler,
+ resource=resource,
+ )
+ _init_metrics(metric_exporters, resource)
+ logging_enabled = os.getenv(
+ _OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED, "false"
+ )
+ if logging_enabled.strip().lower() == "true":
+ _init_logging(log_exporters, resource)
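+
+# Illustrative end-to-end behavior: with OTEL_TRACES_EXPORTER=console and
+# OTEL_METRICS_EXPORTER=console set, _initialize_components(None) installs
+# a TracerProvider with a BatchSpanProcessor wrapping ConsoleSpanExporter
+# and a MeterProvider with a PeriodicExportingMetricReader wrapping
+# ConsoleMetricExporter.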
+
+
+class _BaseConfigurator(ABC):
+ """An ABC for configurators
+
+ Configurators are used to configure
+ SDKs (i.e. TracerProvider, MeterProvider, Processors...)
+ to reduce the amount of manual configuration required.
+ """
+
+ _instance = None
+ _is_instrumented = False
+
+ def __new__(cls, *args, **kwargs):
+
+ if cls._instance is None:
+ cls._instance = object.__new__(cls, *args, **kwargs)
+
+ return cls._instance
+
+ @abstractmethod
+ def _configure(self, **kwargs):
+ """Configure the SDK"""
+
+ def configure(self, **kwargs):
+ """Configure the SDK"""
+ self._configure(**kwargs)
+
+
+class _OTelSDKConfigurator(_BaseConfigurator):
+ """A basic Configurator by OTel Python for initializing OTel SDK components
+
+ Initializes several crucial OTel SDK components (i.e. TracerProvider,
+ MeterProvider, Processors...) according to a default implementation. Other
+ Configurators can subclass and slightly alter this initialization.
+
+ NOTE: This class should not be instantiated nor should it become an entry
+ point on the `opentelemetry-sdk` package. Instead, distros should subclass
+ this Configurator and enhance it as needed.
+ """
+
+ def _configure(self, **kwargs):
+ _initialize_components(kwargs.get("auto_instrumentation_version"))
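+
+# A distro would typically subclass the configurator along these lines
+# (hypothetical class name, shown for illustration only):
+#
+#     class MyDistroConfigurator(_OTelSDKConfigurator):
+#         def _configure(self, **kwargs):
+#             # adjust kwargs or environment defaults here, then delegate
+#             super()._configure(**kwargs)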
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/__init__.py
new file mode 100644
index 0000000000..881bb9a4b2
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/__init__.py
@@ -0,0 +1,34 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.sdk._logs._internal import (
+ LogData,
+ Logger,
+ LoggerProvider,
+ LoggingHandler,
+ LogLimits,
+ LogRecord,
+ LogRecordProcessor,
+)
+
+__all__ = [
+ "LogData",
+ "Logger",
+ "LoggerProvider",
+ "LoggingHandler",
+ "LogLimits",
+ "LogRecord",
+ "LogRecordProcessor",
+]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py
new file mode 100644
index 0000000000..cfa4d6cfa9
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py
@@ -0,0 +1,652 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import abc
+import atexit
+import concurrent.futures
+import json
+import logging
+import threading
+import traceback
+from os import environ
+from time import time_ns
+from typing import Any, Callable, Optional, Tuple, Union # noqa
+
+from opentelemetry._logs import Logger as APILogger
+from opentelemetry._logs import LoggerProvider as APILoggerProvider
+from opentelemetry._logs import LogRecord as APILogRecord
+from opentelemetry._logs import (
+ NoOpLogger,
+ SeverityNumber,
+ get_logger,
+ get_logger_provider,
+ std_to_otel,
+)
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk.environment_variables import (
+ OTEL_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util import ns_to_iso_str
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.semconv.trace import SpanAttributes
+from opentelemetry.trace import (
+ format_span_id,
+ format_trace_id,
+ get_current_span,
+)
+from opentelemetry.trace.span import TraceFlags
+from opentelemetry.util.types import Attributes
+
+_logger = logging.getLogger(__name__)
+
+_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT = 128
+_ENV_VALUE_UNSET = ""
+
+
+class LogLimits:
+ """This class is based on a SpanLimits class in the Tracing module.
+
+ This class represents the limits that should be enforced on recorded data such as events, links, attributes etc.
+
+ This class does not enforce any limits itself. It only provides a way to read limits from env,
+ default values and from user provided arguments.
+
+ All limit arguments must be either a non-negative integer, ``None`` or ``LogLimits.UNSET``.
+
+ - All limit arguments are optional.
+ - If a limit argument is not set, the class will try to read its value from the corresponding
+ environment variable.
+ - If the environment variable is not set, the default value, if any, will be used.
+
+ Limit precedence:
+
+ - If a model specific limit is set, it will be used.
+ - Else if the corresponding global limit is set, it will be used.
+ - Else if the model specific limit has a default value, the default value will be used.
+ - Else if the global limit has a default value, the default value will be used.
+
+ Args:
+ max_attributes: Maximum number of attributes that can be added to a span, event, and link.
+ Environment variable: ``OTEL_ATTRIBUTE_COUNT_LIMIT``
+ Default: {_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}
+ max_attribute_length: Maximum length an attribute value can have. Values longer than
+ the specified length will be truncated.
+ """
+
+ UNSET = -1
+
+ def __init__(
+ self,
+ max_attributes: Optional[int] = None,
+ max_attribute_length: Optional[int] = None,
+ ):
+
+ # attribute count
+ global_max_attributes = self._from_env_if_absent(
+ max_attributes, OTEL_ATTRIBUTE_COUNT_LIMIT
+ )
+ self.max_attributes = (
+ global_max_attributes
+ if global_max_attributes is not None
+ else _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT
+ )
+
+ # attribute length
+ self.max_attribute_length = self._from_env_if_absent(
+ max_attribute_length,
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ )
+
+ def __repr__(self):
+ return f"{type(self).__name__}(max_attributes={self.max_attributes}, max_attribute_length={self.max_attribute_length})"
+
+ @classmethod
+ def _from_env_if_absent(
+ cls, value: Optional[int], env_var: str, default: Optional[int] = None
+ ) -> Optional[int]:
+ if value == cls.UNSET:
+ return None
+
+ err_msg = "{} must be a non-negative integer but got {}"
+
+ # if no value is provided for the limit, try to load it from env
+ if value is None:
+ # return default value if env var is not set
+ if env_var not in environ:
+ return default
+
+ str_value = environ.get(env_var, "").strip().lower()
+ if str_value == _ENV_VALUE_UNSET:
+ return None
+
+ try:
+ value = int(str_value)
+ except ValueError:
+ raise ValueError(err_msg.format(env_var, str_value))
+
+ if value < 0:
+ raise ValueError(err_msg.format(env_var, value))
+ return value
+
+
+_UnsetLogLimits = LogLimits(
+ max_attributes=LogLimits.UNSET,
+ max_attribute_length=LogLimits.UNSET,
+)
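+
+# Illustrative semantics: LogLimits() reads OTEL_ATTRIBUTE_COUNT_LIMIT
+# (default 128); LogLimits(max_attributes=10) caps attributes at 10; and
+# LogLimits(max_attribute_length=LogLimits.UNSET) leaves value lengths
+# untruncated.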
+
+
+class LogRecord(APILogRecord):
+ """A LogRecord instance represents an event being logged.
+
+ LogRecord instances are created and emitted via `Logger`
+ every time something is logged. They contain all the information
+ pertinent to the event being logged.
+ """
+
+ def __init__(
+ self,
+ timestamp: Optional[int] = None,
+ observed_timestamp: Optional[int] = None,
+ trace_id: Optional[int] = None,
+ span_id: Optional[int] = None,
+ trace_flags: Optional[TraceFlags] = None,
+ severity_text: Optional[str] = None,
+ severity_number: Optional[SeverityNumber] = None,
+ body: Optional[Any] = None,
+ resource: Optional[Resource] = None,
+ attributes: Optional[Attributes] = None,
+ limits: Optional[LogLimits] = _UnsetLogLimits,
+ ):
+ super().__init__(
+ **{
+ "timestamp": timestamp,
+ "observed_timestamp": observed_timestamp,
+ "trace_id": trace_id,
+ "span_id": span_id,
+ "trace_flags": trace_flags,
+ "severity_text": severity_text,
+ "severity_number": severity_number,
+ "body": body,
+ "attributes": BoundedAttributes(
+ maxlen=limits.max_attributes,
+ attributes=attributes if bool(attributes) else None,
+ immutable=False,
+ max_value_len=limits.max_attribute_length,
+ ),
+ }
+ )
+ self.resource = resource
+
+ def __eq__(self, other: object) -> bool:
+ if not isinstance(other, LogRecord):
+ return NotImplemented
+ return self.__dict__ == other.__dict__
+
+ def to_json(self, indent=4) -> str:
+ return json.dumps(
+ {
+ "body": self.body,
+ "severity_number": repr(self.severity_number),
+ "severity_text": self.severity_text,
+ "attributes": dict(self.attributes)
+ if bool(self.attributes)
+ else None,
+ "dropped_attributes": self.dropped_attributes,
+ "timestamp": ns_to_iso_str(self.timestamp),
+ "trace_id": f"0x{format_trace_id(self.trace_id)}"
+ if self.trace_id is not None
+ else "",
+ "span_id": f"0x{format_span_id(self.span_id)}"
+ if self.span_id is not None
+ else "",
+ "trace_flags": self.trace_flags,
+ "resource": repr(self.resource.attributes)
+ if self.resource
+ else "",
+ },
+ indent=indent,
+ )
+
+ @property
+ def dropped_attributes(self) -> int:
+ if self.attributes:
+ return self.attributes.dropped
+ return 0
+
+
+class LogData:
+ """Readable LogRecord data plus associated InstrumentationLibrary."""
+
+ def __init__(
+ self,
+ log_record: LogRecord,
+ instrumentation_scope: InstrumentationScope,
+ ):
+ self.log_record = log_record
+ self.instrumentation_scope = instrumentation_scope
+
+
+class LogRecordProcessor(abc.ABC):
+ """Interface to hook the log record emitting action.
+
+ Log processors can be registered directly using
+ :func:`LoggerProvider.add_log_record_processor` and they are invoked
+ in the same order as they were registered.
+ """
+
+ @abc.abstractmethod
+ def emit(self, log_data: LogData):
+ """Emits the `LogData`"""
+
+ @abc.abstractmethod
+ def shutdown(self):
+ """Called when a :class:`opentelemetry.sdk._logs.Logger` is shutdown"""
+
+ @abc.abstractmethod
+ def force_flush(self, timeout_millis: int = 30000):
+ """Export all the received logs to the configured Exporter that have not yet
+ been exported.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for logs to be
+ exported.
+
+ Returns:
+ False if the timeout is exceeded, True otherwise.
+ """
+
+
+# Temporary fix until https://github.com/PyCQA/pylint/issues/4098 is resolved
+# pylint:disable=no-member
+class SynchronousMultiLogRecordProcessor(LogRecordProcessor):
+ """Implementation of class:`LogRecordProcessor` that forwards all received
+ events to a list of log processors sequentially.
+
+ The underlying log processors are called in sequential order as they were
+ added.
+ """
+
+ def __init__(self):
+ # use a tuple to avoid race conditions when adding a new log and
+ # iterating through it on "emit".
+ self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]
+ self._lock = threading.Lock()
+
+ def add_log_record_processor(
+ self, log_record_processor: LogRecordProcessor
+ ) -> None:
+ """Adds a Logprocessor to the list of log processors handled by this instance"""
+ with self._lock:
+ self._log_record_processors += (log_record_processor,)
+
+ def emit(self, log_data: LogData) -> None:
+ for lp in self._log_record_processors:
+ lp.emit(log_data)
+
+ def shutdown(self) -> None:
+ """Shutdown the log processors one by one"""
+ for lp in self._log_record_processors:
+ lp.shutdown()
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Force flush the log processors one by one
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for logs to be
+ exported. If the first n log processors exceeded the timeout
+ then remaining log processors will not be flushed.
+
+ Returns:
+ True if all the log processors flushed the logs within the timeout,
+ False otherwise.
+ """
+ deadline_ns = time_ns() + timeout_millis * 1000000
+ for lp in self._log_record_processors:
+ current_ts = time_ns()
+ if current_ts >= deadline_ns:
+ return False
+
+ if not lp.force_flush((deadline_ns - current_ts) // 1000000):
+ return False
+
+ return True
+
+
+class ConcurrentMultiLogRecordProcessor(LogRecordProcessor):
+ """Implementation of :class:`LogRecordProcessor` that forwards all received
+ events to a list of log processors in parallel.
+
+ Calls to the underlying log processors are forwarded in parallel by
+ submitting them to a thread pool executor and waiting until each log
+ processor finished its work.
+
+ Args:
+ max_workers: The number of threads managed by the thread pool executor,
+ which determines how many log processors can work in parallel.
+ """
+
+ def __init__(self, max_workers: int = 2):
+ # use a tuple to avoid race conditions when adding a new log and
+ # iterating through it on "emit".
+ self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]
+ self._lock = threading.Lock()
+ self._executor = concurrent.futures.ThreadPoolExecutor(
+ max_workers=max_workers
+ )
+
+ def add_log_record_processor(
+ self, log_record_processor: LogRecordProcessor
+ ):
+ with self._lock:
+ self._log_record_processors += (log_record_processor,)
+
+ def _submit_and_wait(
+ self,
+ func: Callable[[LogRecordProcessor], Callable[..., None]],
+ *args: Any,
+ **kwargs: Any,
+ ):
+ futures = []
+ for lp in self._log_record_processors:
+ future = self._executor.submit(func(lp), *args, **kwargs)
+ futures.append(future)
+ for future in futures:
+ future.result()
+
+ def emit(self, log_data: LogData):
+ self._submit_and_wait(lambda lp: lp.emit, log_data)
+
+ def shutdown(self):
+ self._submit_and_wait(lambda lp: lp.shutdown)
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Force flush the log processors in parallel.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for logs to be
+ exported.
+
+ Returns:
+ True if all the log processors flushed the logs within the timeout,
+ False otherwise.
+ """
+ futures = []
+ for lp in self._log_record_processors:
+ future = self._executor.submit(lp.force_flush, timeout_millis)
+ futures.append(future)
+
+ done_futures, not_done_futures = concurrent.futures.wait(
+ futures, timeout_millis / 1e3
+ )
+
+ if not_done_futures:
+ return False
+
+ for future in done_futures:
+ if not future.result():
+ return False
+
+ return True
+
+
+# skip natural LogRecord attributes
+# http://docs.python.org/library/logging.html#logrecord-attributes
+_RESERVED_ATTRS = frozenset(
+ (
+ "asctime",
+ "args",
+ "created",
+ "exc_info",
+ "exc_text",
+ "filename",
+ "funcName",
+ "message",
+ "levelname",
+ "levelno",
+ "lineno",
+ "module",
+ "msecs",
+ "msg",
+ "name",
+ "pathname",
+ "process",
+ "processName",
+ "relativeCreated",
+ "stack_info",
+ "thread",
+ "threadName",
+ "taskName",
+ )
+)
+
+
+class LoggingHandler(logging.Handler):
+ """A handler class which writes logging records, in OTLP format, to
+ a network destination or file. Supports signals from the `logging` module.
+ https://docs.python.org/3/library/logging.html
+ """
+
+ def __init__(
+ self,
+ level=logging.NOTSET,
+ logger_provider=None,
+ ) -> None:
+ super().__init__(level=level)
+ self._logger_provider = logger_provider or get_logger_provider()
+ self._logger = get_logger(
+ __name__, logger_provider=self._logger_provider
+ )
+
+ @staticmethod
+ def _get_attributes(record: logging.LogRecord) -> Attributes:
+ attributes = {
+ k: v for k, v in vars(record).items() if k not in _RESERVED_ATTRS
+ }
+ if record.exc_info:
+ exc_type = ""
+ message = ""
+ stack_trace = ""
+ exctype, value, tb = record.exc_info
+ if exctype is not None:
+ exc_type = exctype.__name__
+ if value is not None and value.args:
+ message = value.args[0]
+ if tb is not None:
+ # https://github.com/open-telemetry/opentelemetry-specification/blob/9fa7c656b26647b27e485a6af7e38dc716eba98a/specification/trace/semantic_conventions/exceptions.md#stacktrace-representation
+ stack_trace = "".join(
+ traceback.format_exception(*record.exc_info)
+ )
+ attributes[SpanAttributes.EXCEPTION_TYPE] = exc_type
+ attributes[SpanAttributes.EXCEPTION_MESSAGE] = message
+ attributes[SpanAttributes.EXCEPTION_STACKTRACE] = stack_trace
+ return attributes
+
+ def _translate(self, record: logging.LogRecord) -> LogRecord:
+ timestamp = int(record.created * 1e9)
+ span_context = get_current_span().get_span_context()
+ attributes = self._get_attributes(record)
+ # This comment is taken from GanyedeNil's PR #3343; I have redacted it
+ # slightly for clarity:
+ # According to the definition of the Body field type in the
+ # OTel 1.22.0 Logs Data Model article, the Body field should be of
+ # type 'any' and should not use the str method to directly translate
+ # the msg. This is because str only converts non-text types into a
+ # human-readable form, rather than a standard format, which leads to
+ # the need for additional operations when collected through a log
+ # collector.
+ # Considering that the Body field should be of type 'any' and should
+ # not use the str method, while record.msg is itself a string type,
+ # is the difference just the self.args formatting?
+ # The primary consideration depends on the ultimate purpose of the log.
+ # Converting the default log directly into a string is acceptable as it
+ # will be required to be presented in a more readable format. However,
+ # this approach might not be as "standard" when hoping to aggregate
+ # logs and perform subsequent data analysis. In the context of log
+ # extraction, it would be more appropriate for the msg to be
+ # converted into JSON format or remain unchanged, as it will eventually
+ # be transformed into JSON. If the final output JSON data contains a
+ # structure that appears similar to JSON but is not, it may confuse
+ # users. This is particularly true for operation and maintenance
+ # personnel who need to deal with log data in various languages.
+ # Where does the JSON conversion occur? And when the msg represents
+ # something other than JSON, does the expected behavior change?
+ # For the ConsoleLogExporter, it performs the to_json operation in
+ # opentelemetry.sdk._logs._internal.export.ConsoleLogExporter.__init__,
+ # so it can handle any type of input without problems. As for the
+ # OTLPLogExporter, it also handles any type of input encoding in
+ # _encode_log located in
+ # opentelemetry.exporter.otlp.proto.common._internal._log_encoder.
+ # Therefore, no extra operation is needed to support this change.
+ # The only thing to consider is the users who have already been using
+ # this SDK. If they upgrade the SDK after this change, they will need
+ # to readjust their logging collection rules to adapt to the latest
+ # output format. Therefore, this change is considered a breaking
+ # change and needs to be upgraded at an appropriate time.
+ severity_number = std_to_otel(record.levelno)
+ if isinstance(record.msg, str) and record.args:
+ body = record.msg % record.args
+ else:
+ body = record.msg
+ return LogRecord(
+ timestamp=timestamp,
+ trace_id=span_context.trace_id,
+ span_id=span_context.span_id,
+ trace_flags=span_context.trace_flags,
+ severity_text=record.levelname,
+ severity_number=severity_number,
+ body=body,
+ resource=self._logger.resource,
+ attributes=attributes,
+ )
+
+ def emit(self, record: logging.LogRecord) -> None:
+ """
+ Emit a record. Skip emitting if logger is NoOp.
+
+ The record is translated to OTel format, and then sent across the pipeline.
+ """
+ if not isinstance(self._logger, NoOpLogger):
+ self._logger.emit(self._translate(record))
+
+ def flush(self) -> None:
+ """
+ Flushes the logging output.
+ """
+ self._logger_provider.force_flush()
+
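+# Typical wiring into the standard logging module (illustrative; assumes a
+# configured LoggerProvider named provider):
+#
+#     handler = LoggingHandler(level=logging.NOTSET, logger_provider=provider)
+#     logging.getLogger().addHandler(handler)
+#     logging.getLogger(__name__).warning("something happened")
+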
+
+class Logger(APILogger):
+ def __init__(
+ self,
+ resource: Resource,
+ multi_log_record_processor: Union[
+ SynchronousMultiLogRecordProcessor,
+ ConcurrentMultiLogRecordProcessor,
+ ],
+ instrumentation_scope: InstrumentationScope,
+ ):
+ super().__init__(
+ instrumentation_scope.name,
+ instrumentation_scope.version,
+ instrumentation_scope.schema_url,
+ )
+ self._resource = resource
+ self._multi_log_record_processor = multi_log_record_processor
+ self._instrumentation_scope = instrumentation_scope
+
+ @property
+ def resource(self):
+ return self._resource
+
+ def emit(self, record: LogRecord):
+ """Emits the :class:`LogData` by associating :class:`LogRecord`
+ and instrumentation info.
+ """
+ log_data = LogData(record, self._instrumentation_scope)
+ self._multi_log_record_processor.emit(log_data)
+
+
+class LoggerProvider(APILoggerProvider):
+ def __init__(
+ self,
+ resource: Resource = None,
+ shutdown_on_exit: bool = True,
+ multi_log_record_processor: Union[
+ SynchronousMultiLogRecordProcessor,
+ ConcurrentMultiLogRecordProcessor,
+ ] = None,
+ ):
+ if resource is None:
+ self._resource = Resource.create({})
+ else:
+ self._resource = resource
+ self._multi_log_record_processor = (
+ multi_log_record_processor or SynchronousMultiLogRecordProcessor()
+ )
+ self._at_exit_handler = None
+ if shutdown_on_exit:
+ self._at_exit_handler = atexit.register(self.shutdown)
+
+ @property
+ def resource(self):
+ return self._resource
+
+ def get_logger(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> Logger:
+ return Logger(
+ self._resource,
+ self._multi_log_record_processor,
+ InstrumentationScope(
+ name,
+ version,
+ schema_url,
+ ),
+ )
+
+ def add_log_record_processor(
+ self, log_record_processor: LogRecordProcessor
+ ):
+ """Registers a new :class:`LogRecordProcessor` for this `LoggerProvider` instance.
+
+ The log processors are invoked in the same order they are registered.
+ """
+ self._multi_log_record_processor.add_log_record_processor(
+ log_record_processor
+ )
+
+ def shutdown(self):
+ """Shuts down the log processors."""
+ self._multi_log_record_processor.shutdown()
+ if self._at_exit_handler is not None:
+ atexit.unregister(self._at_exit_handler)
+ self._at_exit_handler = None
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Force flush the log processors.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for logs to be
+ exported.
+
+ Returns:
+ True if all the log processors flushed the logs within the timeout,
+ False otherwise.
+ """
+ return self._multi_log_record_processor.force_flush(timeout_millis)
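+
+# End-to-end sketch of this module's pieces (illustrative only; the
+# exporter classes live in opentelemetry.sdk._logs.export):
+#
+#     provider = LoggerProvider()
+#     provider.add_log_record_processor(
+#         SimpleLogRecordProcessor(ConsoleLogExporter())
+#     )
+#     logger = provider.get_logger("my.module")
+#     logger.emit(LogRecord(body="hello", severity_number=SeverityNumber.WARN))
+#     provider.shutdown()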
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/__init__.py
new file mode 100644
index 0000000000..14140d26b7
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/__init__.py
@@ -0,0 +1,466 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import abc
+import collections
+import enum
+import logging
+import os
+import sys
+import threading
+from os import environ, linesep
+from time import time_ns
+from typing import IO, Callable, Deque, List, Optional, Sequence
+
+from opentelemetry.context import (
+ _SUPPRESS_INSTRUMENTATION_KEY,
+ attach,
+ detach,
+ set_value,
+)
+from opentelemetry.sdk._logs import LogData, LogRecord, LogRecordProcessor
+from opentelemetry.sdk.environment_variables import (
+ OTEL_BLRP_EXPORT_TIMEOUT,
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE,
+ OTEL_BLRP_MAX_QUEUE_SIZE,
+ OTEL_BLRP_SCHEDULE_DELAY,
+)
+from opentelemetry.util._once import Once
+
+_DEFAULT_SCHEDULE_DELAY_MILLIS = 5000
+_DEFAULT_MAX_EXPORT_BATCH_SIZE = 512
+_DEFAULT_EXPORT_TIMEOUT_MILLIS = 30000
+_DEFAULT_MAX_QUEUE_SIZE = 2048
+_ENV_VAR_INT_VALUE_ERROR_MESSAGE = (
+ "Unable to parse value for %s as integer. Defaulting to %s."
+)
+
+_logger = logging.getLogger(__name__)
+
+
+class LogExportResult(enum.Enum):
+ SUCCESS = 0
+ FAILURE = 1
+
+
+class LogExporter(abc.ABC):
+ """Interface for exporting logs.
+
+ Interface to be implemented by services that want to export logs received
+ in their own format.
+
+ To export data this MUST be registered to the
+ :class:`opentelemetry.sdk._logs.Logger` using a log processor.
+ """
+
+ @abc.abstractmethod
+ def export(self, batch: Sequence[LogData]):
+ """Exports a batch of logs.
+
+ Args:
+ batch: The list of `LogData` objects to be exported
+
+ Returns:
+ The result of the export
+ """
+
+ @abc.abstractmethod
+ def shutdown(self):
+ """Shuts down the exporter.
+
+ Called when the SDK is shut down.
+ """
+
+
+class ConsoleLogExporter(LogExporter):
+ """Implementation of :class:`LogExporter` that prints log records to the
+ console.
+
+ This class can be used for diagnostic purposes. It prints the exported
+ log records to the console STDOUT.
+ """
+
+ def __init__(
+ self,
+ out: IO = sys.stdout,
+ formatter: Callable[[LogRecord], str] = lambda record: record.to_json()
+ + linesep,
+ ):
+ self.out = out
+ self.formatter = formatter
+
+ def export(self, batch: Sequence[LogData]):
+ for data in batch:
+ self.out.write(self.formatter(data.log_record))
+ self.out.flush()
+ return LogExportResult.SUCCESS
+
+ def shutdown(self):
+ pass
+
+
+class SimpleLogRecordProcessor(LogRecordProcessor):
+ """This is an implementation of LogRecordProcessor which passes
+ received logs in the export-friendly LogData representation to the
+ configured LogExporter, as soon as they are emitted.
+ """
+
+ def __init__(self, exporter: LogExporter):
+ self._exporter = exporter
+ self._shutdown = False
+
+ def emit(self, log_data: LogData):
+ if self._shutdown:
+ _logger.warning("Processor is already shutdown, ignoring call")
+ return
+ token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
+ try:
+ self._exporter.export((log_data,))
+ except Exception: # pylint: disable=broad-except
+ _logger.exception("Exception while exporting logs.")
+ detach(token)
+
+ def shutdown(self):
+ self._shutdown = True
+ self._exporter.shutdown()
+
+ def force_flush(
+ self, timeout_millis: int = 30000
+ ) -> bool: # pylint: disable=no-self-use
+ return True
+
+
+class _FlushRequest:
+ __slots__ = ["event", "num_log_records"]
+
+ def __init__(self):
+ self.event = threading.Event()
+ self.num_log_records = 0
+
+
+_BSP_RESET_ONCE = Once()
+
+
+class BatchLogRecordProcessor(LogRecordProcessor):
+ """This is an implementation of LogRecordProcessor which creates batches of
+ received logs in the export-friendly LogData representation and
+ sends them to the configured LogExporter as soon as they are emitted.
+
+ `BatchLogRecordProcessor` is configurable with the following environment
+ variables which correspond to constructor parameters:
+
+ - :envvar:`OTEL_BLRP_SCHEDULE_DELAY`
+ - :envvar:`OTEL_BLRP_MAX_QUEUE_SIZE`
+ - :envvar:`OTEL_BLRP_MAX_EXPORT_BATCH_SIZE`
+ - :envvar:`OTEL_BLRP_EXPORT_TIMEOUT`
+ """
+
+ _queue: Deque[LogData]
+ _flush_request: Optional[_FlushRequest]
+ _log_records: List[Optional[LogData]]
+
+ def __init__(
+ self,
+ exporter: LogExporter,
+ schedule_delay_millis: float = None,
+ max_export_batch_size: int = None,
+ export_timeout_millis: float = None,
+ max_queue_size: int = None,
+ ):
+ if max_queue_size is None:
+ max_queue_size = BatchLogRecordProcessor._default_max_queue_size()
+
+ if schedule_delay_millis is None:
+ schedule_delay_millis = (
+ BatchLogRecordProcessor._default_schedule_delay_millis()
+ )
+
+ if max_export_batch_size is None:
+ max_export_batch_size = (
+ BatchLogRecordProcessor._default_max_export_batch_size()
+ )
+
+ if export_timeout_millis is None:
+ export_timeout_millis = (
+ BatchLogRecordProcessor._default_export_timeout_millis()
+ )
+
+ BatchLogRecordProcessor._validate_arguments(
+ max_queue_size, schedule_delay_millis, max_export_batch_size
+ )
+
+ self._exporter = exporter
+ self._max_queue_size = max_queue_size
+ self._schedule_delay_millis = schedule_delay_millis
+ self._max_export_batch_size = max_export_batch_size
+ self._export_timeout_millis = export_timeout_millis
+ self._queue = collections.deque([], max_queue_size)
+ self._worker_thread = threading.Thread(
+ name="OtelBatchLogRecordProcessor",
+ target=self.worker,
+ daemon=True,
+ )
+ self._condition = threading.Condition(threading.Lock())
+ self._shutdown = False
+ self._flush_request = None
+ self._log_records = [None] * self._max_export_batch_size
+ self._worker_thread.start()
+ # Only available in *nix since py37.
+ if hasattr(os, "register_at_fork"):
+ os.register_at_fork(
+ after_in_child=self._at_fork_reinit
+ ) # pylint: disable=protected-access
+ self._pid = os.getpid()
+
+ def _at_fork_reinit(self):
+ self._condition = threading.Condition(threading.Lock())
+ self._queue.clear()
+ self._worker_thread = threading.Thread(
+ name="OtelBatchLogRecordProcessor",
+ target=self.worker,
+ daemon=True,
+ )
+ self._worker_thread.start()
+ self._pid = os.getpid()
+
+ def worker(self):
+ timeout = self._schedule_delay_millis / 1e3
+ flush_request: Optional[_FlushRequest] = None
+ while not self._shutdown:
+ with self._condition:
+ if self._shutdown:
+ # shutdown may have been called, avoid further processing
+ break
+ flush_request = self._get_and_unset_flush_request()
+ if (
+ len(self._queue) < self._max_export_batch_size
+ and flush_request is None
+ ):
+ self._condition.wait(timeout)
+
+ flush_request = self._get_and_unset_flush_request()
+ if not self._queue:
+ timeout = self._schedule_delay_millis / 1e3
+ self._notify_flush_request_finished(flush_request)
+ flush_request = None
+ continue
+ if self._shutdown:
+ break
+
+ start_ns = time_ns()
+ self._export(flush_request)
+ end_ns = time_ns()
+ # subtract the duration of this export call from the next timeout
+ timeout = self._schedule_delay_millis / 1e3 - (
+ (end_ns - start_ns) / 1e9
+ )
+
+ self._notify_flush_request_finished(flush_request)
+ flush_request = None
+
+ # there might have been a new flush request while export was running
+ # and before the done flag switched to true
+ with self._condition:
+ shutdown_flush_request = self._get_and_unset_flush_request()
+
+ # flush the remaining logs
+ self._drain_queue()
+ self._notify_flush_request_finished(flush_request)
+ self._notify_flush_request_finished(shutdown_flush_request)
+
+ def _export(self, flush_request: Optional[_FlushRequest] = None):
+ """Exports logs considering the given flush_request.
+
+ If flush_request is not None, logs are exported in batches until
+ the number of exported logs reaches or exceeds the number of logs in
+ flush_request; otherwise at most max_export_batch_size logs are
+ exported.
+ """
+ if flush_request is None:
+ self._export_batch()
+ return
+
+ num_log_records = flush_request.num_log_records
+ while self._queue:
+ exported = self._export_batch()
+ num_log_records -= exported
+
+ if num_log_records <= 0:
+ break
+
+ def _export_batch(self) -> int:
+ """Exports at most max_export_batch_size logs and returns the number of
+ exported logs.
+ """
+ idx = 0
+ while idx < self._max_export_batch_size and self._queue:
+ record = self._queue.pop()
+ self._log_records[idx] = record
+ idx += 1
+ token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
+ try:
+ self._exporter.export(self._log_records[:idx]) # type: ignore
+ except Exception: # pylint: disable=broad-except
+ _logger.exception("Exception while exporting logs.")
+ detach(token)
+
+ for index in range(idx):
+ self._log_records[index] = None
+ return idx
+
+ def _drain_queue(self):
+ """Export all elements until queue is empty.
+
+ Can only be called from the worker thread context because it invokes
+ `export` that is not thread safe.
+ """
+ while self._queue:
+ self._export_batch()
+
+ def _get_and_unset_flush_request(self) -> Optional[_FlushRequest]:
+ flush_request = self._flush_request
+ self._flush_request = None
+ if flush_request is not None:
+ flush_request.num_log_records = len(self._queue)
+ return flush_request
+
+ @staticmethod
+ def _notify_flush_request_finished(
+ flush_request: Optional[_FlushRequest] = None,
+ ):
+ if flush_request is not None:
+ flush_request.event.set()
+
+ def _get_or_create_flush_request(self) -> _FlushRequest:
+ if self._flush_request is None:
+ self._flush_request = _FlushRequest()
+ return self._flush_request
+
+ def emit(self, log_data: LogData) -> None:
+ """Adds the `LogData` to queue and notifies the waiting threads
+ when size of queue reaches max_export_batch_size.
+ """
+ if self._shutdown:
+ return
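+ # if the process forked since this processor was created, reinitialize
+ # the queue, condition and worker thread in the child; the parent's
+ # worker thread does not survive the fork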
+ if self._pid != os.getpid():
+ _BSP_RESET_ONCE.do_once(self._at_fork_reinit)
+
+ self._queue.appendleft(log_data)
+ if len(self._queue) >= self._max_export_batch_size:
+ with self._condition:
+ self._condition.notify()
+
+ def shutdown(self):
+ self._shutdown = True
+ with self._condition:
+ self._condition.notify_all()
+ self._worker_thread.join()
+ self._exporter.shutdown()
+
+ def force_flush(self, timeout_millis: Optional[int] = None) -> bool:
+ if timeout_millis is None:
+ timeout_millis = self._export_timeout_millis
+ if self._shutdown:
+ return True
+
+ with self._condition:
+ flush_request = self._get_or_create_flush_request()
+ self._condition.notify_all()
+
+ ret = flush_request.event.wait(timeout_millis / 1e3)
+ if not ret:
+ _logger.warning("Timeout was exceeded in force_flush().")
+ return ret
+
+ @staticmethod
+ def _default_max_queue_size():
+ try:
+ return int(
+ environ.get(OTEL_BLRP_MAX_QUEUE_SIZE, _DEFAULT_MAX_QUEUE_SIZE)
+ )
+ except ValueError:
+ _logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BLRP_MAX_QUEUE_SIZE,
+ _DEFAULT_MAX_QUEUE_SIZE,
+ )
+ return _DEFAULT_MAX_QUEUE_SIZE
+
+ @staticmethod
+ def _default_schedule_delay_millis():
+ try:
+ return int(
+ environ.get(
+ OTEL_BLRP_SCHEDULE_DELAY, _DEFAULT_SCHEDULE_DELAY_MILLIS
+ )
+ )
+ except ValueError:
+ _logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BLRP_SCHEDULE_DELAY,
+ _DEFAULT_SCHEDULE_DELAY_MILLIS,
+ )
+ return _DEFAULT_SCHEDULE_DELAY_MILLIS
+
+ @staticmethod
+ def _default_max_export_batch_size():
+ try:
+ return int(
+ environ.get(
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE,
+ _DEFAULT_MAX_EXPORT_BATCH_SIZE,
+ )
+ )
+ except ValueError:
+ _logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE,
+ _DEFAULT_MAX_EXPORT_BATCH_SIZE,
+ )
+ return _DEFAULT_MAX_EXPORT_BATCH_SIZE
+
+ @staticmethod
+ def _default_export_timeout_millis():
+ try:
+ return int(
+ environ.get(
+ OTEL_BLRP_EXPORT_TIMEOUT, _DEFAULT_EXPORT_TIMEOUT_MILLIS
+ )
+ )
+ except ValueError:
+ _logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BLRP_EXPORT_TIMEOUT,
+ _DEFAULT_EXPORT_TIMEOUT_MILLIS,
+ )
+ return _DEFAULT_EXPORT_TIMEOUT_MILLIS
+
+ @staticmethod
+ def _validate_arguments(
+ max_queue_size, schedule_delay_millis, max_export_batch_size
+ ):
+ if max_queue_size <= 0:
+ raise ValueError("max_queue_size must be a positive integer.")
+
+ if schedule_delay_millis <= 0:
+ raise ValueError("schedule_delay_millis must be positive.")
+
+ if max_export_batch_size <= 0:
+ raise ValueError(
+ "max_export_batch_size must be a positive integer."
+ )
+
+ if max_export_batch_size > max_queue_size:
+ raise ValueError(
+ "max_export_batch_size must be less than or equal to max_queue_size."
+ )
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/in_memory_log_exporter.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/in_memory_log_exporter.py
new file mode 100644
index 0000000000..68cb6b7389
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/export/in_memory_log_exporter.py
@@ -0,0 +1,51 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import threading
+import typing
+
+from opentelemetry.sdk._logs import LogData
+from opentelemetry.sdk._logs.export import LogExporter, LogExportResult
+
+
+class InMemoryLogExporter(LogExporter):
+ """Implementation of :class:`.LogExporter` that stores logs in memory.
+
+ This class can be used for testing purposes. It stores the exported logs
+ in a list in memory that can be retrieved using the
+ :func:`.get_finished_logs` method.
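+
+ A minimal usage sketch for a test (assuming the SDK's ``LoggerProvider``
+ and ``SimpleLogRecordProcessor``, which are defined outside this module):
+
+ .. code-block:: python
+
+ from opentelemetry.sdk._logs import LoggerProvider
+ from opentelemetry.sdk._logs.export import (
+ InMemoryLogExporter,
+ SimpleLogRecordProcessor,
+ )
+
+ exporter = InMemoryLogExporter()
+ provider = LoggerProvider()
+ provider.add_log_record_processor(SimpleLogRecordProcessor(exporter))
+ # ... emit log records via loggers obtained from this provider ...
+ finished_logs = exporter.get_finished_logs()
+ exporter.clear()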
+ """
+
+ def __init__(self):
+ self._logs = []
+ self._lock = threading.Lock()
+ self._stopped = False
+
+ def clear(self) -> None:
+ with self._lock:
+ self._logs.clear()
+
+ def get_finished_logs(self) -> typing.Tuple[LogData, ...]:
+ with self._lock:
+ return tuple(self._logs)
+
+ def export(self, batch: typing.Sequence[LogData]) -> LogExportResult:
+ if self._stopped:
+ return LogExportResult.FAILURE
+ with self._lock:
+ self._logs.extend(batch)
+ return LogExportResult.SUCCESS
+
+ def shutdown(self) -> None:
+ self._stopped = True
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/export/__init__.py
new file mode 100644
index 0000000000..37a9eca7a0
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/export/__init__.py
@@ -0,0 +1,35 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry.sdk._logs._internal.export import (
+ BatchLogRecordProcessor,
+ ConsoleLogExporter,
+ LogExporter,
+ LogExportResult,
+ SimpleLogRecordProcessor,
+)
+
+# The InMemoryLogExporter is kept in its own module to avoid a circular import.
+from opentelemetry.sdk._logs._internal.export.in_memory_log_exporter import (
+ InMemoryLogExporter,
+)
+
+__all__ = [
+ "BatchLogRecordProcessor",
+ "ConsoleLogExporter",
+ "LogExporter",
+ "LogExportResult",
+ "SimpleLogRecordProcessor",
+ "InMemoryLogExporter",
+]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/environment_variables.py b/opentelemetry-sdk/src/opentelemetry/sdk/environment_variables.py
new file mode 100644
index 0000000000..a69e451cbb
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/environment_variables.py
@@ -0,0 +1,704 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OTEL_RESOURCE_ATTRIBUTES = "OTEL_RESOURCE_ATTRIBUTES"
+"""
+.. envvar:: OTEL_RESOURCE_ATTRIBUTES
+
+The :envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable allows resource
+attributes to be passed to the SDK at process invocation. The attributes from
+:envvar:`OTEL_RESOURCE_ATTRIBUTES` are merged with those passed to
+`Resource.create`, meaning :envvar:`OTEL_RESOURCE_ATTRIBUTES` takes *lower*
+priority. Attributes should be in the format ``key1=value1,key2=value2``.
+Additional details are available `in the specification
+<https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable>`__.
+
+.. code-block:: console
+
+ $ OTEL_RESOURCE_ATTRIBUTES="service.name=shoppingcard,will_be_overridden=foo" python - <<EOF
+ import pprint
+ from opentelemetry.sdk.resources import Resource
+ pprint.pprint(Resource.create({"will_be_overridden": "bar"}).attributes)
+ EOF
+"""
+
+OTEL_EXPORTER_OTLP_TIMEOUT = "OTEL_EXPORTER_OTLP_TIMEOUT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TIMEOUT
+
+The :envvar:`OTEL_EXPORTER_OTLP_TIMEOUT` is the maximum time (in seconds) the OTLP exporter will wait for each batch export.
+Default: 10
+"""
+
+OTEL_EXPORTER_OTLP_ENDPOINT = "OTEL_EXPORTER_OTLP_ENDPOINT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_ENDPOINT
+
+The :envvar:`OTEL_EXPORTER_OTLP_ENDPOINT` target to which the exporter is going to send spans or metrics.
+The endpoint MUST be a valid URL host, and MAY contain a scheme (http or https), port and path.
+A scheme of https indicates a secure connection and takes precedence over the insecure configuration setting.
+Default: "http://localhost:4317"
+"""
+
+OTEL_EXPORTER_OTLP_INSECURE = "OTEL_EXPORTER_OTLP_INSECURE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_INSECURE
+
+The :envvar:`OTEL_EXPORTER_OTLP_INSECURE` represents whether to enable client transport security for gRPC requests.
+A scheme of https takes precedence over this configuration setting.
+Default: False
+"""
+
+OTEL_EXPORTER_OTLP_TRACES_INSECURE = "OTEL_EXPORTER_OTLP_TRACES_INSECURE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_INSECURE
+
+The :envvar:`OTEL_EXPORTER_OTLP_TRACES_INSECURE` represents whether to enable client transport security
+for gRPC requests for spans. A scheme of https takes precedence over this configuration setting.
+Default: False
+"""
+
+
+OTEL_EXPORTER_OTLP_TRACES_ENDPOINT = "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
+
+The :envvar:`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` target to which the span exporter is going to send spans.
+The endpoint MUST be a valid URL host, and MAY contain a scheme (http or https), port and path.
+A scheme of https indicates a secure connection and takes precedence over this configuration setting.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_ENDPOINT = "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` target to which the metrics exporter is going to send metrics.
+The endpoint MUST be a valid URL host, and MAY contain a scheme (http or https), port and path.
+A scheme of https indicates a secure connection and takes precedence over this configuration setting.
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_ENDPOINT = "OTEL_EXPORTER_OTLP_LOGS_ENDPOINT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
+
+The :envvar:`OTEL_EXPORTER_OTLP_LOGS_ENDPOINT` target to which the log exporter is going to send logs.
+The endpoint MUST be a valid URL host, and MAY contain a scheme (http or https), port and path.
+A scheme of https indicates a secure connection and takes precedence over this configuration setting.
+"""
+
+OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE = "OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE` stores the path to the certificate file for
+TLS credentials of gRPC client for traces. Should only be used for a secure connection for tracing.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE = (
+ "OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE` stores the path to the certificate file for
+TLS credentials of gRPC client for metrics. Should only be used for a secure connection for exporting metrics.
+"""
+
+OTEL_EXPORTER_OTLP_TRACES_HEADERS = "OTEL_EXPORTER_OTLP_TRACES_HEADERS"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_HEADERS
+
+The :envvar:`OTEL_EXPORTER_OTLP_TRACES_HEADERS` contains the key-value pairs to be used as headers for spans
+associated with gRPC or HTTP requests.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_HEADERS = "OTEL_EXPORTER_OTLP_METRICS_HEADERS"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_HEADERS
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_HEADERS` contains the key-value pairs to be used as headers for metrics
+associated with gRPC or HTTP requests.
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_HEADERS = "OTEL_EXPORTER_OTLP_LOGS_HEADERS"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_HEADERS
+
+The :envvar:`OTEL_EXPORTER_OTLP_LOGS_HEADERS` contains the key-value pairs to be used as headers for logs
+associated with gRPC or HTTP requests.
+"""
+
+OTEL_EXPORTER_OTLP_TRACES_COMPRESSION = "OTEL_EXPORTER_OTLP_TRACES_COMPRESSION"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_COMPRESSION
+
+Same as :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` but only for the span
+exporter. If both are present, this takes higher precedence.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_COMPRESSION = (
+ "OTEL_EXPORTER_OTLP_METRICS_COMPRESSION"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_COMPRESSION
+
+Same as :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` but only for the metric
+exporter. If both are present, this takes higher precedence.
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_COMPRESSION = "OTEL_EXPORTER_OTLP_LOGS_COMPRESSION"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_COMPRESSION
+
+Same as :envvar:`OTEL_EXPORTER_OTLP_COMPRESSION` but only for the log
+exporter. If both are present, this takes higher precedence.
+"""
+
+OTEL_EXPORTER_OTLP_TRACES_TIMEOUT = "OTEL_EXPORTER_OTLP_TRACES_TIMEOUT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_TRACES_TIMEOUT
+
+The :envvar:`OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` is the maximum time the OTLP exporter will
+wait for each batch export for spans.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_TIMEOUT = "OTEL_EXPORTER_OTLP_METRICS_TIMEOUT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_TIMEOUT
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` is the maximum time the OTLP exporter will
+wait for each batch export for metrics.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_INSECURE = "OTEL_EXPORTER_OTLP_METRICS_INSECURE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_INSECURE
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_INSECURE` represents whether to enable client transport security
+for gRPC requests for metrics. A scheme of https takes precedence over this configuration setting.
+Default: False
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_INSECURE = "OTEL_EXPORTER_OTLP_LOGS_INSECURE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_INSECURE
+
+The :envvar:`OTEL_EXPORTER_OTLP_LOGS_INSECURE` represents whether to enable client transport security
+for gRPC requests for logs. A scheme of https takes precedence over this configuration setting.
+Default: False
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_ENDPOINT = "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` target to which the metric exporter is going to send spans.
+The endpoint MUST be a valid URL host, and MAY contain a scheme (http or https), port and path.
+A scheme of https indicates a secure connection and takes precedence over this configuration setting.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE = (
+ "OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE` stores the path to the certificate file for
+TLS credentials of gRPC client for traces. Should only be used for a secure connection for tracing.
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE = "OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_OTLP_LOGS_CERTIFICATE` stores the path to the certificate file for
+TLS credentials of gRPC client for logs. Should only be used for a secure connection for exporting logs.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_HEADERS = "OTEL_EXPORTER_OTLP_METRICS_HEADERS"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_HEADERS
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_HEADERS` contains the key-value pairs to be used as headers for metrics
+associated with gRPC or HTTP requests.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_TIMEOUT = "OTEL_EXPORTER_OTLP_METRICS_TIMEOUT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_TIMEOUT
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` is the maximum time the OTLP exporter will
+wait for each batch export for metrics.
+"""
+
+OTEL_EXPORTER_OTLP_LOGS_TIMEOUT = "OTEL_EXPORTER_OTLP_LOGS_TIMEOUT"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_LOGS_TIMEOUT
+
+The :envvar:`OTEL_EXPORTER_OTLP_LOGS_TIMEOUT` is the maximum time the OTLP exporter will
+wait for each batch export for logs.
+"""
+
+OTEL_EXPORTER_JAEGER_CERTIFICATE = "OTEL_EXPORTER_JAEGER_CERTIFICATE"
+"""
+.. envvar:: OTEL_EXPORTER_JAEGER_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_JAEGER_CERTIFICATE` stores the path to the certificate file for
+TLS credentials of gRPC client for Jaeger. Should only be used for a secure connection with Jaeger.
+"""
+
+OTEL_EXPORTER_JAEGER_AGENT_SPLIT_OVERSIZED_BATCHES = (
+ "OTEL_EXPORTER_JAEGER_AGENT_SPLIT_OVERSIZED_BATCHES"
+)
+"""
+.. envvar:: OTEL_EXPORTER_JAEGER_AGENT_SPLIT_OVERSIZED_BATCHES
+
+The :envvar:`OTEL_EXPORTER_JAEGER_AGENT_SPLIT_OVERSIZED_BATCHES` is a boolean flag to determine whether
+to split a large span batch to respect the UDP packet size limit.
+"""
+
+OTEL_SERVICE_NAME = "OTEL_SERVICE_NAME"
+"""
+.. envvar:: OTEL_SERVICE_NAME
+
+Convenience environment variable for setting the service name resource attribute.
+The following two environment variables have the same effect:
+
+.. code-block:: console
+
+ OTEL_SERVICE_NAME=my-python-service
+
+ OTEL_RESOURCE_ATTRIBUTES=service.name=my-python-service
+
+
+If both are set, :envvar:`OTEL_SERVICE_NAME` takes precedence.
+"""
+
+
+_OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED = (
+ "OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED"
+)
+"""
+.. envvar:: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED
+
+The :envvar:`OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED` environment variable allows users to
+enable/disable the logging SDK auto instrumentation.
+Default: False
+
+Note: Logs SDK and its related settings are experimental.
+"""
+
+
+OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE = (
+ "OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` environment
+variable allows users to set the default aggregation temporality policy to use
+on the basis of instrument kind. The valid (case-insensitive) values are:
+
+``CUMULATIVE``: Use ``CUMULATIVE`` aggregation temporality for all instrument kinds.
+``DELTA``: Use ``DELTA`` aggregation temporality for ``Counter``, ``Asynchronous Counter`` and ``Histogram``.
+Use ``CUMULATIVE`` aggregation temporality for ``UpDownCounter`` and ``Asynchronous UpDownCounter``.
+``LOWMEMORY``: Use ``DELTA`` aggregation temporality for ``Counter`` and ``Histogram``.
+Use ``CUMULATIVE`` aggregation temporality for ``UpDownCounter``, ``AsynchronousCounter`` and ``Asynchronous UpDownCounter``.
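+
+For example, a hypothetical invocation that requests delta temporality where
+supported (``my_app.py`` is a placeholder for the application entry point):
+
+.. code-block:: console
+
+ $ OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=DELTA python my_app.py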
+"""
+
+OTEL_EXPORTER_JAEGER_GRPC_INSECURE = "OTEL_EXPORTER_JAEGER_GRPC_INSECURE"
+"""
+.. envvar:: OTEL_EXPORTER_JAEGER_GRPC_INSECURE
+
+The :envvar:`OTEL_EXPORTER_JAEGER_GRPC_INSECURE` is a boolean flag that should be set to True if the collector has no encryption or authentication.
+"""
+
+OTEL_METRIC_EXPORT_INTERVAL = "OTEL_METRIC_EXPORT_INTERVAL"
+"""
+.. envvar:: OTEL_METRIC_EXPORT_INTERVAL
+
+The :envvar:`OTEL_METRIC_EXPORT_INTERVAL` is the time interval (in milliseconds) between the start of two export attempts.
+"""
+
+OTEL_METRIC_EXPORT_TIMEOUT = "OTEL_METRIC_EXPORT_TIMEOUT"
+"""
+.. envvar:: OTEL_METRIC_EXPORT_TIMEOUT
+
+The :envvar:`OTEL_METRIC_EXPORT_TIMEOUT` is the maximum allowed time (in milliseconds) to export data.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY = "OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY"
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY` is the client's private key, in PEM format, to use in mTLS communication.
+"""
+
+OTEL_METRICS_EXEMPLAR_FILTER = "OTEL_METRICS_EXEMPLAR_FILTER"
+"""
+.. envvar:: OTEL_METRICS_EXEMPLAR_FILTER
+
+The :envvar:`OTEL_METRICS_EXEMPLAR_FILTER` is the filter for which measurements can become Exemplars.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION = (
+ "OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION` is the default aggregation to use for histogram instruments.
+"""
+
+OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE = (
+ "OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE"
+)
+"""
+.. envvar:: OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE
+
+The :envvar:`OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE` is the client certificate/chain trust for the client's private key, in PEM format, to use in mTLS communication.
+"""
+
+OTEL_EXPERIMENTAL_RESOURCE_DETECTORS = "OTEL_EXPERIMENTAL_RESOURCE_DETECTORS"
+"""
+.. envvar:: OTEL_EXPERIMENTAL_RESOURCE_DETECTORS
+
+The :envvar:`OTEL_EXPERIMENTAL_RESOURCE_DETECTORS` is a comma-separated string
+of names of resource detectors. These names must match the names of entry
+points registered under the ``opentelemetry_resource_detector`` entry point.
+This is an experimental feature and the name of this variable and its
+behavior can change in a non-backwards compatible way.
+"""
+
+OTEL_EXPORTER_PROMETHEUS_HOST = "OTEL_EXPORTER_PROMETHEUS_HOST"
+"""
+.. envvar:: OTEL_EXPORTER_PROMETHEUS_HOST
+
+The :envvar:`OTEL_EXPORTER_PROMETHEUS_HOST` environment variable configures the host used by
+the Prometheus exporter.
+Default: "localhost"
+
+This is an experimental environment variable and the name of this variable and its behavior can
+change in a non-backwards compatible way.
+"""
+
+OTEL_EXPORTER_PROMETHEUS_PORT = "OTEL_EXPORTER_PROMETHEUS_PORT"
+"""
+.. envvar:: OTEL_EXPORTER_PROMETHEUS_PORT
+
+The :envvar:`OTEL_EXPORTER_PROMETHEUS_PORT` environment variable configures the port used by
+the Prometheus exporter.
+Default: 9464
+
+This is an experimental environment variable and the name of this variable and its behavior can
+change in a non-backwards compatible way.
+"""
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/error_handler/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/error_handler/__init__.py
new file mode 100644
index 0000000000..7b21d92d2a
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/error_handler/__init__.py
@@ -0,0 +1,151 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Global Error Handler
+
+This module provides a global error handler and an interface that allows
+error handlers to be registered with the global error handler via entry points.
+A default error handler is also provided.
+
+To use this feature, create an error handler and register it under the
+``opentelemetry_error_handler`` entry point. The registered class must
+inherit from the ``opentelemetry.sdk.error_handler.ErrorHandler`` class and
+implement the corresponding ``_handle`` method, which will receive the
+exception object to be handled. The error handler class should also inherit
+from the exception classes it wants to handle. For example, this would be an
+error handler that handles ``ZeroDivisionError``:
+
+.. code:: python
+
+ from opentelemetry.sdk.error_handler import ErrorHandler
+ from logging import getLogger
+
+ logger = getLogger(__name__)
+
+
+ class ErrorHandler0(ErrorHandler, ZeroDivisionError):
+
+ def _handle(self, error: Exception, *args, **kwargs):
+
+ logger.exception("ErrorHandler0 handling a ZeroDivisionError")
+
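+The handler class must then be registered under the
+``opentelemetry_error_handler`` entry point. A minimal sketch of that
+registration in a ``setup.py`` (``mypackage.handlers`` is a hypothetical
+module path):
+
+.. code:: python
+
+ from setuptools import setup
+
+ setup(
+ # ...
+ entry_points={
+ "opentelemetry_error_handler": [
+ "error_handler_0 = mypackage.handlers:ErrorHandler0",
+ ]
+ },
+ )
+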
+To use the global error handler, just instantiate it as a context manager where
+you want exceptions to be handled:
+
+
+.. code:: python
+
+ from opentelemetry.sdk.error_handler import GlobalErrorHandler
+
+ with GlobalErrorHandler():
+ 1 / 0
+
+If the class of the exception raised in the scope of the ``GlobalErrorHandler``
+object is not a parent class of any registered error handler, the default
+error handler will handle the exception. The default error handler only logs
+the exception to standard logging; the exception is not raised any further.
+"""
+
+from abc import ABC, abstractmethod
+from logging import getLogger
+
+from opentelemetry.util._importlib_metadata import entry_points
+
+logger = getLogger(__name__)
+
+
+class ErrorHandler(ABC):
+ @abstractmethod
+ def _handle(self, error: Exception, *args, **kwargs):
+ """
+ Handle an exception
+ """
+
+
+class _DefaultErrorHandler(ErrorHandler):
+ """
+ Default error handler
+
+ This error handler just logs the exception using standard logging.
+ """
+
+ # pylint: disable=useless-return
+ def _handle(self, error: Exception, *args, **kwargs):
+
+ logger.exception("Error handled by default error handler: ")
+ return None
+
+
+class GlobalErrorHandler:
+ """
+ Global error handler
+
+ This is a singleton class that can be instantiated anywhere to get the
+ global error handler. Use it as a context manager: an exception raised
+ inside its scope is passed to the registered error handlers.
+ """
+
+ _instance = None
+
+ def __new__(cls) -> "GlobalErrorHandler":
+ if cls._instance is None:
+ cls._instance = super().__new__(cls)
+
+ return cls._instance
+
+ def __enter__(self):
+ pass
+
+ # pylint: disable=no-self-use
+ def __exit__(self, exc_type, exc_value, traceback):
+
+ if exc_value is None:
+
+ return None
+
+ plugin_handled = False
+
+ error_handler_entry_points = entry_points(
+ group="opentelemetry_error_handler"
+ )
+
+ for error_handler_entry_point in error_handler_entry_points:
+
+ error_handler_class = error_handler_entry_point.load()
+
+ if issubclass(error_handler_class, exc_value.__class__):
+
+ try:
+
+ error_handler_class()._handle(exc_value)
+ plugin_handled = True
+
+ # pylint: disable=broad-except
+ except Exception as error_handling_error:
+
+ logger.exception(
+ "%s error while handling error"
+ " %s by error handler %s",
+ error_handling_error.__class__.__name__,
+ exc_value.__class__.__name__,
+ error_handler_class.__name__,
+ )
+
+ if not plugin_handled:
+
+ _DefaultErrorHandler()._handle(exc_value)
+
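+ # returning True from __exit__ keeps the handled exception from
+ # propagating out of the with block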
+ return True
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py
new file mode 100644
index 0000000000..1ca14283cf
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/__init__.py
@@ -0,0 +1,37 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.sdk.metrics._internal import Meter, MeterProvider
+from opentelemetry.sdk.metrics._internal.exceptions import MetricsTimeoutError
+from opentelemetry.sdk.metrics._internal.instrument import (
+ Counter,
+ Histogram,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+
+__all__ = [
+ "Meter",
+ "MeterProvider",
+ "MetricsTimeoutError",
+ "Counter",
+ "Histogram",
+ "ObservableCounter",
+ "ObservableGauge",
+ "ObservableUpDownCounter",
+ "UpDownCounter",
+]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/__init__.py
new file mode 100644
index 0000000000..ffec748ccb
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/__init__.py
@@ -0,0 +1,498 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from atexit import register, unregister
+from logging import getLogger
+from threading import Lock
+from time import time_ns
+from typing import Optional, Sequence
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics
+from opentelemetry.metrics import Counter as APICounter
+from opentelemetry.metrics import Histogram as APIHistogram
+from opentelemetry.metrics import Meter as APIMeter
+from opentelemetry.metrics import MeterProvider as APIMeterProvider
+from opentelemetry.metrics import NoOpMeter
+from opentelemetry.metrics import ObservableCounter as APIObservableCounter
+from opentelemetry.metrics import ObservableGauge as APIObservableGauge
+from opentelemetry.metrics import (
+ ObservableUpDownCounter as APIObservableUpDownCounter,
+)
+from opentelemetry.metrics import UpDownCounter as APIUpDownCounter
+from opentelemetry.sdk.metrics._internal.exceptions import MetricsTimeoutError
+from opentelemetry.sdk.metrics._internal.instrument import (
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableGauge,
+ _ObservableUpDownCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.measurement_consumer import (
+ MeasurementConsumer,
+ SynchronousMeasurementConsumer,
+)
+from opentelemetry.sdk.metrics._internal.sdk_configuration import (
+ SdkConfiguration,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.util._once import Once
+
+_logger = getLogger(__name__)
+
+
+class Meter(APIMeter):
+ """See `opentelemetry.metrics.Meter`."""
+
+ def __init__(
+ self,
+ instrumentation_scope: InstrumentationScope,
+ measurement_consumer: MeasurementConsumer,
+ ):
+ super().__init__(
+ name=instrumentation_scope.name,
+ version=instrumentation_scope.version,
+ schema_url=instrumentation_scope.schema_url,
+ )
+ self._instrumentation_scope = instrumentation_scope
+ self._measurement_consumer = measurement_consumer
+ self._instrument_id_instrument = {}
+ self._instrument_id_instrument_lock = Lock()
+
+ def create_counter(self, name, unit="", description="") -> APICounter:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(name, _Counter, unit, description)
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APICounter.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _Counter(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ unit,
+ description,
+ )
+
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+ def create_up_down_counter(
+ self, name, unit="", description=""
+ ) -> APIUpDownCounter:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(
+ name, _UpDownCounter, unit, description
+ )
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APIUpDownCounter.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _UpDownCounter(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ unit,
+ description,
+ )
+
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+ def create_observable_counter(
+ self, name, callbacks=None, unit="", description=""
+ ) -> APIObservableCounter:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(
+ name, _ObservableCounter, unit, description
+ )
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APIObservableCounter.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _ObservableCounter(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ callbacks,
+ unit,
+ description,
+ )
+
+ self._measurement_consumer.register_asynchronous_instrument(instrument)
+
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+ def create_histogram(self, name, unit="", description="") -> APIHistogram:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(name, _Histogram, unit, description)
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APIHistogram.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _Histogram(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+ def create_observable_gauge(
+ self, name, callbacks=None, unit="", description=""
+ ) -> APIObservableGauge:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(
+ name, _ObservableGauge, unit, description
+ )
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APIObservableGauge.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _ObservableGauge(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ callbacks,
+ unit,
+ description,
+ )
+
+ self._measurement_consumer.register_asynchronous_instrument(instrument)
+
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+ def create_observable_up_down_counter(
+ self, name, callbacks=None, unit="", description=""
+ ) -> APIObservableUpDownCounter:
+
+ (
+ is_instrument_registered,
+ instrument_id,
+ ) = self._is_instrument_registered(
+ name, _ObservableUpDownCounter, unit, description
+ )
+
+ if is_instrument_registered:
+ # FIXME #2558 go through all views here and check if this
+ # instrument registration conflict can be fixed. If it can be, do
+ # not log the following warning.
+ _logger.warning(
+ "An instrument with name %s, type %s, unit %s and "
+ "description %s has been created already.",
+ name,
+ APIObservableUpDownCounter.__name__,
+ unit,
+ description,
+ )
+ with self._instrument_id_instrument_lock:
+ return self._instrument_id_instrument[instrument_id]
+
+ instrument = _ObservableUpDownCounter(
+ name,
+ self._instrumentation_scope,
+ self._measurement_consumer,
+ callbacks,
+ unit,
+ description,
+ )
+
+ self._measurement_consumer.register_asynchronous_instrument(instrument)
+
+ with self._instrument_id_instrument_lock:
+ self._instrument_id_instrument[instrument_id] = instrument
+ return instrument
+
+
+class MeterProvider(APIMeterProvider):
+ r"""See `opentelemetry.metrics.MeterProvider`.
+
+ Args:
+ metric_readers: Register metric readers to collect metrics from the SDK
+ on demand. Each :class:`opentelemetry.sdk.metrics.export.MetricReader` is
+ completely independent and will collect separate streams of
+ metrics. TODO: reference ``PeriodicExportingMetricReader`` usage with push
+ exporters here.
+ resource: The resource representing what the metrics emitted from the SDK pertain to.
+ shutdown_on_exit: If true, registers an `atexit` handler to call
+ `MeterProvider.shutdown`
+ views: The views to configure the metric output of the SDK
+
+ By default, instruments which do not match any :class:`opentelemetry.sdk.metrics.view.View` (or if no :class:`opentelemetry.sdk.metrics.view.View`\ s
+ are provided) will report metrics with the default aggregation for the
+ instrument's kind. To disable instruments by default, configure a match-all
+ :class:`opentelemetry.sdk.metrics.view.View` with `DropAggregation` and then create :class:`opentelemetry.sdk.metrics.view.View`\ s to re-enable
+ individual instruments:
+
+ .. code-block:: python
+ :caption: Disable default views
+
+ MeterProvider(
+ views=[
+ View(instrument_name="*", aggregation=DropAggregation()),
+ View(instrument_name="mycounter"),
+ ],
+ # ...
+ )
+ """
+
+ _all_metric_readers_lock = Lock()
+ _all_metric_readers = set()
+
+ def __init__(
+ self,
+ metric_readers: Sequence[
+ "opentelemetry.sdk.metrics.export.MetricReader"
+ ] = (),
+ resource: Resource = None,
+ shutdown_on_exit: bool = True,
+ views: Sequence["opentelemetry.sdk.metrics.view.View"] = (),
+ ):
+ self._lock = Lock()
+ self._meter_lock = Lock()
+ self._atexit_handler = None
+ if resource is None:
+ resource = Resource.create({})
+ self._sdk_config = SdkConfiguration(
+ resource=resource,
+ metric_readers=metric_readers,
+ views=views,
+ )
+ self._measurement_consumer = SynchronousMeasurementConsumer(
+ sdk_config=self._sdk_config
+ )
+
+ if shutdown_on_exit:
+ self._atexit_handler = register(self.shutdown)
+
+ self._meters = {}
+ self._shutdown_once = Once()
+ self._shutdown = False
+
+ for metric_reader in self._sdk_config.metric_readers:
+
+ with self._all_metric_readers_lock:
+ if metric_reader in self._all_metric_readers:
+ raise Exception(
+ f"MetricReader {metric_reader} has been registered "
+ "already in other MeterProvider instance"
+ )
+
+ self._all_metric_readers.add(metric_reader)
+
+ metric_reader._set_collect_callback(
+ self._measurement_consumer.collect
+ )
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ deadline_ns = time_ns() + timeout_millis * 10**6
+
+ metric_reader_error = {}
+
+ for metric_reader in self._sdk_config.metric_readers:
+ current_ts = time_ns()
+ try:
+ if current_ts >= deadline_ns:
+ raise MetricsTimeoutError(
+ "Timed out while flushing metric readers"
+ )
+ metric_reader.force_flush(
+ timeout_millis=(deadline_ns - current_ts) / 10**6
+ )
+
+ # pylint: disable=broad-except
+ except Exception as error:
+
+ metric_reader_error[metric_reader] = error
+
+ if metric_reader_error:
+
+ metric_reader_error_string = "\n".join(
+ [
+ f"{metric_reader.__class__.__name__}: {repr(error)}"
+ for metric_reader, error in metric_reader_error.items()
+ ]
+ )
+
+ raise Exception(
+ "MeterProvider.force_flush failed because the following "
+ "metric readers failed during collect:\n"
+ f"{metric_reader_error_string}"
+ )
+ return True
+
+ def shutdown(self, timeout_millis: float = 30_000):
+ deadline_ns = time_ns() + timeout_millis * 10**6
+
+ def _shutdown():
+ self._shutdown = True
+
+ did_shutdown = self._shutdown_once.do_once(_shutdown)
+
+ if not did_shutdown:
+ _logger.warning("shutdown can only be called once")
+ return
+
+ metric_reader_error = {}
+
+ for metric_reader in self._sdk_config.metric_readers:
+ current_ts = time_ns()
+ try:
+ if current_ts >= deadline_ns:
+ raise Exception(
+ "Didn't get to execute, deadline already exceeded"
+ )
+ metric_reader.shutdown(
+ timeout_millis=(deadline_ns - current_ts) / 10**6
+ )
+
+ # pylint: disable=broad-except
+ except Exception as error:
+
+ metric_reader_error[metric_reader] = error
+
+ if self._atexit_handler is not None:
+ unregister(self._atexit_handler)
+ self._atexit_handler = None
+
+ if metric_reader_error:
+
+ metric_reader_error_string = "\n".join(
+ [
+ f"{metric_reader.__class__.__name__}: {repr(error)}"
+ for metric_reader, error in metric_reader_error.items()
+ ]
+ )
+
+ raise Exception(
+ (
+ "MeterProvider.shutdown failed because the following "
+ "metric readers failed during shutdown:\n"
+ f"{metric_reader_error_string}"
+ )
+ )
+
+ def get_meter(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> Meter:
+
+ if self._shutdown:
+ _logger.warning(
+ "A shutdown `MeterProvider` can not provide a `Meter`"
+ )
+ return NoOpMeter(name, version=version, schema_url=schema_url)
+
+ if not name:
+ _logger.warning("Meter name cannot be None or empty.")
+ return NoOpMeter(name, version=version, schema_url=schema_url)
+
+ info = InstrumentationScope(name, version, schema_url)
+ with self._meter_lock:
+ if not self._meters.get(info):
+ # FIXME #2558 pass SDKConfig object to meter so that the meter
+ # has access to views.
+ self._meters[info] = Meter(
+ info,
+ self._measurement_consumer,
+ )
+ return self._meters[info]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/_view_instrument_match.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/_view_instrument_match.py
new file mode 100644
index 0000000000..7dd7f58f27
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/_view_instrument_match.py
@@ -0,0 +1,143 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from logging import getLogger
+from threading import Lock
+from time import time_ns
+from typing import Dict, List, Optional, Sequence
+
+from opentelemetry.metrics import Instrument
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ Aggregation,
+ DefaultAggregation,
+ _Aggregation,
+ _SumAggregation,
+)
+from opentelemetry.sdk.metrics._internal.export import AggregationTemporality
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.point import DataPointT
+from opentelemetry.sdk.metrics._internal.view import View
+
+_logger = getLogger(__name__)
+
+
+class _ViewInstrumentMatch:
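+ # Internal helper: pairs one view with one matching instrument and keeps
+ # one aggregation per distinct set of measurement attributes.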
+ def __init__(
+ self,
+ view: View,
+ instrument: Instrument,
+ instrument_class_aggregation: Dict[type, Aggregation],
+ ):
+ self._start_time_unix_nano = time_ns()
+ self._view = view
+ self._instrument = instrument
+ self._attributes_aggregation: Dict[frozenset, _Aggregation] = {}
+ self._lock = Lock()
+ self._instrument_class_aggregation = instrument_class_aggregation
+ self._name = self._view._name or self._instrument.name
+ self._description = (
+ self._view._description or self._instrument.description
+ )
+ if not isinstance(self._view._aggregation, DefaultAggregation):
+ self._aggregation = self._view._aggregation._create_aggregation(
+ self._instrument, None, 0
+ )
+ else:
+ self._aggregation = self._instrument_class_aggregation[
+ self._instrument.__class__
+ ]._create_aggregation(self._instrument, None, 0)
+
+ def conflicts(self, other: "_ViewInstrumentMatch") -> bool:
+ # pylint: disable=protected-access
+
+ result = (
+ self._name == other._name
+ and self._instrument.unit == other._instrument.unit
+ # The aggregation class is being used here instead of data point
+ # type since they are functionally equivalent.
+ and self._aggregation.__class__ == other._aggregation.__class__
+ )
+ if isinstance(self._aggregation, _SumAggregation):
+ result = (
+ result
+ and self._aggregation._instrument_is_monotonic
+ == other._aggregation._instrument_is_monotonic
+ and self._aggregation._instrument_aggregation_temporality
+ == other._aggregation._instrument_aggregation_temporality
+ )
+
+ return result
+
+ # pylint: disable=protected-access
+ def consume_measurement(self, measurement: Measurement) -> None:
+
+ if self._view._attribute_keys is not None:
+
+ attributes = {}
+
+ for key, value in (measurement.attributes or {}).items():
+ if key in self._view._attribute_keys:
+ attributes[key] = value
+ elif measurement.attributes is not None:
+ attributes = measurement.attributes
+ else:
+ attributes = {}
+
+ aggr_key = frozenset(attributes.items())
+
+ if aggr_key not in self._attributes_aggregation:
+ with self._lock:
+ if aggr_key not in self._attributes_aggregation:
+ if not isinstance(
+ self._view._aggregation, DefaultAggregation
+ ):
+ aggregation = (
+ self._view._aggregation._create_aggregation(
+ self._instrument,
+ attributes,
+ self._start_time_unix_nano,
+ )
+ )
+ else:
+ aggregation = self._instrument_class_aggregation[
+ self._instrument.__class__
+ ]._create_aggregation(
+ self._instrument,
+ attributes,
+ self._start_time_unix_nano,
+ )
+ self._attributes_aggregation[aggr_key] = aggregation
+
+ self._attributes_aggregation[aggr_key].aggregate(measurement)
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nanos: int,
+ ) -> Optional[Sequence[DataPointT]]:
+
+ data_points: List[DataPointT] = []
+ with self._lock:
+ for aggregation in self._attributes_aggregation.values():
+ data_point = aggregation.collect(
+ collection_aggregation_temporality, collection_start_nanos
+ )
+ if data_point is not None:
+ data_points.append(data_point)
+
+ # Return None here instead of an empty list because the caller does
+ # not consume empty sequences, and to be consistent with the other
+ # collect methods, which also return None.
+ return data_points or None
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/aggregation.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/aggregation.py
new file mode 100644
index 0000000000..62ba091ebf
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/aggregation.py
@@ -0,0 +1,1239 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+
+from abc import ABC, abstractmethod
+from bisect import bisect_left
+from enum import IntEnum
+from logging import getLogger
+from math import inf
+from threading import Lock
+from typing import Generic, List, Optional, Sequence, TypeVar
+
+from opentelemetry.metrics import (
+ Asynchronous,
+ Counter,
+ Histogram,
+ Instrument,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ Synchronous,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.buckets import (
+ Buckets,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.exponent_mapping import (
+ ExponentMapping,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.logarithm_mapping import (
+ LogarithmMapping,
+)
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.point import Buckets as BucketsPoint
+from opentelemetry.sdk.metrics._internal.point import (
+ ExponentialHistogramDataPoint,
+ Gauge,
+)
+from opentelemetry.sdk.metrics._internal.point import (
+ Histogram as HistogramPoint,
+)
+from opentelemetry.sdk.metrics._internal.point import (
+ HistogramDataPoint,
+ NumberDataPoint,
+ Sum,
+)
+from opentelemetry.util.types import Attributes
+
+_DataPointVarT = TypeVar("_DataPointVarT", NumberDataPoint, HistogramDataPoint)
+
+_logger = getLogger(__name__)
+
+
+class AggregationTemporality(IntEnum):
+ """
+ The temporality to use when aggregating data.
+
+ Can be one of the following values:
+ """
+
+ UNSPECIFIED = 0
+ DELTA = 1
+ CUMULATIVE = 2
+
+
+class _Aggregation(ABC, Generic[_DataPointVarT]):
+ def __init__(self, attributes: Attributes):
+ self._lock = Lock()
+ self._attributes = attributes
+ self._previous_point = None
+
+ @abstractmethod
+ def aggregate(self, measurement: Measurement) -> None:
+ pass
+
+ @abstractmethod
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[_DataPointVarT]:
+ pass
+
+
+class _DropAggregation(_Aggregation):
+ def aggregate(self, measurement: Measurement) -> None:
+ pass
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[_DataPointVarT]:
+ pass
+
+
+class _SumAggregation(_Aggregation[Sum]):
+ def __init__(
+ self,
+ attributes: Attributes,
+ instrument_is_monotonic: bool,
+ instrument_aggregation_temporality: AggregationTemporality,
+ start_time_unix_nano: int,
+ ):
+ super().__init__(attributes)
+
+ self._start_time_unix_nano = start_time_unix_nano
+ self._instrument_aggregation_temporality = (
+ instrument_aggregation_temporality
+ )
+ self._instrument_is_monotonic = instrument_is_monotonic
+
+ self._current_value = None
+
+ self._previous_collection_start_nano = self._start_time_unix_nano
+ self._previous_cumulative_value = 0
+
+ def aggregate(self, measurement: Measurement) -> None:
+ with self._lock:
+ if self._current_value is None:
+ self._current_value = 0
+
+ self._current_value = self._current_value + measurement.value
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[NumberDataPoint]:
+ """
+ Atomically return a point for the current value of the metric and
+ reset the aggregation value.
+
+ Synchronous instruments have a method which is called directly with
+ increments for a given quantity:
+
+ For example, an instrument that counts the number of passengers in
+ every vehicle that crosses a certain point in a highway:
+
+ synchronous_instrument.add(2)
+ collect(...) # 2 passengers are counted
+ synchronous_instrument.add(3)
+ collect(...) # 3 passengers are counted
+ synchronous_instrument.add(1)
+ collect(...) # 1 passenger is counted
+
+ In this case the instrument aggregation temporality is DELTA because
+ every value represents an increment to the count.
+
+ Asynchronous instruments have a callback which returns the total value
+ of a given quantity:
+
+ For example, an instrument that measures the number of bytes written
+ to a certain hard drive:
+
+ callback() -> 1352
+ collect(...) # 1352 bytes have been written so far
+ callback() -> 2324
+ collect(...) # 2324 bytes have been written so far
+ callback() -> 4542
+ collect(...) # 4542 bytes have been written so far
+
+ In this case the instrument aggregation temporality is CUMULATIVE
+ because every value represents the total of the measurement.
+
+ There is also the collection aggregation temporality, which is passed
+ to this method. The collection aggregation temporality defines the
+ nature of the returned value by this aggregation.
+
+ When the collection aggregation temporality matches the
+ instrument aggregation temporality, then this method returns the
+ current value directly:
+
+ synchronous_instrument.add(2)
+ collect(DELTA) -> 2
+ synchronous_instrument.add(3)
+ collect(DELTA) -> 3
+ synchronous_instrument.add(1)
+ collect(DELTA) -> 1
+
+ callback() -> 1352
+ collect(CUMULATIVE) -> 1352
+ callback() -> 2324
+ collect(CUMULATIVE) -> 2324
+ callback() -> 4542
+ collect(CUMULATIVE) -> 4542
+
+ When the collection aggregation temporality does not match the
+ instrument aggregation temporality, then a conversion is made. For this
+ purpose, this aggregation keeps a private attribute,
+ self._previous_cumulative_value.
+
+ When the instrument is synchronous:
+
+ self._previous_cumulative_value is the sum of every previously
+ collected (delta) value. In this case, the returned (cumulative) value
+ will be:
+
+ self._previous_cumulative_value + current_value
+
+ synchronous_instrument.add(2)
+ collect(CUMULATIVE) -> 2
+ synchronous_instrument.add(3)
+ collect(CUMULATIVE) -> 5
+ synchronous_instrument.add(1)
+ collect(CUMULATIVE) -> 6
+
+ Also, as a diagram:
+
+ time ->
+
+ self._previous_cumulative_value
+ |-------------|
+
+ current_value (delta)
+ |----|
+
+ returned value (cumulative)
+ |------------------|
+
+ When the instrument is asynchronous:
+
+ self._previous_cumulative_value is the value of the previously
+ collected (cumulative) value. In this case, the returned (delta) value
+ will be:
+
+ current_value - self._previous_cumulative_value
+
+ callback() -> 1352
+ collect(DELTA) -> 1352
+ callback() -> 2324
+ collect(DELTA) -> 972
+ callback() -> 4542
+ collect(DELTA) -> 2218
+
+ Also, as a diagram:
+
+ time ->
+
+ self._previous_cumulative_value
+ |-------------|
+
+ current_value (cumulative)
+ |------------------|
+
+ returned value (delta)
+ |----|
+ """
+
+ with self._lock:
+ current_value = self._current_value
+ self._current_value = None
+
+ if (
+ self._instrument_aggregation_temporality
+ is AggregationTemporality.DELTA
+ ):
+ # This happens when the corresponding instrument for this
+ # aggregation is synchronous.
+ if (
+ collection_aggregation_temporality
+ is AggregationTemporality.DELTA
+ ):
+
+ if current_value is None:
+ return None
+
+ previous_collection_start_nano = (
+ self._previous_collection_start_nano
+ )
+ self._previous_collection_start_nano = (
+ collection_start_nano
+ )
+
+ return NumberDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=previous_collection_start_nano,
+ time_unix_nano=collection_start_nano,
+ value=current_value,
+ )
+
+ if current_value is None:
+ current_value = 0
+
+ self._previous_cumulative_value = (
+ current_value + self._previous_cumulative_value
+ )
+
+ return NumberDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=self._start_time_unix_nano,
+ time_unix_nano=collection_start_nano,
+ value=self._previous_cumulative_value,
+ )
+
+ # This happens when the corresponding instrument for this
+ # aggregation is asynchronous.
+
+ if current_value is None:
+ # This happens when the corresponding instrument callback
+ # does not produce measurements.
+ return None
+
+ if (
+ collection_aggregation_temporality
+ is AggregationTemporality.DELTA
+ ):
+ result_value = current_value - self._previous_cumulative_value
+
+ self._previous_cumulative_value = current_value
+
+ previous_collection_start_nano = (
+ self._previous_collection_start_nano
+ )
+ self._previous_collection_start_nano = collection_start_nano
+
+ return NumberDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=previous_collection_start_nano,
+ time_unix_nano=collection_start_nano,
+ value=result_value,
+ )
+
+ return NumberDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=self._start_time_unix_nano,
+ time_unix_nano=collection_start_nano,
+ value=current_value,
+ )
+
+
+class _LastValueAggregation(_Aggregation[Gauge]):
+ def __init__(self, attributes: Attributes):
+ super().__init__(attributes)
+ self._value = None
+
+ def aggregate(self, measurement: Measurement):
+ with self._lock:
+ self._value = measurement.value
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[_DataPointVarT]:
+ """
+ Atomically return a point for the current value of the metric.
+ """
+ with self._lock:
+ if self._value is None:
+ return None
+ value = self._value
+ self._value = None
+
+ return NumberDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=0,
+ time_unix_nano=collection_start_nano,
+ value=value,
+ )
+
+
+class _ExplicitBucketHistogramAggregation(_Aggregation[HistogramPoint]):
+ def __init__(
+ self,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ boundaries: Sequence[float] = (
+ 0.0,
+ 5.0,
+ 10.0,
+ 25.0,
+ 50.0,
+ 75.0,
+ 100.0,
+ 250.0,
+ 500.0,
+ 750.0,
+ 1000.0,
+ 2500.0,
+ 5000.0,
+ 7500.0,
+ 10000.0,
+ ),
+ record_min_max: bool = True,
+ ):
+ super().__init__(attributes)
+ self._boundaries = tuple(boundaries)
+ self._bucket_counts = self._get_empty_bucket_counts()
+ self._min = inf
+ self._max = -inf
+ self._sum = 0
+ self._record_min_max = record_min_max
+ self._start_time_unix_nano = start_time_unix_nano
+ # It is assumed that the "natural" aggregation temporality for a
+ # Histogram instrument is DELTA, like the "natural" aggregation
+ # temporality for a Counter is DELTA and the "natural" aggregation
+ # temporality for an ObservableCounter is CUMULATIVE.
+ self._instrument_aggregation_temporality = AggregationTemporality.DELTA
+
+ def _get_empty_bucket_counts(self) -> List[int]:
+ return [0] * (len(self._boundaries) + 1)
+
+ def aggregate(self, measurement: Measurement) -> None:
+
+ value = measurement.value
+
+ if self._record_min_max:
+ self._min = min(self._min, value)
+ self._max = max(self._max, value)
+
+ self._sum += value
+
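+        # bisect_left returns the index of the first boundary that is >=
+        # value, which is exactly the bucket for value: with the default
+        # boundaries, 7.0 lands in bucket 2, the (5.0, 10.0] bucket, and any
+        # value above 10000.0 lands in the last (overflow) bucket.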
+ self._bucket_counts[bisect_left(self._boundaries, value)] += 1
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[_DataPointVarT]:
+ """
+ Atomically return a point for the current value of the metric.
+ """
+ with self._lock:
+ if not any(self._bucket_counts):
+ return None
+
+ bucket_counts = self._bucket_counts
+ start_time_unix_nano = self._start_time_unix_nano
+ sum_ = self._sum
+ max_ = self._max
+ min_ = self._min
+
+ self._bucket_counts = self._get_empty_bucket_counts()
+ self._start_time_unix_nano = collection_start_nano
+ self._sum = 0
+ self._min = inf
+ self._max = -inf
+
+ current_point = HistogramDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=start_time_unix_nano,
+ time_unix_nano=collection_start_nano,
+ count=sum(bucket_counts),
+ sum=sum_,
+ bucket_counts=tuple(bucket_counts),
+ explicit_bounds=self._boundaries,
+ min=min_,
+ max=max_,
+ )
+
+ if self._previous_point is None or (
+ self._instrument_aggregation_temporality
+ is collection_aggregation_temporality
+ ):
+ self._previous_point = current_point
+ return current_point
+
+ max_ = current_point.max
+ min_ = current_point.min
+
+ if (
+ collection_aggregation_temporality
+ is AggregationTemporality.CUMULATIVE
+ ):
+ start_time_unix_nano = self._previous_point.start_time_unix_nano
+ sum_ = current_point.sum + self._previous_point.sum
+ # Only update min/max on delta -> cumulative
+ max_ = max(current_point.max, self._previous_point.max)
+ min_ = min(current_point.min, self._previous_point.min)
+ bucket_counts = [
+ curr_count + prev_count
+ for curr_count, prev_count in zip(
+ current_point.bucket_counts,
+ self._previous_point.bucket_counts,
+ )
+ ]
+ else:
+ start_time_unix_nano = self._previous_point.time_unix_nano
+ sum_ = current_point.sum - self._previous_point.sum
+ bucket_counts = [
+ curr_count - prev_count
+ for curr_count, prev_count in zip(
+ current_point.bucket_counts,
+ self._previous_point.bucket_counts,
+ )
+ ]
+
+ current_point = HistogramDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=start_time_unix_nano,
+ time_unix_nano=current_point.time_unix_nano,
+ count=sum(bucket_counts),
+ sum=sum_,
+ bucket_counts=tuple(bucket_counts),
+ explicit_bounds=current_point.explicit_bounds,
+ min=min_,
+ max=max_,
+ )
+ self._previous_point = current_point
+ return current_point
+
+
+# pylint: disable=protected-access
+class _ExponentialBucketHistogramAggregation(_Aggregation[HistogramPoint]):
+ # _min_max_size and _max_max_size are the smallest and largest values
+ # the max_size parameter may have, respectively.
+
+    # _min_max_size is the smallest reasonable value, which is small enough
+    # to contain the entire normal floating point range at the minimum scale.
+ _min_max_size = 2
+
+    # _max_max_size is an arbitrary limit meant to prevent the accidental
+    # creation of giant exponential bucket histograms.
+ _max_max_size = 16384
+
+ def __init__(
+ self,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ # This is the default maximum number of buckets per positive or
+ # negative number range. The value 160 is specified by OpenTelemetry.
+ # See the derivation here:
+        # https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#exponential-bucket-histogram-aggregation
+ max_size: int = 160,
+ max_scale: int = 20,
+ ):
+ super().__init__(attributes)
+ # max_size is the maximum capacity of the positive and negative
+ # buckets.
+ if max_size < self._min_max_size:
+            raise ValueError(
+                f"Buckets max size {max_size} is smaller than "
+                f"minimum max size {self._min_max_size}"
+            )
+
+ if max_size > self._max_max_size:
+            raise ValueError(
+                f"Buckets max size {max_size} is larger than "
+                f"maximum max size {self._max_max_size}"
+            )
+
+ self._max_size = max_size
+ self._max_scale = max_scale
+
+ # _sum is the sum of all the values aggregated by this aggregator.
+ self._sum = 0
+
+ # _count is the count of all calls to aggregate.
+ self._count = 0
+
+ # _zero_count is the count of all the calls to aggregate when the value
+ # to be aggregated is exactly 0.
+ self._zero_count = 0
+
+ # _min is the smallest value aggregated by this aggregator.
+ self._min = inf
+
+        # _max is the largest value aggregated by this aggregator.
+ self._max = -inf
+
+ # _positive holds the positive values.
+ self._positive = Buckets()
+
+ # _negative holds the negative values by their absolute value.
+ self._negative = Buckets()
+
+        # _mapping corresponds to the current scale and is shared by both the
+        # positive and negative buckets.
+
+ if self._max_scale > 20:
+ _logger.warning(
+ "max_scale is set to %s which is "
+ "larger than the recommended value of 20",
+ self._max_scale,
+ )
+ self._mapping = LogarithmMapping(self._max_scale)
+
+ self._instrument_aggregation_temporality = AggregationTemporality.DELTA
+ self._start_time_unix_nano = start_time_unix_nano
+
+ self._previous_scale = None
+ self._previous_start_time_unix_nano = None
+ self._previous_sum = None
+ self._previous_max = None
+ self._previous_min = None
+ self._previous_positive = None
+ self._previous_negative = None
+
+ def aggregate(self, measurement: Measurement) -> None:
+ # pylint: disable=too-many-branches,too-many-statements, too-many-locals
+
+ with self._lock:
+
+ value = measurement.value
+
+ # 0. Set the following attributes:
+ # _min
+ # _max
+ # _count
+ # _zero_count
+ # _sum
+ if value < self._min:
+ self._min = value
+
+ if value > self._max:
+ self._max = value
+
+ self._count += 1
+
+ if value == 0:
+ self._zero_count += 1
+ # No need to do anything else if value is zero, just increment the
+ # zero count.
+ return
+
+ self._sum += value
+
+ # 1. Use the positive buckets for positive values and the negative
+ # buckets for negative values.
+ if value > 0:
+ buckets = self._positive
+
+ else:
+ # Both exponential and logarithm mappings use only positive values
+ # so the absolute value is used here.
+ value = -value
+ buckets = self._negative
+
+ # 2. Compute the index for the value at the current scale.
+ index = self._mapping.map_to_index(value)
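+            # For example, if the mapping had been downscaled to scale 0
+            # (base 2), a value of 9.0 would map to index 3, the bucket
+            # (2 ** 3, 2 ** 4].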
+
+ # IncrementIndexBy starts here
+
+ # 3. Determine if a change of scale is needed.
+ is_rescaling_needed = False
+ low, high = 0, 0
+
+ if len(buckets) == 0:
+ buckets.index_start = index
+ buckets.index_end = index
+ buckets.index_base = index
+
+ elif (
+ index < buckets.index_start
+ and (buckets.index_end - index) >= self._max_size
+ ):
+ is_rescaling_needed = True
+ low = index
+ high = buckets.index_end
+
+ elif (
+ index > buckets.index_end
+ and (index - buckets.index_start) >= self._max_size
+ ):
+ is_rescaling_needed = True
+ low = buckets.index_start
+ high = index
+
+ # 4. Rescale the mapping if needed.
+ if is_rescaling_needed:
+
+ self._downscale(
+ self._get_scale_change(low, high),
+ self._positive,
+ self._negative,
+ )
+
+ index = self._mapping.map_to_index(value)
+
+ # 5. If the index is outside
+ # [buckets.index_start, buckets.index_end] readjust the buckets
+ # boundaries or add more buckets.
+ if index < buckets.index_start:
+ span = buckets.index_end - index
+
+ if span >= len(buckets.counts):
+ buckets.grow(span + 1, self._max_size)
+
+ buckets.index_start = index
+
+ elif index > buckets.index_end:
+ span = index - buckets.index_start
+
+ if span >= len(buckets.counts):
+ buckets.grow(span + 1, self._max_size)
+
+ buckets.index_end = index
+
+ # 6. Compute the index of the bucket to be incremented.
+ bucket_index = index - buckets.index_base
+
+ if bucket_index < 0:
+ bucket_index += len(buckets.counts)
+
+ # 7. Increment the bucket.
+ buckets.increment_bucket(bucket_index)
+
+ def collect(
+ self,
+ collection_aggregation_temporality: AggregationTemporality,
+ collection_start_nano: int,
+ ) -> Optional[_DataPointVarT]:
+ """
+ Atomically return a point for the current value of the metric.
+ """
+ # pylint: disable=too-many-statements, too-many-locals
+
+ with self._lock:
+ if self._count == 0:
+ return None
+
+ current_negative = self._negative
+ current_positive = self._positive
+ current_zero_count = self._zero_count
+ current_count = self._count
+ current_start_time_unix_nano = self._start_time_unix_nano
+ current_sum = self._sum
+ current_max = self._max
+ if current_max == -inf:
+ current_max = None
+ current_min = self._min
+ if current_min == inf:
+ current_min = None
+
+ if self._count == self._zero_count:
+ current_scale = 0
+
+ else:
+ current_scale = self._mapping.scale
+
+ self._negative = Buckets()
+ self._positive = Buckets()
+ self._start_time_unix_nano = collection_start_nano
+ self._sum = 0
+ self._count = 0
+ self._zero_count = 0
+ self._min = inf
+ self._max = -inf
+
+ current_point = ExponentialHistogramDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=current_start_time_unix_nano,
+ time_unix_nano=collection_start_nano,
+ count=current_count,
+ sum=current_sum,
+ scale=current_scale,
+ zero_count=current_zero_count,
+ positive=BucketsPoint(
+ offset=current_positive.offset,
+ bucket_counts=current_positive.counts,
+ ),
+ negative=BucketsPoint(
+ offset=current_negative.offset,
+ bucket_counts=current_negative.counts,
+ ),
+ # FIXME: Find the right value for flags
+ flags=0,
+ min=current_min,
+ max=current_max,
+ )
+
+ if self._previous_scale is None or (
+ self._instrument_aggregation_temporality
+ is collection_aggregation_temporality
+ ):
+ self._previous_scale = current_scale
+ self._previous_start_time_unix_nano = (
+ current_start_time_unix_nano
+ )
+ self._previous_max = current_max
+ self._previous_min = current_min
+ self._previous_sum = current_sum
+ self._previous_positive = current_positive
+ self._previous_negative = current_negative
+
+ return current_point
+
+ min_scale = min(self._previous_scale, current_scale)
+
+ low_positive, high_positive = self._get_low_high_previous_current(
+ self._previous_positive, current_positive, min_scale
+ )
+ low_negative, high_negative = self._get_low_high_previous_current(
+ self._previous_negative, current_negative, min_scale
+ )
+
+ min_scale = min(
+ min_scale
+ - self._get_scale_change(low_positive, high_positive),
+ min_scale
+ - self._get_scale_change(low_negative, high_negative),
+ )
+
+ # FIXME Go implementation checks if the histogram (not the mapping
+ # but the histogram) has a count larger than zero, if not, scale
+ # (the histogram scale) would be zero. See exponential.go 191
+ self._downscale(
+ self._mapping.scale - min_scale,
+ self._previous_positive,
+ self._previous_negative,
+ )
+
+ if (
+ collection_aggregation_temporality
+ is AggregationTemporality.CUMULATIVE
+ ):
+
+ start_time_unix_nano = self._previous_start_time_unix_nano
+ sum_ = current_sum + self._previous_sum
+ # Only update min/max on delta -> cumulative
+ max_ = max(current_max, self._previous_max)
+ min_ = min(current_min, self._previous_min)
+
+ self._merge(
+ self._previous_positive,
+ current_positive,
+ current_scale,
+ min_scale,
+ collection_aggregation_temporality,
+ )
+ self._merge(
+ self._previous_negative,
+ current_negative,
+ current_scale,
+ min_scale,
+ collection_aggregation_temporality,
+ )
+
+ else:
+ start_time_unix_nano = self._previous_start_time_unix_nano
+ sum_ = current_sum - self._previous_sum
+ max_ = current_max
+ min_ = current_min
+
+ self._merge(
+ self._previous_positive,
+ current_positive,
+ current_scale,
+ min_scale,
+ collection_aggregation_temporality,
+ )
+ self._merge(
+ self._previous_negative,
+ current_negative,
+ current_scale,
+ min_scale,
+ collection_aggregation_temporality,
+ )
+
+ current_point = ExponentialHistogramDataPoint(
+ attributes=self._attributes,
+ start_time_unix_nano=start_time_unix_nano,
+ time_unix_nano=collection_start_nano,
+ count=current_count,
+ sum=sum_,
+ scale=current_scale,
+ zero_count=current_zero_count,
+ positive=BucketsPoint(
+ offset=current_positive.offset,
+ bucket_counts=current_positive.counts,
+ ),
+ negative=BucketsPoint(
+ offset=current_negative.offset,
+ bucket_counts=current_negative.counts,
+ ),
+ # FIXME: Find the right value for flags
+ flags=0,
+ min=min_,
+ max=max_,
+ )
+
+ self._previous_scale = current_scale
+ self._previous_positive = current_positive
+ self._previous_negative = current_negative
+ self._previous_start_time_unix_nano = current_start_time_unix_nano
+ self._previous_sum = current_sum
+
+ return current_point
+
+ def _get_low_high_previous_current(
+ self, previous_point_buckets, current_point_buckets, min_scale
+ ):
+
+ (previous_point_low, previous_point_high) = self._get_low_high(
+ previous_point_buckets, min_scale
+ )
+ (current_point_low, current_point_high) = self._get_low_high(
+ current_point_buckets, min_scale
+ )
+
+ if current_point_low > current_point_high:
+ low = previous_point_low
+ high = previous_point_high
+
+ elif previous_point_low > previous_point_high:
+ low = current_point_low
+ high = current_point_high
+
+ else:
+ low = min(previous_point_low, current_point_low)
+ high = max(previous_point_high, current_point_high)
+
+ return low, high
+
+ def _get_low_high(self, buckets, min_scale):
+ if buckets.counts == [0]:
+ return 0, -1
+
+ shift = self._mapping._scale - min_scale
+
+ return buckets.index_start >> shift, buckets.index_end >> shift
+
+ def _get_scale_change(self, low, high):
+
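+        # Halve the [low, high] index range until it fits within max_size,
+        # counting how many scale reductions that takes. For example, with
+        # max_size == 160, low == 0 and high == 1000, three halvings
+        # (1000 -> 500 -> 250 -> 125) are needed, so the change is 3.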
+ change = 0
+
+ while high - low >= self._max_size:
+ high = high >> 1
+ low = low >> 1
+
+ change += 1
+
+ return change
+
+ def _downscale(self, change: int, positive, negative):
+
+ if change == 0:
+ return
+
+ if change < 0:
+ raise Exception("Invalid change of scale")
+
+ new_scale = self._mapping.scale - change
+
+ positive.downscale(change)
+ negative.downscale(change)
+
+ if new_scale <= 0:
+ mapping = ExponentMapping(new_scale)
+ else:
+ mapping = LogarithmMapping(new_scale)
+
+ self._mapping = mapping
+
+ def _merge(
+ self,
+ previous_buckets,
+ current_buckets,
+ current_scale,
+ min_scale,
+ aggregation_temporality,
+ ):
+
+ current_change = current_scale - min_scale
+
+ for current_bucket_index, current_bucket in enumerate(
+ current_buckets.counts
+ ):
+
+ if current_bucket == 0:
+ continue
+
+ # Not considering the case where len(previous_buckets) == 0. This
+ # would not happen because self._previous_point is only assigned to
+ # an ExponentialHistogramDataPoint object if self._count != 0.
+
+ index = (
+ current_buckets.offset + current_bucket_index
+ ) >> current_change
+
+ if index < previous_buckets.index_start:
+ span = previous_buckets.index_end - index
+
+ if span >= self._max_size:
+ raise Exception("Incorrect merge scale")
+
+ if span >= len(previous_buckets.counts):
+ previous_buckets.grow(span + 1, self._max_size)
+
+ previous_buckets.index_start = index
+
+ if index > previous_buckets.index_end:
+ span = index - previous_buckets.index_end
+
+ if span >= self._max_size:
+ raise Exception("Incorrect merge scale")
+
+ if span >= len(previous_buckets.counts):
+ previous_buckets.grow(span + 1, self._max_size)
+
+ previous_buckets.index_end = index
+
+ bucket_index = index - previous_buckets.index_base
+
+ if bucket_index < 0:
+ bucket_index += len(previous_buckets.counts)
+
+ if aggregation_temporality is AggregationTemporality.DELTA:
+ current_bucket = -current_bucket
+
+ previous_buckets.increment_bucket(
+ bucket_index, increment=current_bucket
+ )
+
+
+class Aggregation(ABC):
+ """
+ Base class for all aggregation types.
+ """
+
+ @abstractmethod
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+ """Creates an aggregation"""
+
+
+class DefaultAggregation(Aggregation):
+ """
+ The default aggregation to be used in a `View`.
+
+ This aggregation will create an actual aggregation depending on the
+ instrument type, as specified next:
+
+ ==================================================== ====================================
+ Instrument Aggregation
+ ==================================================== ====================================
+ `opentelemetry.sdk.metrics.Counter` `SumAggregation`
+ `opentelemetry.sdk.metrics.UpDownCounter` `SumAggregation`
+ `opentelemetry.sdk.metrics.ObservableCounter` `SumAggregation`
+ `opentelemetry.sdk.metrics.ObservableUpDownCounter` `SumAggregation`
+ `opentelemetry.sdk.metrics.Histogram` `ExplicitBucketHistogramAggregation`
+ `opentelemetry.sdk.metrics.ObservableGauge` `LastValueAggregation`
+ ==================================================== ====================================
+ """
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+
+ # pylint: disable=too-many-return-statements
+ if isinstance(instrument, Counter):
+ return _SumAggregation(
+ attributes,
+ instrument_is_monotonic=True,
+ instrument_aggregation_temporality=(
+ AggregationTemporality.DELTA
+ ),
+ start_time_unix_nano=start_time_unix_nano,
+ )
+ if isinstance(instrument, UpDownCounter):
+ return _SumAggregation(
+ attributes,
+ instrument_is_monotonic=False,
+ instrument_aggregation_temporality=(
+ AggregationTemporality.DELTA
+ ),
+ start_time_unix_nano=start_time_unix_nano,
+ )
+
+ if isinstance(instrument, ObservableCounter):
+ return _SumAggregation(
+ attributes,
+ instrument_is_monotonic=True,
+ instrument_aggregation_temporality=(
+ AggregationTemporality.CUMULATIVE
+ ),
+ start_time_unix_nano=start_time_unix_nano,
+ )
+
+ if isinstance(instrument, ObservableUpDownCounter):
+ return _SumAggregation(
+ attributes,
+ instrument_is_monotonic=False,
+ instrument_aggregation_temporality=(
+ AggregationTemporality.CUMULATIVE
+ ),
+ start_time_unix_nano=start_time_unix_nano,
+ )
+
+ if isinstance(instrument, Histogram):
+ return _ExplicitBucketHistogramAggregation(
+ attributes, start_time_unix_nano
+ )
+
+ if isinstance(instrument, ObservableGauge):
+ return _LastValueAggregation(attributes)
+
+ raise Exception(f"Invalid instrument type {type(instrument)} found")
+
+
+class ExponentialBucketHistogramAggregation(Aggregation):
+ def __init__(
+ self,
+ max_size: int = 160,
+ max_scale: int = 20,
+ ):
+ self._max_size = max_size
+ self._max_scale = max_scale
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+ return _ExponentialBucketHistogramAggregation(
+ attributes,
+ start_time_unix_nano,
+ max_size=self._max_size,
+ max_scale=self._max_scale,
+ )
+
+
+class ExplicitBucketHistogramAggregation(Aggregation):
+ """This aggregation informs the SDK to collect:
+
+ - Count of Measurement values falling within explicit bucket boundaries.
+ - Arithmetic sum of Measurement values in population. This SHOULD NOT be collected when used with instruments that record negative measurements, e.g. UpDownCounter or ObservableGauge.
+ - Min (optional) Measurement value in population.
+ - Max (optional) Measurement value in population.
+
+
+ Args:
+ boundaries: Array of increasing values representing explicit bucket boundary values.
+ record_min_max: Whether to record min and max.
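+
+    Example (an illustrative sketch; the instrument name is arbitrary, and a
+    `View` is used to attach the aggregation to matching instruments)::
+
+        from opentelemetry.sdk.metrics.view import View
+
+        view = View(
+            instrument_name="http.server.duration",
+            aggregation=ExplicitBucketHistogramAggregation(
+                boundaries=(0.0, 10.0, 100.0, 1000.0)
+            ),
+        )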
+ """
+
+ def __init__(
+ self,
+ boundaries: Sequence[float] = (
+ 0.0,
+ 5.0,
+ 10.0,
+ 25.0,
+ 50.0,
+ 75.0,
+ 100.0,
+ 250.0,
+ 500.0,
+ 750.0,
+ 1000.0,
+ 2500.0,
+ 5000.0,
+ 7500.0,
+ 10000.0,
+ ),
+ record_min_max: bool = True,
+ ) -> None:
+ self._boundaries = boundaries
+ self._record_min_max = record_min_max
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+ return _ExplicitBucketHistogramAggregation(
+ attributes,
+ start_time_unix_nano,
+ self._boundaries,
+ self._record_min_max,
+ )
+
+
+class SumAggregation(Aggregation):
+ """This aggregation informs the SDK to collect:
+
+ - The arithmetic sum of Measurement values.
+ """
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+
+ temporality = AggregationTemporality.UNSPECIFIED
+ if isinstance(instrument, Synchronous):
+ temporality = AggregationTemporality.DELTA
+ elif isinstance(instrument, Asynchronous):
+ temporality = AggregationTemporality.CUMULATIVE
+
+ return _SumAggregation(
+ attributes,
+ isinstance(instrument, (Counter, ObservableCounter)),
+ temporality,
+ start_time_unix_nano,
+ )
+
+
+class LastValueAggregation(Aggregation):
+ """
+ This aggregation informs the SDK to collect:
+
+ - The last Measurement.
+ - The timestamp of the last Measurement.
+ """
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+ return _LastValueAggregation(attributes)
+
+
+class DropAggregation(Aggregation):
+ """Using this aggregation will make all measurements be ignored."""
+
+ def _create_aggregation(
+ self,
+ instrument: Instrument,
+ attributes: Attributes,
+ start_time_unix_nano: int,
+ ) -> _Aggregation:
+ return _DropAggregation(attributes)
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exceptions.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exceptions.py
new file mode 100644
index 0000000000..0f8c3a7552
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exceptions.py
@@ -0,0 +1,17 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class MetricsTimeoutError(Exception):
+ """Raised when a metrics function times out"""
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/buckets.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/buckets.py
new file mode 100644
index 0000000000..5c6b04bd39
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/buckets.py
@@ -0,0 +1,176 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import ceil, log2
+
+
+class Buckets:
+
+ # No method of this class is protected by locks because instances of this
+ # class are only used in methods that are protected by locks themselves.
+
+ def __init__(self):
+ self._counts = [0]
+
+ # The term index refers to the number of the exponential histogram bucket
+ # used to determine its boundaries. The lower boundary of a bucket is
+ # determined by base ** index and the upper boundary of a bucket is
+        # determined by base ** (index + 1). index values are signed to
+        # account for values less than or equal to 1.
+
+ # self._index_* will all have values equal to a certain index that is
+ # determined by the corresponding mapping _map_to_index function and
+ # the value of the index depends on the value passed to _map_to_index.
+
+ # Index of the 0th position in self._counts: self._counts[0] is the
+ # count in the bucket with index self.__index_base.
+ self.__index_base = 0
+
+ # self.__index_start is the smallest index value represented in
+ # self._counts.
+ self.__index_start = 0
+
+        # self.__index_end is the largest index value represented in
+        # self._counts.
+ self.__index_end = 0
+
+ @property
+ def index_start(self) -> int:
+ return self.__index_start
+
+ @index_start.setter
+ def index_start(self, value: int) -> None:
+ self.__index_start = value
+
+ @property
+ def index_end(self) -> int:
+ return self.__index_end
+
+ @index_end.setter
+ def index_end(self, value: int) -> None:
+ self.__index_end = value
+
+ @property
+ def index_base(self) -> int:
+ return self.__index_base
+
+ @index_base.setter
+ def index_base(self, value: int) -> None:
+ self.__index_base = value
+
+ @property
+ def counts(self):
+ return self._counts
+
+ def grow(self, needed: int, max_size: int) -> None:
+
+ size = len(self._counts)
+ bias = self.__index_base - self.__index_start
+ old_positive_limit = size - bias
+
+        # 2 ** ceil(log2(needed)) finds the smallest power of two that is
+        # larger than or equal to needed:
+ # 2 ** ceil(log2(1)) == 1
+ # 2 ** ceil(log2(2)) == 2
+ # 2 ** ceil(log2(3)) == 4
+ # 2 ** ceil(log2(4)) == 4
+ # 2 ** ceil(log2(5)) == 8
+ # 2 ** ceil(log2(6)) == 8
+ # 2 ** ceil(log2(7)) == 8
+ # 2 ** ceil(log2(8)) == 8
+ new_size = min(2 ** ceil(log2(needed)), max_size)
+
+ new_positive_limit = new_size - bias
+
+ tmp = [0] * new_size
+ tmp[new_positive_limit:] = self._counts[old_positive_limit:]
+ tmp[0:old_positive_limit] = self._counts[0:old_positive_limit]
+ self._counts = tmp
+
+ @property
+ def offset(self) -> int:
+ return self.__index_start
+
+ def __len__(self) -> int:
+ if len(self._counts) == 0:
+ return 0
+
+ if self.__index_end == self.__index_start and self[0] == 0:
+ return 0
+
+ return self.__index_end - self.__index_start + 1
+
+ def __getitem__(self, key: int) -> int:
+ bias = self.__index_base - self.__index_start
+
+ if key < bias:
+ key += len(self._counts)
+
+ key -= bias
+
+ return self._counts[key]
+
+ def downscale(self, amount: int) -> None:
+ """
+        Rotates the backing array, then collapses each group of 2 ** amount
+        adjacent buckets into one bucket.
+ """
+
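+        # For example, with amount == 1, a histogram whose buckets at indices
+        # [0, 1, 2, 3] hold counts [1, 1, 1, 1] becomes one whose buckets at
+        # indices [0, 1] hold counts [2, 2]: halving the scale squares the
+        # base, so each pair of adjacent buckets merges into one.
+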
+ bias = self.__index_base - self.__index_start
+
+ if bias != 0:
+
+ self.__index_base = self.__index_start
+
+ # [0, 1, 2, 3, 4] Original backing array
+
+ self._counts = self._counts[::-1]
+ # [4, 3, 2, 1, 0]
+
+ self._counts = (
+ self._counts[:bias][::-1] + self._counts[bias:][::-1]
+ )
+ # [3, 4, 0, 1, 2] This is a rotation of the backing array.
+
+ size = 1 + self.__index_end - self.__index_start
+ each = 1 << amount
+ inpos = 0
+ outpos = 0
+
+ pos = self.__index_start
+
+ while pos <= self.__index_end:
+ mod = pos % each
+ if mod < 0:
+ mod += each
+
+ index = mod
+
+ while index < each and inpos < size:
+
+ if outpos != inpos:
+ self._counts[outpos] += self._counts[inpos]
+ self._counts[inpos] = 0
+
+ inpos += 1
+ pos += 1
+ index += 1
+
+ outpos += 1
+
+ self.__index_start >>= amount
+ self.__index_end >>= amount
+ self.__index_base = self.__index_start
+
+ def increment_bucket(self, bucket_index: int, increment: int = 1) -> None:
+ self._counts[bucket_index] += increment
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/__init__.py
new file mode 100644
index 0000000000..d8c780cf40
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/__init__.py
@@ -0,0 +1,97 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from abc import ABC, abstractmethod
+
+
+class Mapping(ABC):
+ """
+    Parent class for `LogarithmMapping` and `ExponentMapping`.
+ """
+
+ # pylint: disable=no-member
+ def __new__(cls, scale: int):
+
+ with cls._mappings_lock:
+ # cls._mappings and cls._mappings_lock are implemented in each of
+ # the child classes as a dictionary and a lock, respectively. They
+ # are not instantiated here because that would lead to both child
+ # classes having the same instance of cls._mappings and
+ # cls._mappings_lock.
+ if scale not in cls._mappings:
+ cls._mappings[scale] = super().__new__(cls)
+ cls._mappings[scale]._init(scale)
+
+ return cls._mappings[scale]
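+        # As a consequence, each Mapping subclass behaves as a per-scale
+        # singleton: for example, LogarithmMapping(3) returns the same cached
+        # instance on every call, which is safe because a mapping is never
+        # mutated after _init runs.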
+
+ @abstractmethod
+ def _init(self, scale: int) -> None:
+ # pylint: disable=attribute-defined-outside-init
+
+ if scale > self._get_max_scale():
+ raise Exception(f"scale is larger than {self._max_scale}")
+
+ if scale < self._get_min_scale():
+ raise Exception(f"scale is smaller than {self._min_scale}")
+
+ # The size of the exponential histogram buckets is determined by a
+ # parameter known as scale, larger values of scale will produce smaller
+ # buckets. Bucket boundaries of the exponential histogram are located
+ # at integer powers of the base, where:
+ #
+ # base = 2 ** (2 ** (-scale))
+ # https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md#all-scales-use-the-logarithm-function
+ self._scale = scale
+
+ @abstractmethod
+ def _get_min_scale(self) -> int:
+ """
+ Return the smallest possible value for the mapping scale
+ """
+
+ @abstractmethod
+ def _get_max_scale(self) -> int:
+ """
+ Return the largest possible value for the mapping scale
+ """
+
+ @abstractmethod
+ def map_to_index(self, value: float) -> int:
+ """
+ Maps positive floating point values to indexes corresponding to
+ `Mapping.scale`. Implementations are not expected to handle zeros,
+ +inf, NaN, or negative values.
+ """
+
+ @abstractmethod
+ def get_lower_boundary(self, index: int) -> float:
+ """
+ Returns the lower boundary of a given bucket index. The index is
+ expected to map onto a range that is at least partially inside the
+ range of normal floating point values. If the corresponding
+ bucket's upper boundary is less than or equal to 2 ** -1022,
+ :class:`~opentelemetry.sdk.metrics.MappingUnderflowError`
+ will be raised. If the corresponding bucket's lower boundary is greater
+ than ``sys.float_info.max``,
+ :class:`~opentelemetry.sdk.metrics.MappingOverflowError`
+ will be raised.
+ """
+
+ @property
+ def scale(self) -> int:
+ """
+ Returns the parameter that controls the resolution of this mapping.
+        See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md#exponential-scale
+ """
+ return self._scale
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/errors.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/errors.py
new file mode 100644
index 0000000000..477ed6f0f5
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/errors.py
@@ -0,0 +1,26 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class MappingUnderflowError(Exception):
+ """
+ Raised when computing the lower boundary of an index that maps into a
+ denormal floating point value.
+ """
+
+
+class MappingOverflowError(Exception):
+ """
+ Raised when computing the lower boundary of an index that maps into +inf.
+ """
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/exponent_mapping.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/exponent_mapping.py
new file mode 100644
index 0000000000..297bb7a483
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/exponent_mapping.py
@@ -0,0 +1,141 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import ldexp
+from threading import Lock
+
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping import (
+ Mapping,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.errors import (
+ MappingOverflowError,
+ MappingUnderflowError,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.ieee_754 import (
+ MANTISSA_WIDTH,
+ MAX_NORMAL_EXPONENT,
+ MIN_NORMAL_EXPONENT,
+ MIN_NORMAL_VALUE,
+ get_ieee_754_exponent,
+ get_ieee_754_mantissa,
+)
+
+
+class ExponentMapping(Mapping):
+ # Reference implementation here:
+ # https://github.com/open-telemetry/opentelemetry-go/blob/0e6f9c29c10d6078e8131418e1d1d166c7195d61/sdk/metric/aggregator/exponential/mapping/exponent/exponent.go
+
+ _mappings = {}
+ _mappings_lock = Lock()
+
+ _min_scale = -10
+ _max_scale = 0
+
+ def _get_min_scale(self):
+ # _min_scale defines the point at which the exponential mapping
+ # function becomes useless for 64-bit floats. With scale -10, ignoring
+ # subnormal values, bucket indices range from -1 to 1.
+ return -10
+
+ def _get_max_scale(self):
+ # _max_scale is the largest scale supported by exponential mapping. Use
+ # a logarithm mapping for larger scales.
+ return 0
+
+ def _init(self, scale: int):
+ # pylint: disable=attribute-defined-outside-init
+
+ super()._init(scale)
+
+ # self._min_normal_lower_boundary_index is the largest index such that
+ # base ** index < MIN_NORMAL_VALUE and
+ # base ** (index + 1) >= MIN_NORMAL_VALUE. An exponential histogram
+ # bucket with this index covers the range
+        # (base ** index, base ** (index + 1)], including MIN_NORMAL_VALUE.
+        # This is the smallest valid index that contains at least one normal
+        # value.
+ index = MIN_NORMAL_EXPONENT >> -self._scale
+
+ if -self._scale < 2:
+            # For scales -1 and 0, the minimum normal value 2 ** -1022 is a
+ # power-of-two multiple, meaning base ** index == MIN_NORMAL_VALUE.
+ # Subtracting 1 so that base ** (index + 1) == MIN_NORMAL_VALUE.
+ index -= 1
+
+ self._min_normal_lower_boundary_index = index
+
+ # self._max_normal_lower_boundary_index is the index such that
+ # base**index equals the greatest representable lower boundary. An
+ # exponential histogram bucket with this index covers the range
+ # ((2 ** 1024) / base, 2 ** 1024], which includes opentelemetry.sdk.
+ # metrics._internal.exponential_histogram.ieee_754.MAX_NORMAL_VALUE.
+ # This bucket is incomplete, since the upper boundary cannot be
+ # represented. One greater than this index corresponds with the bucket
+ # containing values > 2 ** 1024.
+ self._max_normal_lower_boundary_index = (
+ MAX_NORMAL_EXPONENT >> -self._scale
+ )
+
+ def map_to_index(self, value: float) -> int:
+ if value < MIN_NORMAL_VALUE:
+ return self._min_normal_lower_boundary_index
+
+ exponent = get_ieee_754_exponent(value)
+
+ # Positive integers are represented in binary as having an infinite
+ # amount of leading zeroes, for example 2 is represented as ...00010.
+
+ # A negative integer -x is represented in binary as the complement of
+ # (x - 1). For example, -4 is represented as the complement of 4 - 1
+        # == 3. 3 is represented as ...00011. Its complement is ...11100, the
+ # binary representation of -4.
+
+ # get_ieee_754_mantissa(value) gets the positive integer made up
+ # from the rightmost MANTISSA_WIDTH bits (the mantissa) of the IEEE
+ # 754 representation of value. If value is an exact power of 2, all
+ # these MANTISSA_WIDTH bits would be all zeroes, and when 1 is
+ # subtracted the resulting value is -1. The binary representation of
+ # -1 is ...111, so when these bits are right shifted MANTISSA_WIDTH
+        # places, the resulting value for correction is -1. If value is not
+        # an exact power of 2, at least one of the rightmost MANTISSA_WIDTH
+        # bits would be 1 (even for values whose decimal part is 0, like 5.0,
+        # since the IEEE 754 representation of such a number is also the
+        # product of a power of 2 (defined in the exponent part of the IEEE
+        # 754 representation) and the value defined in the mantissa). Having
+        # at least one of the rightmost MANTISSA_WIDTH bits set to 1 means
+        # that get_ieee_754_mantissa(value) will always be greater than or
+        # equal to 1, and when 1 is subtracted, the result will be greater
+        # than or equal to 0, whose binary representation will have at most
+        # MANTISSA_WIDTH ones preceded by an infinite amount of leading
+        # zeroes. When those MANTISSA_WIDTH bits are shifted to the right
+        # MANTISSA_WIDTH places, the resulting value will be 0.
+
+ # In summary, correction will be -1 if value is a power of 2, 0 if not.
+
+ # FIXME Document why we can assume value will not be 0, inf, or NaN.
+ correction = (get_ieee_754_mantissa(value) - 1) >> MANTISSA_WIDTH
+
+ return (exponent + correction) >> -self._scale
+
+ def get_lower_boundary(self, index: int) -> float:
+ if index < self._min_normal_lower_boundary_index:
+ raise MappingUnderflowError()
+
+ if index > self._max_normal_lower_boundary_index:
+ raise MappingOverflowError()
+
+ return ldexp(1, index << -self._scale)
+
+ @property
+ def scale(self) -> int:
+ return self._scale
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.md b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.md
new file mode 100644
index 0000000000..ba9601bdf9
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.md
@@ -0,0 +1,175 @@
+# IEEE 754 Explained
+
+IEEE 754 is a standard that defines a way to represent certain mathematical
+objects using binary numbers.
+
+## Binary Number Fields
+
+The binary numbers used in IEEE 754 can have different lengths, the length that
+is interesting for the purposes of this project is 64 bits. These binary
+numbers are made up of 3 contiguous fields of bits, from left to right:
+
+1. 1 sign bit
+2. 11 exponent bits
+3. 52 mantissa bits
+
+Depending on the values these fields have, the represented mathematical object
+can be one of:
+
+* Floating point number
+* Zero
+* NaN
+* Infinite
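+
+These fields can be inspected directly. The following snippet is a small
+illustration (not part of the SDK) that uses Python's standard `struct`
+module to split a 64-bit float into the three fields:
+
+```python
+from struct import pack, unpack
+
+
+def float_fields(value: float) -> tuple:
+    # Reinterpret the 8 bytes of a double as an unsigned 64-bit integer.
+    (bits,) = unpack("<Q", pack("<d", value))
+    as_binary = f"{bits:064b}"
+    # Split into the 1 sign bit, 11 exponent bits and 52 mantissa bits.
+    return as_binary[0], as_binary[1:12], as_binary[12:]
+
+
+print(float_fields(-2.0))
+# ('1', '10000000000', '0000000000000000000000000000000000000000000000000000')
+```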
+
+## Floating Point Numbers
+
+IEEE 754 represents a floating point number $f$ using an exponential
+notation with 4 components: $sign$, $mantissa$, $base$ and $exponent$:
+
+$$f = sign \times mantissa \times base ^ {exponent}$$
+
+There are two possible representations of floating point numbers:
+_normal_ and _denormal_, which have different valid values for
+their $mantissa$ and $exponent$ fields.
+
+### Binary Representation
+
+$sign$, $mantissa$, and $exponent$ are represented in binary, the
+representation of each component has certain details explained next.
+
+$base$ is always $2$ and it is not represented in binary.
+
+#### Sign
+
+$sign$ can have 2 values:
+
+1. $1$ if the `sign` bit is `0`
+2. $-1$ if the `sign` bit is `1`.
+
+#### Mantissa
+
+##### Normal Floating Point Numbers
+
+$mantissa$ is a positive fractional number whose integer part is $1$, for example
+$1.2345 \dots$. The `mantissa` bits represent only the fractional part and the
+$mantissa$ value can be calculated as:
+
+$$mantissa = 1 + \sum_{i=1}^{52} b_{i} \times 2^{-i} = 1 + \frac{b_{1}}{2^{1}} + \frac{b_{2}}{2^{2}} + \dots + \frac{b_{51}}{2^{51}} + \frac{b_{52}}{2^{52}}$$
+
+Where $b_{i}$ is:
+
+1. $0$ if the bit at the position `i - 1` is `0`.
+2. $1$ if the bit at the position `i - 1` is `1`.
+
+##### Denormal Floating Point Numbers
+
+$mantissa$ is a positive fractional number whose integer part is $0$, for example
+$0.12345 \dots$. The `mantissa` bits represent only the fractional part and the
+$mantissa$ value can be calculated as:
+
+$$mantissa = \sum_{i=1}^{52} b_{i} \times 2^{-i} = \frac{b_{1}}{2^{1}} + \frac{b_{2}}{2^{2}} + \dots + \frac{b_{51}}{2^{51}} + \frac{b_{52}}{2^{52}}$$
+
+Where $b_{i}$ is:
+
+1. $0$ if the bit at the position `i - 1` is `0`.
+2. $1$ if the bit at the position `i - 1` is `1`.
+
+#### Exponent
+
+##### Normal Floating Point Numbers
+
+Only the following bit sequences are allowed: `00000000001` to `11111111110`.
+That is, there must be at least one `0` and one `1` in the exponent bits.
+
+The actual value of the $exponent$ can be calculated as:
+
+$$exponent = v - bias$$
+
+where $v$ is the value of the binary number in the exponent bits and $bias$ is $1023$.
+Considering the restrictions above, the respective minimum and maximum values for the
+exponent are:
+
+1. `00000000001` = $1$, $1 - 1023 = -1022$
+2. `11111111110` = $2046$, $2046 - 1023 = 1023$
+
+So, $exponent$ is an integer in the range $\left[-1022, 1023\right]$.
+
+
+##### Denormal Floating Point Numbers
+
+$exponent$ is always $-1022$. Nevertheless, it is always represented as `00000000000`.
+
+### Normal and Denormal Floating Point Numbers
+
+The smallest absolute value a normal floating point number can have is calculated
+like this:
+
+$$1 \times 1.0\dots0 \times 2^{-1022} = 2.2250738585072014 \times 10^{-308}$$
+
+Since normal floating point numbers always have a $1$ as the integer part of the
+$mantissa$, smaller values can be achieved by using the smallest possible exponent
+( $-1022$ ) and a $0$ in the integer part of the $mantissa$, but significant digits are lost.
+
+The smallest absolute value a denormal floating point number can have is calculated
+like this:
+
+$$1 \times 2^{-52} \times 2^{-1022} = 5 \times 10^{-324}$$
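+
+Both limits are easy to check in Python (an illustrative aside, not part of
+the SDK):
+
+```python
+import sys
+
+# Smallest positive normal double: 1.0 * (2 ** -1022)
+print(sys.float_info.min)  # 2.2250738585072014e-308
+
+# Smallest positive denormal double: (2 ** -52) * (2 ** -1022) == 2 ** -1074
+print(2 ** -52 * 2 ** -1022)  # 5e-324
+```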
+
+## Zero
+
+Zero is represented like this:
+
+* Sign bit: `X`
+* Exponent bits: `00000000000`
+* Mantissa bits: `0000000000000000000000000000000000000000000000000000`
+
+where `X` means `0` or `1`.
+
+## NaN
+
+There are 2 kinds of NaNs that are represented:
+
+1. QNaNs (Quiet NaNs): represent the result of indeterminate operations.
+2. SNaNs (Signalling NaNs): represent the result of invalid operations.
+
+### QNaNs
+
+QNaNs are represented like this:
+
+* Sign bit: `X`
+* Exponent bits: `11111111111`
+* Mantissa bits: `1XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
+
+where `X` means `0` or `1`.
+
+### SNaNs
+
+SNaNs are represented like this:
+
+* Sign bit: `X`
+* Exponent bits: `11111111111`
+* Mantissa bits: `0XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1`
+
+where `X` means `0` or `1`.
+
+## Infinite
+
+### Positive Infinite
+
+Positive infinite is represented like this:
+
+* Sign bit: `0`
+* Exponent bits: `11111111111`
+* Mantissa bits: `0000000000000000000000000000000000000000000000000000`
+
+
+### Negative Infinite
+
+Negative infinite is represented like this:
+
+* Sign bit: `1`
+* Exponent bits: `11111111111`
+* Mantissa bits: `0000000000000000000000000000000000000000000000000000`
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.py
new file mode 100644
index 0000000000..9503b57c0e
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/ieee_754.py
@@ -0,0 +1,118 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ctypes import c_double, c_uint64
+from sys import float_info
+
+# IEEE 754 64-bit floating point numbers use 11 bits for the exponent and 52
+# bits for the mantissa.
+MANTISSA_WIDTH = 52
+EXPONENT_WIDTH = 11
+
+# This mask is equivalent to 52 "1" bits (there are 13 hexadecimal 4-bit "f"s
+# in the mantissa mask, 13 * 4 == 52) or 0xfffffffffffff in hexadecimal.
+MANTISSA_MASK = (1 << MANTISSA_WIDTH) - 1
+
+# There are 11 bits for the exponent, but the exponent values 0 (11 "0"
+# bits) and 2047 (11 "1" bits) have special meanings so the exponent range is
+# from 1 to 2046. To calculate the exponent value, 1023 (the bias) is
+# subtracted from the exponent, so the exponent value range is from -1022 to
+# +1023.
+EXPONENT_BIAS = (2 ** (EXPONENT_WIDTH - 1)) - 1
+
+# All the exponent mask bits are set to 1 for the 11 exponent bits.
+EXPONENT_MASK = ((1 << EXPONENT_WIDTH) - 1) << MANTISSA_WIDTH
+
+# The sign mask has the first bit set to 1 and the rest to 0.
+SIGN_MASK = 1 << (EXPONENT_WIDTH + MANTISSA_WIDTH)
+
+# For normal floating point numbers, the exponent can have a value in the
+# range [-1022, 1023].
+MIN_NORMAL_EXPONENT = -EXPONENT_BIAS + 1
+MAX_NORMAL_EXPONENT = EXPONENT_BIAS
+
+# The smallest possible normal value is 2.2250738585072014e-308.
+# This value is the result of using the smallest possible number in the
+# mantissa, 1.0000000000000000000000000000000000000000000000000000 (52 "0"s in
+# the fractional part) and a single "1" in the exponent.
+# Finally 1 * (2 ** -1022) = 2.2250738585072014e-308.
+MIN_NORMAL_VALUE = float_info.min
+
+# Greatest possible normal value (1.7976931348623157e+308)
+# The binary representation of a float in scientific notation uses (for the
+# mantissa) one bit for the integer part (which is implicit) and 52 bits for
+# the fractional part. Consider a float binary 1.111. It is equal to 1 + 1/2 +
+# 1/4 + 1/8. The greatest possible value in the 52-bit binary mantissa would be
+# then 1.1111111111111111111111111111111111111111111111111111 (52 "1"s in the
+# fractional part) whose decimal value is 1.9999999999999998. Finally,
+# 1.9999999999999998 * (2 ** 1023) = 1.7976931348623157e+308.
+MAX_NORMAL_VALUE = float_info.max
+
+
+def get_ieee_754_exponent(value: float) -> int:
+ """
+ Gets the exponent of the IEEE 754 representation of a float.
+ """
+
+ return (
+ (
+ # This step gives the integer that corresponds to the IEEE 754
+            # representation of a float. Consider -MAX_NORMAL_VALUE as an
+            # example. We choose this value because its binary representation
+            # makes the subsequent operations easy to understand.
+ #
+ # c_uint64.from_buffer(c_double(-MAX_NORMAL_VALUE)).value == 18442240474082181119
+ # bin(18442240474082181119) == '0b1111111111101111111111111111111111111111111111111111111111111111'
+ #
+ # The first bit of the previous binary number is the sign bit: 1 (1 means negative, 0 means positive)
+ # The next 11 bits are the exponent bits: 11111111110
+ # The next 52 bits are the mantissa bits: 1111111111111111111111111111111111111111111111111111
+ #
+ # This step isolates the exponent bits, turning every bit outside
+ # of the exponent field (sign and mantissa bits) to 0.
+ c_uint64.from_buffer(c_double(value)).value
+ & EXPONENT_MASK
+ # For the example this means:
+ # 18442240474082181119 & EXPONENT_MASK == 9214364837600034816
+ # bin(9214364837600034816) == '0b111111111100000000000000000000000000000000000000000000000000000'
+ # Notice that the previous binary representation does not include
+ # leading zeroes, so the sign bit is not included since it is a
+ # zero.
+ )
+ # This step moves the exponent bits to the right, removing the
+ # mantissa bits that were set to 0 by the previous step. This
+ # leaves the IEEE 754 exponent value, ready for the next step.
+ >> MANTISSA_WIDTH
+ # For the example this means:
+ # 9214364837600034816 >> MANTISSA_WIDTH == 2046
+ # bin(2046) == '0b11111111110'
+ # As shown above, these are the original 11 bits that correspond to the
+ # exponent.
+ # This step subtracts the exponent bias from the IEEE 754 value,
+ # leaving the actual exponent value.
+ ) - EXPONENT_BIAS
+ # For the example this means:
+ # 2046 - EXPONENT_BIAS == 1023
+    # As mentioned in a comment above, the largest value for the exponent is
+    # 1023, which is exactly the exponent of -MAX_NORMAL_VALUE.
+
+
+def get_ieee_754_mantissa(value: float) -> int:
+ return (
+ c_uint64.from_buffer(c_double(value)).value
+ # This step isolates the mantissa bits. There is no need to do any
+ # bit shifting as the mantissa bits are already the rightmost field
+ # in an IEEE 754 representation.
+ & MANTISSA_MASK
+ )
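+
+
+# A quick sanity check of both helpers (illustrative only, not part of the
+# module API):
+#
+# >>> get_ieee_754_exponent(2.0)
+# 1
+# >>> get_ieee_754_mantissa(2.0)
+# 0
+# >>> get_ieee_754_exponent(1.5)
+# 0
+# >>> get_ieee_754_mantissa(1.5) == 1 << (MANTISSA_WIDTH - 1)
+# True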
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/logarithm_mapping.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/logarithm_mapping.py
new file mode 100644
index 0000000000..5abf9238b9
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/exponential_histogram/mapping/logarithm_mapping.py
@@ -0,0 +1,139 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import exp, floor, ldexp, log
+from threading import Lock
+
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping import (
+ Mapping,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.errors import (
+ MappingOverflowError,
+ MappingUnderflowError,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.ieee_754 import (
+ MAX_NORMAL_EXPONENT,
+ MIN_NORMAL_EXPONENT,
+ MIN_NORMAL_VALUE,
+ get_ieee_754_exponent,
+ get_ieee_754_mantissa,
+)
+
+
+class LogarithmMapping(Mapping):
+ # Reference implementation here:
+ # https://github.com/open-telemetry/opentelemetry-go/blob/0e6f9c29c10d6078e8131418e1d1d166c7195d61/sdk/metric/aggregator/exponential/mapping/logarithm/logarithm.go
+
+ _mappings = {}
+ _mappings_lock = Lock()
+
+ _min_scale = 1
+ _max_scale = 20
+
+ def _get_min_scale(self):
+ # _min_scale ensures that ExponentMapping is used for zero and negative
+ # scale values.
+ return self._min_scale
+
+ def _get_max_scale(self):
+ # FIXME The Go implementation uses a value of 20 here, find out the
+ # right value for this implementation, more information here:
+ # https://github.com/lightstep/otel-launcher-go/blob/c9ca8483be067a39ab306b09060446e7fda65f35/lightstep/sdk/metric/aggregator/histogram/structure/README.md#mapping-function
+ # https://github.com/open-telemetry/opentelemetry-go/blob/0e6f9c29c10d6078e8131418e1d1d166c7195d61/sdk/metric/aggregator/exponential/mapping/logarithm/logarithm.go#L32-L45
+ return self._max_scale
+
+ def _init(self, scale: int):
+ # pylint: disable=attribute-defined-outside-init
+
+ super()._init(scale)
+
+ # self._scale_factor is defined as a multiplier because multiplication
+ # is faster than division. self._scale_factor is defined as:
+ # index = log(value) * self._scale_factor
+ # Where:
+ # index = log(value) / log(base)
+ # index = log(value) / log(2 ** (2 ** -scale))
+ # index = log(value) / ((2 ** -scale) * log(2))
+ # index = log(value) * ((1 / log(2)) * (2 ** scale))
+        # self._scale_factor = (1 / log(2)) * (2 ** scale)
+ # self._scale_factor = ldexp(1 / log(2), scale)
+ # This implementation was copied from a Java prototype. See:
+ # https://github.com/newrelic-experimental/newrelic-sketch-java/blob/1ce245713603d61ba3a4510f6df930a5479cd3f6/src/main/java/com/newrelic/nrsketch/indexer/LogIndexer.java
+ # for the equations used here.
+ self._scale_factor = ldexp(1 / log(2), scale)
+
+ # self._min_normal_lower_boundary_index is the index such that
+ # base ** index == MIN_NORMAL_VALUE. An exponential histogram bucket
+ # with this index covers the range
+ # (MIN_NORMAL_VALUE, MIN_NORMAL_VALUE * base]. One less than this index
+ # corresponds with the bucket containing values <= MIN_NORMAL_VALUE.
+ self._min_normal_lower_boundary_index = (
+ MIN_NORMAL_EXPONENT << self._scale
+ )
+
+ # self._max_normal_lower_boundary_index is the index such that
+ # base ** index equals the greatest representable lower boundary. An
+ # exponential histogram bucket with this index covers the range
+ # ((2 ** 1024) / base, 2 ** 1024], which includes opentelemetry.sdk.
+ # metrics._internal.exponential_histogram.ieee_754.MAX_NORMAL_VALUE.
+ # This bucket is incomplete, since the upper boundary cannot be
+ # represented. One greater than this index corresponds with the bucket
+ # containing values > 2 ** 1024.
+ self._max_normal_lower_boundary_index = (
+ (MAX_NORMAL_EXPONENT + 1) << self._scale
+ ) - 1
+
+ def map_to_index(self, value: float) -> int:
+ """
+ Maps positive floating point values to indexes corresponding to scale.
+ """
+
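+        # For example, at scale 1 the base is 2 ** (2 ** -1) == sqrt(2). The
+        # value 2.0 is an exact power of two whose IEEE 754 exponent is 1, so
+        # this method returns (1 << 1) - 1 == 1: the bucket (sqrt(2), 2.0],
+        # whose inclusive upper boundary is exactly 2.0.
+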
+ # value is subnormal
+ if value <= MIN_NORMAL_VALUE:
+ return self._min_normal_lower_boundary_index - 1
+
+ # value is an exact power of two.
+ if get_ieee_754_mantissa(value) == 0:
+ exponent = get_ieee_754_exponent(value)
+ return (exponent << self._scale) - 1
+
+ return min(
+ floor(log(value) * self._scale_factor),
+ self._max_normal_lower_boundary_index,
+ )
+
+ def get_lower_boundary(self, index: int) -> float:
+
+ if index >= self._max_normal_lower_boundary_index:
+ if index == self._max_normal_lower_boundary_index:
+ return 2 * exp(
+ (index - (1 << self._scale)) / self._scale_factor
+ )
+ raise MappingOverflowError()
+
+ if index <= self._min_normal_lower_boundary_index:
+ if index == self._min_normal_lower_boundary_index:
+ return MIN_NORMAL_VALUE
+ if index == self._min_normal_lower_boundary_index - 1:
+ return (
+ exp((index + (1 << self._scale)) / self._scale_factor) / 2
+ )
+ raise MappingUnderflowError()
+
+ return exp(index / self._scale_factor)
+
+ @property
+ def scale(self) -> int:
+ return self._scale
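+
+# Worked example (a sketch, assuming the scale is passed to the mapping
+# constructor as elsewhere in this package): at scale 3 the base is
+# 2 ** (2 ** -3), so each power of two is subdivided into 8 buckets. Buckets
+# are lower-exclusive, so an exact power of two lands in the bucket below it:
+#
+#     mapping = LogarithmMapping(3)
+#     mapping.map_to_index(2.0)      # == (1 << 3) - 1 == 7
+#     mapping.get_lower_boundary(8)  # ~= 2.0, since base ** 8 == 2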
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/export/__init__.py
new file mode 100644
index 0000000000..0568270ae6
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/export/__init__.py
@@ -0,0 +1,554 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+import os
+from abc import ABC, abstractmethod
+from enum import Enum
+from logging import getLogger
+from os import environ, linesep
+from sys import stdout
+from threading import Event, Lock, RLock, Thread
+from time import time_ns
+from typing import IO, Callable, Dict, Iterable, Optional
+
+from typing_extensions import final
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics._internal
+from opentelemetry.context import (
+ _SUPPRESS_INSTRUMENTATION_KEY,
+ attach,
+ detach,
+ set_value,
+)
+from opentelemetry.sdk.environment_variables import (
+ OTEL_METRIC_EXPORT_INTERVAL,
+ OTEL_METRIC_EXPORT_TIMEOUT,
+)
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ AggregationTemporality,
+ DefaultAggregation,
+)
+from opentelemetry.sdk.metrics._internal.exceptions import MetricsTimeoutError
+from opentelemetry.sdk.metrics._internal.instrument import (
+ Counter,
+ Histogram,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableGauge,
+ _ObservableUpDownCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.point import MetricsData
+from opentelemetry.util._once import Once
+
+_logger = getLogger(__name__)
+
+
+class MetricExportResult(Enum):
+ """Result of exporting a metric
+
+ Can be any of the following values:"""
+
+ SUCCESS = 0
+ FAILURE = 1
+
+
+class MetricExporter(ABC):
+ """Interface for exporting metrics.
+
+ Interface to be implemented by services that want to export metrics received
+ in their own format.
+
+ Args:
+ preferred_temporality: Used by `opentelemetry.sdk.metrics.export.PeriodicExportingMetricReader` to
+ configure exporter level preferred temporality. See `opentelemetry.sdk.metrics.export.MetricReader` for
+ more details on what preferred temporality is.
+ preferred_aggregation: Used by `opentelemetry.sdk.metrics.export.PeriodicExportingMetricReader` to
+ configure exporter level preferred aggregation. See `opentelemetry.sdk.metrics.export.MetricReader` for
+ more details on what preferred aggregation is.
+ """
+
+ def __init__(
+ self,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[
+ type, "opentelemetry.sdk.metrics.view.Aggregation"
+ ] = None,
+ ) -> None:
+ self._preferred_temporality = preferred_temporality
+ self._preferred_aggregation = preferred_aggregation
+
+ @abstractmethod
+ def export(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ """Exports a batch of telemetry data.
+
+ Args:
+ metrics_data: The `opentelemetry.sdk.metrics.export.MetricsData` to be exported
+
+ Returns:
+ The result of the export
+ """
+
+ @abstractmethod
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ """
+ Ensure that the export of any metrics currently received by the
+ exporter is completed as soon as possible.
+ """
+
+ @abstractmethod
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ """Shuts down the exporter.
+
+ Called when the SDK is shut down.
+ """
+
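+# A minimal concrete exporter might look like the following sketch
+# (illustration only, not part of this module); it buffers exported batches
+# in a list:
+#
+#     class ListExporter(MetricExporter):
+#         def __init__(self):
+#             super().__init__()
+#             self.batches = []
+#
+#         def export(self, metrics_data, timeout_millis=10_000, **kwargs):
+#             self.batches.append(metrics_data)
+#             return MetricExportResult.SUCCESS
+#
+#         def force_flush(self, timeout_millis=10_000):
+#             return True
+#
+#         def shutdown(self, timeout_millis=30_000, **kwargs):
+#             pass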
+
+class ConsoleMetricExporter(MetricExporter):
+ """Implementation of :class:`MetricExporter` that prints metrics to the
+ console.
+
+ This class can be used for diagnostic purposes. It prints the exported
+ metrics to the console STDOUT.
+ """
+
+ def __init__(
+ self,
+ out: IO = stdout,
+ formatter: Callable[
+ ["opentelemetry.sdk.metrics.export.MetricsData"], str
+ ] = lambda metrics_data: metrics_data.to_json()
+ + linesep,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[
+ type, "opentelemetry.sdk.metrics.view.Aggregation"
+ ] = None,
+ ):
+ super().__init__(
+ preferred_temporality=preferred_temporality,
+ preferred_aggregation=preferred_aggregation,
+ )
+ self.out = out
+ self.formatter = formatter
+
+ def export(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ self.out.write(self.formatter(metrics_data))
+ self.out.flush()
+ return MetricExportResult.SUCCESS
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ return True
+
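+# Typical wiring (a sketch; MeterProvider and PeriodicExportingMetricReader
+# are defined elsewhere in this SDK):
+#
+#     exporter = ConsoleMetricExporter()
+#     reader = PeriodicExportingMetricReader(exporter, export_interval_millis=5000)
+#     provider = MeterProvider(metric_readers=[reader])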
+
+class MetricReader(ABC):
+ # pylint: disable=too-many-branches
+ """
+ Base class for all metric readers
+
+ Args:
+ preferred_temporality: A mapping between instrument classes and
+ aggregation temporality. By default uses CUMULATIVE for all instrument
+ classes. This mapping defines the default aggregation temporality of
+ every instrument class. To change the default for certain instrument
+ classes, pass a dictionary containing only those classes and their
+ desired aggregation temporalities; the classes not included in the
+ dictionary retain their default aggregation temporalities.
+ preferred_aggregation: A mapping between instrument classes and
+ aggregation instances. By default maps all instrument classes to an
+ instance of `DefaultAggregation`. This mapping defines the default
+ aggregation of every instrument class. To change the default for
+ certain instrument classes, pass a dictionary containing only those
+ classes and their desired aggregations; the classes not included in
+ the dictionary retain their default aggregations. The aggregation
+ defined here will be overridden by an aggregation defined by a view
+ that is not `DefaultAggregation`.
+
+ .. document protected _receive_metrics which is intended to be overridden by subclasses
+ .. automethod:: _receive_metrics
+ """
+
+ def __init__(
+ self,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[
+ type, "opentelemetry.sdk.metrics.view.Aggregation"
+ ] = None,
+ ) -> None:
+ self._collect: Callable[
+ [
+ "opentelemetry.sdk.metrics.export.MetricReader",
+ AggregationTemporality,
+ ],
+ Iterable["opentelemetry.sdk.metrics.export.Metric"],
+ ] = None
+
+ self._instrument_class_temporality = {
+ _Counter: AggregationTemporality.CUMULATIVE,
+ _UpDownCounter: AggregationTemporality.CUMULATIVE,
+ _Histogram: AggregationTemporality.CUMULATIVE,
+ _ObservableCounter: AggregationTemporality.CUMULATIVE,
+ _ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
+ _ObservableGauge: AggregationTemporality.CUMULATIVE,
+ }
+
+ if preferred_temporality is not None:
+ for temporality in preferred_temporality.values():
+ if temporality not in (
+ AggregationTemporality.CUMULATIVE,
+ AggregationTemporality.DELTA,
+ ):
+ raise Exception(
+ f"Invalid temporality value found {temporality}"
+ )
+
+ if preferred_temporality is not None:
+ for typ, temporality in preferred_temporality.items():
+ if typ is Counter:
+ self._instrument_class_temporality[_Counter] = temporality
+ elif typ is UpDownCounter:
+ self._instrument_class_temporality[
+ _UpDownCounter
+ ] = temporality
+ elif typ is Histogram:
+ self._instrument_class_temporality[
+ _Histogram
+ ] = temporality
+ elif typ is ObservableCounter:
+ self._instrument_class_temporality[
+ _ObservableCounter
+ ] = temporality
+ elif typ is ObservableUpDownCounter:
+ self._instrument_class_temporality[
+ _ObservableUpDownCounter
+ ] = temporality
+ elif typ is ObservableGauge:
+ self._instrument_class_temporality[
+ _ObservableGauge
+ ] = temporality
+ else:
+ raise Exception(f"Invalid instrument class found {typ}")
+
+ self._preferred_temporality = preferred_temporality
+ self._instrument_class_aggregation = {
+ _Counter: DefaultAggregation(),
+ _UpDownCounter: DefaultAggregation(),
+ _Histogram: DefaultAggregation(),
+ _ObservableCounter: DefaultAggregation(),
+ _ObservableUpDownCounter: DefaultAggregation(),
+ _ObservableGauge: DefaultAggregation(),
+ }
+
+ if preferred_aggregation is not None:
+ for typ, aggregation in preferred_aggregation.items():
+ if typ is Counter:
+ self._instrument_class_aggregation[_Counter] = aggregation
+ elif typ is UpDownCounter:
+ self._instrument_class_aggregation[
+ _UpDownCounter
+ ] = aggregation
+ elif typ is Histogram:
+ self._instrument_class_aggregation[
+ _Histogram
+ ] = aggregation
+ elif typ is ObservableCounter:
+ self._instrument_class_aggregation[
+ _ObservableCounter
+ ] = aggregation
+ elif typ is ObservableUpDownCounter:
+ self._instrument_class_aggregation[
+ _ObservableUpDownCounter
+ ] = aggregation
+ elif typ is ObservableGauge:
+ self._instrument_class_aggregation[
+ _ObservableGauge
+ ] = aggregation
+ else:
+ raise Exception(f"Invalid instrument class found {typ}")
+
+ @final
+ def collect(self, timeout_millis: float = 10_000) -> None:
+ """Collects the metrics from the internal SDK state and
+ invokes the `_receive_metrics` with the collection.
+
+ Args:
+ timeout_millis: Amount of time in milliseconds before this function
+ raises a timeout error.
+
+ If any of the underlying ``collect`` methods called by this method
+ fails for any reason (including timeout), an exception will be raised
+ detailing the individual errors that caused this function to fail.
+ """
+ if self._collect is None:
+ _logger.warning(
+ "Cannot call collect on a MetricReader until it is registered on a MeterProvider"
+ )
+ return
+
+ metrics = self._collect(self, timeout_millis=timeout_millis)
+
+ if metrics is not None:
+
+ self._receive_metrics(
+ metrics,
+ timeout_millis=timeout_millis,
+ )
+
+ @final
+ def _set_collect_callback(
+ self,
+ func: Callable[
+ [
+ "opentelemetry.sdk.metrics.export.MetricReader",
+ AggregationTemporality,
+ ],
+ Iterable["opentelemetry.sdk.metrics.export.Metric"],
+ ],
+ ) -> None:
+ """This function is internal to the SDK. It should not be called or overridden by users"""
+ self._collect = func
+
+ @abstractmethod
+ def _receive_metrics(
+ self,
+ metrics_data: "opentelemetry.sdk.metrics.export.MetricsData",
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ """Called by `MetricReader.collect` when it receives a batch of metrics"""
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ self.collect(timeout_millis=timeout_millis)
+ return True
+
+ @abstractmethod
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ """Shuts down the MetricReader. This method provides a way
+ for the MetricReader to do any cleanup required. A metric reader can
+ only be shut down once; any subsequent calls are ignored and return a
+ failure status.
+
+ When a `MetricReader` is registered on a
+ :class:`~opentelemetry.sdk.metrics.MeterProvider`,
+ :meth:`~opentelemetry.sdk.metrics.MeterProvider.shutdown` will invoke this
+ automatically.
+ """
+
+
+class InMemoryMetricReader(MetricReader):
+ """Implementation of `MetricReader` that returns its metrics from :func:`get_metrics_data`.
+
+ This is useful for e.g. unit tests.
+ """
+
+ def __init__(
+ self,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[
+ type, "opentelemetry.sdk.metrics.view.Aggregation"
+ ] = None,
+ ) -> None:
+ super().__init__(
+ preferred_temporality=preferred_temporality,
+ preferred_aggregation=preferred_aggregation,
+ )
+ self._lock = RLock()
+ self._metrics_data: (
+ "opentelemetry.sdk.metrics.export.MetricsData"
+ ) = None
+
+ def get_metrics_data(
+ self,
+ ) -> ("opentelemetry.sdk.metrics.export.MetricsData"):
+ """Reads and returns current metrics from the SDK"""
+ with self._lock:
+ self.collect()
+ metrics_data = self._metrics_data
+ self._metrics_data = None
+ return metrics_data
+
+ def _receive_metrics(
+ self,
+ metrics_data: "opentelemetry.sdk.metrics.export.MetricsData",
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ with self._lock:
+ self._metrics_data = metrics_data
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
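+# Intended use in a test (a sketch; MeterProvider comes from
+# opentelemetry.sdk.metrics):
+#
+#     reader = InMemoryMetricReader()
+#     provider = MeterProvider(metric_readers=[reader])
+#     provider.get_meter("test").create_counter("c").add(1)
+#     metrics_data = reader.get_metrics_data()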
+
+class PeriodicExportingMetricReader(MetricReader):
+ """`PeriodicExportingMetricReader` is an implementation of `MetricReader`
+ that collects metrics based on a user-configurable time interval, and passes the
+ metrics to the configured exporter. If the time interval is set to `math.inf`, the
+ reader will not invoke periodic collection.
+
+ The configured exporter's :py:meth:`~MetricExporter.export` method will not be called
+ concurrently.
+ """
+
+ def __init__(
+ self,
+ exporter: MetricExporter,
+ export_interval_millis: Optional[float] = None,
+ export_timeout_millis: Optional[float] = None,
+ ) -> None:
+ # PeriodicExportingMetricReader defers to exporter for configuration
+ super().__init__(
+ preferred_temporality=exporter._preferred_temporality,
+ preferred_aggregation=exporter._preferred_aggregation,
+ )
+
+ # This lock is held whenever calling self._exporter.export() to prevent concurrent
+ # execution of MetricExporter.export()
+ # https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#exportbatch
+ self._export_lock = Lock()
+
+ self._exporter = exporter
+ if export_interval_millis is None:
+ try:
+ export_interval_millis = float(
+ environ.get(OTEL_METRIC_EXPORT_INTERVAL, 60000)
+ )
+ except ValueError:
+ _logger.warning(
+ "Found invalid value for export interval, using default"
+ )
+ export_interval_millis = 60000
+ if export_timeout_millis is None:
+ try:
+ export_timeout_millis = float(
+ environ.get(OTEL_METRIC_EXPORT_TIMEOUT, 30000)
+ )
+ except ValueError:
+ _logger.warning(
+ "Found invalid value for export timeout, using default"
+ )
+ export_timeout_millis = 30000
+ self._export_interval_millis = export_interval_millis
+ self._export_timeout_millis = export_timeout_millis
+ self._shutdown = False
+ self._shutdown_event = Event()
+ self._shutdown_once = Once()
+ self._daemon_thread = None
+ if (
+ self._export_interval_millis > 0
+ and self._export_interval_millis < math.inf
+ ):
+ self._daemon_thread = Thread(
+ name="OtelPeriodicExportingMetricReader",
+ target=self._ticker,
+ daemon=True,
+ )
+ self._daemon_thread.start()
+ if hasattr(os, "register_at_fork"):
+ os.register_at_fork(
+ after_in_child=self._at_fork_reinit
+ ) # pylint: disable=protected-access
+ elif self._export_interval_millis <= 0:
+ raise ValueError(
+ f"interval value {self._export_interval_millis} is invalid "
+ "and needs to be larger than zero."
+ )
+
+ def _at_fork_reinit(self):
+ self._daemon_thread = Thread(
+ name="OtelPeriodicExportingMetricReader",
+ target=self._ticker,
+ daemon=True,
+ )
+ self._daemon_thread.start()
+
+ def _ticker(self) -> None:
+ interval_secs = self._export_interval_millis / 1e3
+ while not self._shutdown_event.wait(interval_secs):
+ try:
+ self.collect(timeout_millis=self._export_timeout_millis)
+ except MetricsTimeoutError:
+ _logger.warning(
+ "Metric collection timed out. Will try again after %s seconds",
+ interval_secs,
+ exc_info=True,
+ )
+ # one last collection below before shutting down completely
+ self.collect(timeout_millis=self._export_timeout_millis)
+
+ def _receive_metrics(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+
+ token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
+ try:
+ with self._export_lock:
+ self._exporter.export(
+ metrics_data, timeout_millis=timeout_millis
+ )
+ except Exception as e: # pylint: disable=broad-except,invalid-name
+ _logger.exception("Exception while exporting metrics %s", str(e))
+ detach(token)
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ deadline_ns = time_ns() + timeout_millis * 10**6
+
+ def _shutdown():
+ self._shutdown = True
+
+ did_set = self._shutdown_once.do_once(_shutdown)
+ if not did_set:
+ _logger.warning("Can't shutdown multiple times")
+ return
+
+ self._shutdown_event.set()
+ if self._daemon_thread:
+ self._daemon_thread.join(
+ timeout=(deadline_ns - time_ns()) / 10**9
+ )
+ self._exporter.shutdown(timeout_millis=(deadline_ns - time_ns()) / 10**6)
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ super().force_flush(timeout_millis=timeout_millis)
+ self._exporter.force_flush(timeout_millis=timeout_millis)
+ return True
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/instrument.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/instrument.py
new file mode 100644
index 0000000000..6c0320c479
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/instrument.py
@@ -0,0 +1,246 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-ancestors, unused-import
+
+from logging import getLogger
+from typing import Dict, Generator, Iterable, List, Optional, Union
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics
+from opentelemetry.metrics import CallbackT
+from opentelemetry.metrics import Counter as APICounter
+from opentelemetry.metrics import Histogram as APIHistogram
+from opentelemetry.metrics import ObservableCounter as APIObservableCounter
+from opentelemetry.metrics import ObservableGauge as APIObservableGauge
+from opentelemetry.metrics import (
+ ObservableUpDownCounter as APIObservableUpDownCounter,
+)
+from opentelemetry.metrics import UpDownCounter as APIUpDownCounter
+from opentelemetry.metrics._internal.instrument import CallbackOptions
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+
+_logger = getLogger(__name__)
+
+
+_ERROR_MESSAGE = (
+ "Expected ASCII string of maximum length 63 characters but got {}"
+)
+
+
+class _Synchronous:
+ def __init__(
+ self,
+ name: str,
+ instrumentation_scope: InstrumentationScope,
+ measurement_consumer: "opentelemetry.sdk.metrics.MeasurementConsumer",
+ unit: str = "",
+ description: str = "",
+ ):
+ # pylint: disable=no-member
+ result = self._check_name_unit_description(name, unit, description)
+
+ if result["name"] is None:
+ raise Exception(_ERROR_MESSAGE.format(name))
+
+ if result["unit"] is None:
+ raise Exception(_ERROR_MESSAGE.format(unit))
+
+ name = result["name"]
+ unit = result["unit"]
+ description = result["description"]
+
+ self.name = name.lower()
+ self.unit = unit
+ self.description = description
+ self.instrumentation_scope = instrumentation_scope
+ self._measurement_consumer = measurement_consumer
+ super().__init__(name, unit=unit, description=description)
+
+
+class _Asynchronous:
+ def __init__(
+ self,
+ name: str,
+ instrumentation_scope: InstrumentationScope,
+ measurement_consumer: "opentelemetry.sdk.metrics.MeasurementConsumer",
+ callbacks: Optional[Iterable[CallbackT]] = None,
+ unit: str = "",
+ description: str = "",
+ ):
+ # pylint: disable=no-member
+ result = self._check_name_unit_description(name, unit, description)
+
+ if result["name"] is None:
+ raise Exception(_ERROR_MESSAGE.format(name))
+
+ if result["unit"] is None:
+ raise Exception(_ERROR_MESSAGE.format(unit))
+
+ name = result["name"]
+ unit = result["unit"]
+ description = result["description"]
+
+ self.name = name.lower()
+ self.unit = unit
+ self.description = description
+ self.instrumentation_scope = instrumentation_scope
+ self._measurement_consumer = measurement_consumer
+ super().__init__(name, callbacks, unit=unit, description=description)
+
+ self._callbacks: List[CallbackT] = []
+
+ if callbacks is not None:
+
+ for callback in callbacks:
+
+ if isinstance(callback, Generator):
+
+ # advance generator to its first yield
+ next(callback)
+
+ def inner(
+ options: CallbackOptions,
+ callback=callback,
+ ) -> Iterable[Measurement]:
+ try:
+ return callback.send(options)
+ except StopIteration:
+ return []
+
+ self._callbacks.append(inner)
+ else:
+ self._callbacks.append(callback)
+
+ def callback(
+ self, callback_options: CallbackOptions
+ ) -> Iterable[Measurement]:
+ for callback in self._callbacks:
+ try:
+ for api_measurement in callback(callback_options):
+ yield Measurement(
+ api_measurement.value,
+ instrument=self,
+ attributes=api_measurement.attributes,
+ )
+ except Exception: # pylint: disable=broad-except
+ _logger.exception(
+ "Callback failed for instrument %s.", self.name
+ )
+
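+# A generator callback for an asynchronous instrument (a sketch, assuming
+# Observation from the opentelemetry.metrics API; read_cpu is a hypothetical
+# helper). The ``inner`` wrapper above advances the generator with next() and
+# then drives it with send(options):
+#
+#     def cpu_callback():
+#         options = yield  # first next() stops here
+#         while True:
+#             options = yield [Observation(read_cpu(), {"core": "0"})]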
+
+class Counter(_Synchronous, APICounter):
+ def __new__(cls, *args, **kwargs):
+ if cls is Counter:
+ raise TypeError("Counter must be instantiated via a meter.")
+ return super().__new__(cls)
+
+ def add(
+ self, amount: Union[int, float], attributes: Dict[str, str] = None
+ ):
+ if amount < 0:
+ _logger.warning(
+ "Add amount must be non-negative on Counter %s.", self.name
+ )
+ return
+ self._measurement_consumer.consume_measurement(
+ Measurement(amount, self, attributes)
+ )
+
+
+class UpDownCounter(_Synchronous, APIUpDownCounter):
+ def __new__(cls, *args, **kwargs):
+ if cls is UpDownCounter:
+ raise TypeError("UpDownCounter must be instantiated via a meter.")
+ return super().__new__(cls)
+
+ def add(
+ self, amount: Union[int, float], attributes: Dict[str, str] = None
+ ):
+ self._measurement_consumer.consume_measurement(
+ Measurement(amount, self, attributes)
+ )
+
+
+class ObservableCounter(_Asynchronous, APIObservableCounter):
+ def __new__(cls, *args, **kwargs):
+ if cls is ObservableCounter:
+ raise TypeError(
+ "ObservableCounter must be instantiated via a meter."
+ )
+ return super().__new__(cls)
+
+
+class ObservableUpDownCounter(_Asynchronous, APIObservableUpDownCounter):
+ def __new__(cls, *args, **kwargs):
+ if cls is ObservableUpDownCounter:
+ raise TypeError(
+ "ObservableUpDownCounter must be instantiated via a meter."
+ )
+ return super().__new__(cls)
+
+
+class Histogram(_Synchronous, APIHistogram):
+ def __new__(cls, *args, **kwargs):
+ if cls is Histogram:
+ raise TypeError("Histogram must be instantiated via a meter.")
+ return super().__new__(cls)
+
+ def record(
+ self, amount: Union[int, float], attributes: Dict[str, str] = None
+ ):
+ if amount < 0:
+ _logger.warning(
+ "Record amount must be non-negative on Histogram %s.",
+ self.name,
+ )
+ return
+ self._measurement_consumer.consume_measurement(
+ Measurement(amount, self, attributes)
+ )
+
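+# Usage sketch (instruments are created through a Meter rather than
+# instantiated directly, hence the __new__ guards above):
+#
+#     meter = provider.get_meter("my.library")
+#     requests = meter.create_counter("requests", unit="1")
+#     requests.add(1, {"route": "/home"})
+#     latency = meter.create_histogram("latency", unit="ms")
+#     latency.record(12.7, {"route": "/home"})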
+
+class ObservableGauge(_Asynchronous, APIObservableGauge):
+ def __new__(cls, *args, **kwargs):
+ if cls is ObservableGauge:
+ raise TypeError(
+ "ObservableGauge must be instantiated via a meter."
+ )
+ return super().__new__(cls)
+
+
+# The classes below exist to prevent direct instantiation of the public classes above
+class _Counter(Counter):
+ pass
+
+
+class _UpDownCounter(UpDownCounter):
+ pass
+
+
+class _ObservableCounter(ObservableCounter):
+ pass
+
+
+class _ObservableUpDownCounter(ObservableUpDownCounter):
+ pass
+
+
+class _Histogram(Histogram):
+ pass
+
+
+class _ObservableGauge(ObservableGauge):
+ pass
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement.py
new file mode 100644
index 0000000000..0dced5bcd3
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement.py
@@ -0,0 +1,30 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass
+from typing import Union
+
+from opentelemetry.metrics import Instrument
+from opentelemetry.util.types import Attributes
+
+
+@dataclass(frozen=True)
+class Measurement:
+ """
+ Represents a data point reported via the metrics API to the SDK.
+ """
+
+ value: Union[int, float]
+ instrument: Instrument
+ attributes: Attributes = None
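+
+# For instance (a sketch), Counter.add(1, {"route": "/home"}) produces
+# Measurement(1, counter_instrument, {"route": "/home"}) and hands it to the
+# measurement consumer.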
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement_consumer.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement_consumer.py
new file mode 100644
index 0000000000..c5e81678dc
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/measurement_consumer.py
@@ -0,0 +1,128 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=unused-import
+
+from abc import ABC, abstractmethod
+from threading import Lock
+from time import time_ns
+from typing import Iterable, List, Mapping, Optional
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics
+import opentelemetry.sdk.metrics._internal.instrument
+import opentelemetry.sdk.metrics._internal.sdk_configuration
+from opentelemetry.metrics._internal.instrument import CallbackOptions
+from opentelemetry.sdk.metrics._internal.exceptions import MetricsTimeoutError
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.metric_reader_storage import (
+ MetricReaderStorage,
+)
+from opentelemetry.sdk.metrics._internal.point import Metric
+
+
+class MeasurementConsumer(ABC):
+ @abstractmethod
+ def consume_measurement(self, measurement: Measurement) -> None:
+ pass
+
+ @abstractmethod
+ def register_asynchronous_instrument(
+ self,
+ instrument: (
+ "opentelemetry.sdk.metrics._internal.instrument_Asynchronous"
+ ),
+ ):
+ pass
+
+ @abstractmethod
+ def collect(
+ self,
+ metric_reader: "opentelemetry.sdk.metrics.MetricReader",
+ timeout_millis: float = 10_000,
+ ) -> Optional[Iterable[Metric]]:
+ pass
+
+
+class SynchronousMeasurementConsumer(MeasurementConsumer):
+ def __init__(
+ self,
+ sdk_config: "opentelemetry.sdk.metrics._internal.SdkConfiguration",
+ ) -> None:
+ self._lock = Lock()
+ self._sdk_config = sdk_config
+ # should never be mutated
+ self._reader_storages: Mapping[
+ "opentelemetry.sdk.metrics.MetricReader", MetricReaderStorage
+ ] = {
+ reader: MetricReaderStorage(
+ sdk_config,
+ reader._instrument_class_temporality,
+ reader._instrument_class_aggregation,
+ )
+ for reader in sdk_config.metric_readers
+ }
+ self._async_instruments: List[
+ "opentelemetry.sdk.metrics._internal.instrument._Asynchronous"
+ ] = []
+
+ def consume_measurement(self, measurement: Measurement) -> None:
+ for reader_storage in self._reader_storages.values():
+ reader_storage.consume_measurement(measurement)
+
+ def register_asynchronous_instrument(
+ self,
+ instrument: (
+ "opentelemetry.sdk.metrics._internal.instrument._Asynchronous"
+ ),
+ ) -> None:
+ with self._lock:
+ self._async_instruments.append(instrument)
+
+ def collect(
+ self,
+ metric_reader: "opentelemetry.sdk.metrics.MetricReader",
+ timeout_millis: float = 10_000,
+ ) -> Optional[Iterable[Metric]]:
+
+ with self._lock:
+ metric_reader_storage = self._reader_storages[metric_reader]
+ # for now, just use the defaults
+ callback_options = CallbackOptions()
+ deadline_ns = time_ns() + timeout_millis * 10**6
+
+ # the default timeout of 10_000 ms, expressed in nanoseconds
+ default_timeout_ns = 10000 * 10**6
+
+ for async_instrument in self._async_instruments:
+
+ remaining_time = deadline_ns - time_ns()
+
+ if remaining_time < default_timeout_ns:
+
+ callback_options = CallbackOptions(
+ timeout_millis=remaining_time / 10**6
+ )
+
+ measurements = async_instrument.callback(callback_options)
+ if time_ns() >= deadline_ns:
+ raise MetricsTimeoutError(
+ "Timed out while executing callback"
+ )
+
+ for measurement in measurements:
+ metric_reader_storage.consume_measurement(measurement)
+
+ result = self._reader_storages[metric_reader].collect()
+
+ return result
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/metric_reader_storage.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/metric_reader_storage.py
new file mode 100644
index 0000000000..700ace8720
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/metric_reader_storage.py
@@ -0,0 +1,315 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import getLogger
+from threading import RLock
+from time import time_ns
+from typing import Dict, List, Optional
+
+from opentelemetry.metrics import (
+ Asynchronous,
+ Counter,
+ Instrument,
+ ObservableCounter,
+)
+from opentelemetry.sdk.metrics._internal._view_instrument_match import (
+ _ViewInstrumentMatch,
+)
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ Aggregation,
+ ExplicitBucketHistogramAggregation,
+ _DropAggregation,
+ _ExplicitBucketHistogramAggregation,
+ _ExponentialBucketHistogramAggregation,
+ _LastValueAggregation,
+ _SumAggregation,
+)
+from opentelemetry.sdk.metrics._internal.export import AggregationTemporality
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.point import (
+ ExponentialHistogram,
+ Gauge,
+ Histogram,
+ Metric,
+ MetricsData,
+ ResourceMetrics,
+ ScopeMetrics,
+ Sum,
+)
+from opentelemetry.sdk.metrics._internal.sdk_configuration import (
+ SdkConfiguration,
+)
+from opentelemetry.sdk.metrics._internal.view import View
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+
+_logger = getLogger(__name__)
+
+_DEFAULT_VIEW = View(instrument_name="")
+
+
+class MetricReaderStorage:
+ """The SDK's storage for a given reader"""
+
+ def __init__(
+ self,
+ sdk_config: SdkConfiguration,
+ instrument_class_temporality: Dict[type, AggregationTemporality],
+ instrument_class_aggregation: Dict[type, Aggregation],
+ ) -> None:
+ self._lock = RLock()
+ self._sdk_config = sdk_config
+ self._instrument_view_instrument_matches: Dict[
+ Instrument, List[_ViewInstrumentMatch]
+ ] = {}
+ self._instrument_class_temporality = instrument_class_temporality
+ self._instrument_class_aggregation = instrument_class_aggregation
+
+ def _get_or_init_view_instrument_match(
+ self, instrument: Instrument
+ ) -> List[_ViewInstrumentMatch]:
+ # Optimistically get the relevant views for the given instrument. Once set for a given
+ # instrument, the mapping will never change
+
+ if instrument in self._instrument_view_instrument_matches:
+ return self._instrument_view_instrument_matches[instrument]
+
+ with self._lock:
+ # double check if it was set before we held the lock
+ if instrument in self._instrument_view_instrument_matches:
+ return self._instrument_view_instrument_matches[instrument]
+
+ # not present, hold the lock and add a new mapping
+ view_instrument_matches = []
+
+ self._handle_view_instrument_match(
+ instrument, view_instrument_matches
+ )
+
+ # if no view targeted the instrument, use the default
+ if not view_instrument_matches:
+ view_instrument_matches.append(
+ _ViewInstrumentMatch(
+ view=_DEFAULT_VIEW,
+ instrument=instrument,
+ instrument_class_aggregation=(
+ self._instrument_class_aggregation
+ ),
+ )
+ )
+ self._instrument_view_instrument_matches[
+ instrument
+ ] = view_instrument_matches
+
+ return view_instrument_matches
+
+ def consume_measurement(self, measurement: Measurement) -> None:
+ for view_instrument_match in self._get_or_init_view_instrument_match(
+ measurement.instrument
+ ):
+ view_instrument_match.consume_measurement(measurement)
+
+ def collect(self) -> Optional[MetricsData]:
+ # Use a list instead of yielding to prevent a slow reader from holding
+ # SDK locks
+
+ # While holding the lock, new _ViewInstrumentMatch can't be added from
+ # another thread (so we are sure we collect all existing views).
+ # However, instruments can still send measurements that will make it
+ # into the individual aggregations; collection will acquire those locks
+ # iteratively to keep locking as fine-grained as possible. One side
+ # effect is that end times can be slightly skewed among the metric
+ # streams produced by the SDK, but we still align the output timestamps
+ # for a single instrument.
+
+ collection_start_nanos = time_ns()
+
+ with self._lock:
+
+ instrumentation_scope_scope_metrics: (
+ Dict[InstrumentationScope, ScopeMetrics]
+ ) = {}
+
+ for (
+ instrument,
+ view_instrument_matches,
+ ) in self._instrument_view_instrument_matches.items():
+ aggregation_temporality = self._instrument_class_temporality[
+ instrument.__class__
+ ]
+
+ metrics: List[Metric] = []
+
+ for view_instrument_match in view_instrument_matches:
+
+ data_points = view_instrument_match.collect(
+ aggregation_temporality, collection_start_nanos
+ )
+
+ if data_points is None:
+ continue
+
+ if isinstance(
+ # pylint: disable=protected-access
+ view_instrument_match._aggregation,
+ _SumAggregation,
+ ):
+ data = Sum(
+ aggregation_temporality=aggregation_temporality,
+ data_points=data_points,
+ is_monotonic=isinstance(
+ instrument, (Counter, ObservableCounter)
+ ),
+ )
+ elif isinstance(
+ # pylint: disable=protected-access
+ view_instrument_match._aggregation,
+ _LastValueAggregation,
+ ):
+ data = Gauge(data_points=data_points)
+ elif isinstance(
+ # pylint: disable=protected-access
+ view_instrument_match._aggregation,
+ _ExplicitBucketHistogramAggregation,
+ ):
+ data = Histogram(
+ data_points=data_points,
+ aggregation_temporality=aggregation_temporality,
+ )
+ elif isinstance(
+ # pylint: disable=protected-access
+ view_instrument_match._aggregation,
+ _DropAggregation,
+ ):
+ continue
+
+ elif isinstance(
+ # pylint: disable=protected-access
+ view_instrument_match._aggregation,
+ _ExponentialBucketHistogramAggregation,
+ ):
+ data = ExponentialHistogram(
+ data_points=data_points,
+ aggregation_temporality=aggregation_temporality,
+ )
+
+ metrics.append(
+ Metric(
+ # pylint: disable=protected-access
+ name=view_instrument_match._name,
+ description=view_instrument_match._description,
+ unit=view_instrument_match._instrument.unit,
+ data=data,
+ )
+ )
+
+ if metrics:
+
+ if instrument.instrumentation_scope not in (
+ instrumentation_scope_scope_metrics
+ ):
+ instrumentation_scope_scope_metrics[
+ instrument.instrumentation_scope
+ ] = ScopeMetrics(
+ scope=instrument.instrumentation_scope,
+ metrics=metrics,
+ schema_url=instrument.instrumentation_scope.schema_url,
+ )
+ else:
+ instrumentation_scope_scope_metrics[
+ instrument.instrumentation_scope
+ ].metrics.extend(metrics)
+
+ if instrumentation_scope_scope_metrics:
+
+ return MetricsData(
+ resource_metrics=[
+ ResourceMetrics(
+ resource=self._sdk_config.resource,
+ scope_metrics=list(
+ instrumentation_scope_scope_metrics.values()
+ ),
+ schema_url=self._sdk_config.resource.schema_url,
+ )
+ ]
+ )
+
+ return None
+
+ def _handle_view_instrument_match(
+ self,
+ instrument: Instrument,
+ view_instrument_matches: List["_ViewInstrumentMatch"],
+ ) -> None:
+ for view in self._sdk_config.views:
+ # pylint: disable=protected-access
+ if not view._match(instrument):
+ continue
+
+ if not self._check_view_instrument_compatibility(view, instrument):
+ continue
+
+ new_view_instrument_match = _ViewInstrumentMatch(
+ view=view,
+ instrument=instrument,
+ instrument_class_aggregation=(
+ self._instrument_class_aggregation
+ ),
+ )
+
+ for (
+ existing_view_instrument_matches
+ ) in self._instrument_view_instrument_matches.values():
+ for (
+ existing_view_instrument_match
+ ) in existing_view_instrument_matches:
+ if existing_view_instrument_match.conflicts(
+ new_view_instrument_match
+ ):
+
+ _logger.warning(
+ "Views %s and %s will cause conflicting "
+ "metrics identities",
+ existing_view_instrument_match._view,
+ new_view_instrument_match._view,
+ )
+
+ view_instrument_matches.append(new_view_instrument_match)
+
+ @staticmethod
+ def _check_view_instrument_compatibility(
+ view: View, instrument: Instrument
+ ) -> bool:
+ """
+ Checks if a view and an instrument are compatible.
+
+ Returns `True` if they are compatible and a `_ViewInstrumentMatch`
+ object should be created, `False` otherwise.
+ """
+
+ result = True
+
+ # pylint: disable=protected-access
+ if isinstance(instrument, Asynchronous) and isinstance(
+ view._aggregation, ExplicitBucketHistogramAggregation
+ ):
+ _logger.warning(
+ "View %s and instrument %s will produce "
+ "semantic errors when matched, the view "
+ "has not been applied.",
+ view,
+ instrument,
+ )
+ result = False
+
+ return result
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
new file mode 100644
index 0000000000..c30705c59a
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/point.py
@@ -0,0 +1,259 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=unused-import
+
+from dataclasses import asdict, dataclass
+from json import dumps, loads
+from typing import Optional, Sequence, Union
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics._internal
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.util.types import Attributes
+
+
+@dataclass(frozen=True)
+class NumberDataPoint:
+ """Single data point in a timeseries that describes the time-varying scalar
+ value of a metric.
+ """
+
+ attributes: Attributes
+ start_time_unix_nano: int
+ time_unix_nano: int
+ value: Union[int, float]
+
+ def to_json(self, indent=4) -> str:
+ return dumps(asdict(self), indent=indent)
+
+
+@dataclass(frozen=True)
+class HistogramDataPoint:
+ """Single data point in a timeseries that describes the time-varying scalar
+ value of a metric.
+ """
+
+ attributes: Attributes
+ start_time_unix_nano: int
+ time_unix_nano: int
+ count: int
+ sum: Union[int, float]
+ bucket_counts: Sequence[int]
+ explicit_bounds: Sequence[float]
+ min: float
+ max: float
+
+ def to_json(self, indent=4) -> str:
+ return dumps(asdict(self), indent=indent)
+
+
+@dataclass(frozen=True)
+class Buckets:
+ offset: int
+ bucket_counts: Sequence[int]
+
+
+@dataclass(frozen=True)
+class ExponentialHistogramDataPoint:
+ """Single data point in a timeseries whose boundaries are defined by an
+ exponential function. This timeseries describes the time-varying scalar
+ value of a metric.
+ """
+
+ attributes: Attributes
+ start_time_unix_nano: int
+ time_unix_nano: int
+ count: int
+ sum: Union[int, float]
+ scale: int
+ zero_count: int
+ positive: Buckets
+ negative: Buckets
+ flags: int
+ min: float
+ max: float
+
+ def to_json(self, indent=4) -> str:
+ return dumps(asdict(self), indent=indent)
+
+
+@dataclass(frozen=True)
+class ExponentialHistogram:
+ """Represents the type of a metric that is calculated by aggregating as an
+ ExponentialHistogram of all reported measurements over a time interval.
+ """
+
+ data_points: Sequence[ExponentialHistogramDataPoint]
+ aggregation_temporality: (
+ "opentelemetry.sdk.metrics.export.AggregationTemporality"
+ )
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ "aggregation_temporality": self.aggregation_temporality,
+ },
+ indent=indent,
+ )
+
+
+@dataclass(frozen=True)
+class Sum:
+ """Represents the type of a scalar metric that is calculated as a sum of
+ all reported measurements over a time interval."""
+
+ data_points: Sequence[NumberDataPoint]
+ aggregation_temporality: (
+ "opentelemetry.sdk.metrics.export.AggregationTemporality"
+ )
+ is_monotonic: bool
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ "aggregation_temporality": self.aggregation_temporality,
+ "is_monotonic": self.is_monotonic,
+ },
+ indent=indent,
+ )
+
+
+@dataclass(frozen=True)
+class Gauge:
+ """Represents the type of a scalar metric that always exports the current
+ value for every data point. It should be used for an unknown
+ aggregation."""
+
+ data_points: Sequence[NumberDataPoint]
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ },
+ indent=indent,
+ )
+
+
+@dataclass(frozen=True)
+class Histogram:
+ """Represents the type of a metric that is calculated by aggregating as a
+ histogram of all reported measurements over a time interval."""
+
+ data_points: Sequence[HistogramDataPoint]
+ aggregation_temporality: (
+ "opentelemetry.sdk.metrics.export.AggregationTemporality"
+ )
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "data_points": [
+ loads(data_point.to_json(indent=indent))
+ for data_point in self.data_points
+ ],
+ "aggregation_temporality": self.aggregation_temporality,
+ },
+ indent=indent,
+ )
+
+
+# pylint: disable=invalid-name
+DataT = Union[Sum, Gauge, Histogram, ExponentialHistogram]
+DataPointT = Union[
+ NumberDataPoint, HistogramDataPoint, ExponentialHistogramDataPoint
+]
+
+
+@dataclass(frozen=True)
+class Metric:
+ """Represents a metric point in the OpenTelemetry data model to be
+ exported."""
+
+ name: str
+ description: Optional[str]
+ unit: Optional[str]
+ data: DataT
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "name": self.name,
+ "description": self.description or "",
+ "unit": self.unit or "",
+ "data": loads(self.data.to_json(indent=indent)),
+ },
+ indent=indent,
+ )
+
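+# A serialized Metric looks like the following sketch (abbreviated; the
+# aggregation_temporality enum serializes to its integer value, e.g. 2 for
+# CUMULATIVE):
+#
+#     {
+#         "name": "requests",
+#         "description": "",
+#         "unit": "1",
+#         "data": {"data_points": ["..."], "aggregation_temporality": 2, "is_monotonic": true}
+#     }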
+
+@dataclass(frozen=True)
+class ScopeMetrics:
+ """A collection of Metrics produced by a scope"""
+
+ scope: InstrumentationScope
+ metrics: Sequence[Metric]
+ schema_url: str
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "scope": loads(self.scope.to_json(indent=indent)),
+ "metrics": [
+ loads(metric.to_json(indent=indent))
+ for metric in self.metrics
+ ],
+ "schema_url": self.schema_url,
+ },
+ indent=indent,
+ )
+
+
+@dataclass(frozen=True)
+class ResourceMetrics:
+ """A collection of ScopeMetrics from a Resource"""
+
+ resource: Resource
+ scope_metrics: Sequence[ScopeMetrics]
+ schema_url: str
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "resource": loads(self.resource.to_json(indent=indent)),
+ "scope_metrics": [
+ loads(scope_metrics.to_json(indent=indent))
+ for scope_metrics in self.scope_metrics
+ ],
+ "schema_url": self.schema_url,
+ },
+ indent=indent,
+ )
+
+
+@dataclass(frozen=True)
+class MetricsData:
+ """An array of ResourceMetrics"""
+
+ resource_metrics: Sequence[ResourceMetrics]
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "resource_metrics": [
+ loads(resource_metrics.to_json(indent=indent))
+ for resource_metrics in self.resource_metrics
+ ]
+ },
+ indent=indent,
+ )
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/sdk_configuration.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/sdk_configuration.py
new file mode 100644
index 0000000000..9594ab38a7
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/sdk_configuration.py
@@ -0,0 +1,29 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=unused-import
+
+from dataclasses import dataclass
+from typing import Sequence
+
+# This kind of import is needed to avoid Sphinx errors.
+import opentelemetry.sdk.metrics
+import opentelemetry.sdk.resources
+
+
+@dataclass
+class SdkConfiguration:
+ resource: "opentelemetry.sdk.resources.Resource"
+ metric_readers: Sequence["opentelemetry.sdk.metrics.MetricReader"]
+ views: Sequence["opentelemetry.sdk.metrics.View"]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/view.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/view.py
new file mode 100644
index 0000000000..28f7b4fe08
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/view.py
@@ -0,0 +1,171 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from fnmatch import fnmatch
+from logging import getLogger
+from typing import Optional, Set, Type
+
+# FIXME import from typing when support for 3.7 is removed
+from typing_extensions import final
+
+from opentelemetry.metrics import Instrument
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ Aggregation,
+ DefaultAggregation,
+)
+
+_logger = getLogger(__name__)
+
+
+class View:
+ """
+ A `View`'s configuration parameters can be used for the following
+ purposes:
+
+ 1. Match instruments: When an instrument matches a view, measurements
+ received by that instrument will be processed.
+ 2. Customize metric streams: A metric stream is identified by a match
+ between a view and an instrument and a set of attributes. The metric
+ stream can be customized by certain attributes of the corresponding view.
+
+ The attributes documented next serve one of the previous two purposes.
+
+ Args:
+ instrument_type: This is an instrument matching attribute: the class the
+ instrument must be to match the view.
+
+ instrument_name: This is an instrument matching attribute: the name the
+ instrument must have to match the view. Wildcard characters are
+ supported, but must not be used with this attribute if the view also
+ has a ``name`` defined.
+
+ meter_name: This is an instrument matching attribute: the name the
+ instrument meter must have to match the view.
+
+ meter_version: This is an instrument matching attribute: the version
+ the instrument meter must have to match the view.
+
+ meter_schema_url: This is an instrument matching attribute: the schema
+ URL the instrument meter must have to match the view.
+
+ name: This is a metric stream customizing attribute: the name of the
+ metric stream. If `None`, the name of the instrument will be used.
+
+ description: This is a metric stream customizing attribute: the
+ description of the metric stream. If `None`, the description of the instrument will
+ be used.
+
+ attribute_keys: This is a metric stream customizing attribute: this is
+ a set of attribute keys. If not `None` then only the measurement attributes that
+ are in ``attribute_keys`` will be used to identify the metric stream.
+
+ aggregation: This is a metric stream customizing attribute: the
+ aggregation instance to use when data is aggregated for the
+ corresponding metrics stream. If `None` an instance of
+ `DefaultAggregation` will be used.
+
+ instrument_unit: This is an instrument matching attribute: the unit the
+ instrument must have to match the view.
+
+ This class is not intended to be subclassed by the user.
+ """
+
+ _default_aggregation = DefaultAggregation()
+
+ def __init__(
+ self,
+ instrument_type: Optional[Type[Instrument]] = None,
+ instrument_name: Optional[str] = None,
+ meter_name: Optional[str] = None,
+ meter_version: Optional[str] = None,
+ meter_schema_url: Optional[str] = None,
+ name: Optional[str] = None,
+ description: Optional[str] = None,
+ attribute_keys: Optional[Set[str]] = None,
+ aggregation: Optional[Aggregation] = None,
+ instrument_unit: Optional[str] = None,
+ ):
+ if (
+ instrument_type
+ is instrument_name
+ is instrument_unit
+ is meter_name
+ is meter_version
+ is meter_schema_url
+ is None
+ ):
+ raise Exception(
+ "Some instrument selection "
+ f"criteria must be provided for View {name}"
+ )
+
+ if (
+ name is not None
+ and instrument_name is not None
+ and ("*" in instrument_name or "?" in instrument_name)
+ ):
+
+ raise Exception(
+ f"View {name} declared with wildcard "
+ "characters in instrument_name"
+ )
+
+ # _name, _description, _aggregation and _attribute_keys will be
+ # accessed when instantiating a _ViewInstrumentMatch.
+ self._name = name
+ self._instrument_type = instrument_type
+ self._instrument_name = instrument_name
+ self._instrument_unit = instrument_unit
+ self._meter_name = meter_name
+ self._meter_version = meter_version
+ self._meter_schema_url = meter_schema_url
+
+ self._description = description
+ self._attribute_keys = attribute_keys
+ self._aggregation = aggregation or self._default_aggregation
+
+ # pylint: disable=too-many-return-statements
+ # pylint: disable=too-many-branches
+ @final
+ def _match(self, instrument: Instrument) -> bool:
+
+ if self._instrument_type is not None:
+ if not isinstance(instrument, self._instrument_type):
+ return False
+
+ if self._instrument_name is not None:
+ if not fnmatch(instrument.name, self._instrument_name):
+ return False
+
+ if self._instrument_unit is not None:
+ if not fnmatch(instrument.unit, self._instrument_unit):
+ return False
+
+ if self._meter_name is not None:
+ if instrument.instrumentation_scope.name != self._meter_name:
+ return False
+
+ if self._meter_version is not None:
+ if instrument.instrumentation_scope.version != self._meter_version:
+ return False
+
+ if self._meter_schema_url is not None:
+ if (
+ instrument.instrumentation_scope.schema_url
+ != self._meter_schema_url
+ ):
+ return False
+
+ return True
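+
+# Example (a sketch): rename the stream produced by a specific histogram and
+# keep only one attribute key:
+#
+#     View(
+#         instrument_name="http.server.duration",
+#         name="http_latency",
+#         attribute_keys={"http.method"},
+#     )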
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/__init__.py
new file mode 100644
index 0000000000..97c31b97ec
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/__init__.py
@@ -0,0 +1,63 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.sdk.metrics._internal.export import (
+ AggregationTemporality,
+ ConsoleMetricExporter,
+ InMemoryMetricReader,
+ MetricExporter,
+ MetricExportResult,
+ MetricReader,
+ PeriodicExportingMetricReader,
+)
+
+# The point module is not in the export directory to avoid a circular import.
+from opentelemetry.sdk.metrics._internal.point import ( # noqa: F401
+ Buckets,
+ DataPointT,
+ DataT,
+ ExponentialHistogram,
+ ExponentialHistogramDataPoint,
+ Gauge,
+ Histogram,
+ HistogramDataPoint,
+ Metric,
+ MetricsData,
+ NumberDataPoint,
+ ResourceMetrics,
+ ScopeMetrics,
+ Sum,
+)
+
+__all__ = [
+ "AggregationTemporality",
+ "ConsoleMetricExporter",
+ "InMemoryMetricReader",
+ "MetricExporter",
+ "MetricExportResult",
+ "MetricReader",
+ "PeriodicExportingMetricReader",
+ "DataPointT",
+ "DataT",
+ "Gauge",
+ "Histogram",
+ "HistogramDataPoint",
+ "Metric",
+ "MetricsData",
+ "NumberDataPoint",
+ "ResourceMetrics",
+ "ScopeMetrics",
+ "Sum",
+]
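+
+# Consumers are expected to import from this package rather than from
+# opentelemetry.sdk.metrics._internal.export, e.g. (a sketch):
+#
+#     from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader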
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py
new file mode 100644
index 0000000000..c07adf6cac
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/view/__init__.py
@@ -0,0 +1,35 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ Aggregation,
+ DefaultAggregation,
+ DropAggregation,
+ ExplicitBucketHistogramAggregation,
+ ExponentialBucketHistogramAggregation,
+ LastValueAggregation,
+ SumAggregation,
+)
+from opentelemetry.sdk.metrics._internal.view import View
+
+__all__ = [
+ "Aggregation",
+ "DefaultAggregation",
+ "DropAggregation",
+ "ExplicitBucketHistogramAggregation",
+ "ExponentialBucketHistogramAggregation",
+ "LastValueAggregation",
+ "SumAggregation",
+ "View",
+]
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/py.typed b/opentelemetry-sdk/src/opentelemetry/sdk/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py
new file mode 100644
index 0000000000..852b23f500
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/resources/__init__.py
@@ -0,0 +1,413 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This package implements OpenTelemetry Resources:
+
+ *A Resource is an immutable representation of the entity producing
+ telemetry. For example, a process producing telemetry that is running in
+ a container on Kubernetes has a Pod name, it is in a namespace and
+ possibly is part of a Deployment which also has a name. All three of
+ these attributes can be included in the Resource.*
+
+Resource objects are created with `Resource.create`, which accepts attributes
+(key-value pairs). Resources should NOT be created via the constructor, and
+working with `Resource` objects should only be done via the Resource API
+methods. Resource attributes can also be passed at process invocation in the
+:envvar:`OTEL_RESOURCE_ATTRIBUTES` environment variable. You should register
+your resource with the `opentelemetry.sdk.trace.TracerProvider` by passing it
+into the constructor. The `Resource` passed to a provider is available to the
+exporter, which can send on this information as it sees fit.
+
+.. code-block:: python
+
+ trace.set_tracer_provider(
+ TracerProvider(
+ resource=Resource.create({
+ "service.name": "shoppingcart",
+ "service.instance.id": "instance-12",
+ }),
+ ),
+ )
+ print(trace.get_tracer_provider().resource.attributes)
+
+ {'telemetry.sdk.language': 'python',
+ 'telemetry.sdk.name': 'opentelemetry',
+ 'telemetry.sdk.version': '0.13.dev0',
+ 'service.name': 'shoppingcart',
+ 'service.instance.id': 'instance-12'}
+
+Note that the OpenTelemetry project documents certain "standard attributes"
+that have prescribed semantic meanings, for example ``service.name`` in the
+above example.
+"""
+
+import abc
+import concurrent.futures
+import logging
+import os
+import sys
+import typing
+from json import dumps
+from os import environ
+from urllib import parse
+
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPERIMENTAL_RESOURCE_DETECTORS,
+ OTEL_RESOURCE_ATTRIBUTES,
+ OTEL_SERVICE_NAME,
+)
+from opentelemetry.semconv.resource import ResourceAttributes
+from opentelemetry.util._importlib_metadata import entry_points, version
+from opentelemetry.util.types import AttributeValue
+
+try:
+ import psutil
+except ImportError:
+ psutil = None
+
+LabelValue = AttributeValue
+Attributes = typing.Dict[str, LabelValue]
+logger = logging.getLogger(__name__)
+
+CLOUD_PROVIDER = ResourceAttributes.CLOUD_PROVIDER
+CLOUD_ACCOUNT_ID = ResourceAttributes.CLOUD_ACCOUNT_ID
+CLOUD_REGION = ResourceAttributes.CLOUD_REGION
+CLOUD_AVAILABILITY_ZONE = ResourceAttributes.CLOUD_AVAILABILITY_ZONE
+CONTAINER_NAME = ResourceAttributes.CONTAINER_NAME
+CONTAINER_ID = ResourceAttributes.CONTAINER_ID
+CONTAINER_IMAGE_NAME = ResourceAttributes.CONTAINER_IMAGE_NAME
+CONTAINER_IMAGE_TAG = ResourceAttributes.CONTAINER_IMAGE_TAG
+DEPLOYMENT_ENVIRONMENT = ResourceAttributes.DEPLOYMENT_ENVIRONMENT
+FAAS_NAME = ResourceAttributes.FAAS_NAME
+FAAS_ID = ResourceAttributes.FAAS_ID
+FAAS_VERSION = ResourceAttributes.FAAS_VERSION
+FAAS_INSTANCE = ResourceAttributes.FAAS_INSTANCE
+HOST_NAME = ResourceAttributes.HOST_NAME
+HOST_TYPE = ResourceAttributes.HOST_TYPE
+HOST_IMAGE_NAME = ResourceAttributes.HOST_IMAGE_NAME
+HOST_IMAGE_ID = ResourceAttributes.HOST_IMAGE_ID
+HOST_IMAGE_VERSION = ResourceAttributes.HOST_IMAGE_VERSION
+KUBERNETES_CLUSTER_NAME = ResourceAttributes.K8S_CLUSTER_NAME
+KUBERNETES_NAMESPACE_NAME = ResourceAttributes.K8S_NAMESPACE_NAME
+KUBERNETES_POD_UID = ResourceAttributes.K8S_POD_UID
+KUBERNETES_POD_NAME = ResourceAttributes.K8S_POD_NAME
+KUBERNETES_CONTAINER_NAME = ResourceAttributes.K8S_CONTAINER_NAME
+KUBERNETES_REPLICA_SET_UID = ResourceAttributes.K8S_REPLICASET_UID
+KUBERNETES_REPLICA_SET_NAME = ResourceAttributes.K8S_REPLICASET_NAME
+KUBERNETES_DEPLOYMENT_UID = ResourceAttributes.K8S_DEPLOYMENT_UID
+KUBERNETES_DEPLOYMENT_NAME = ResourceAttributes.K8S_DEPLOYMENT_NAME
+KUBERNETES_STATEFUL_SET_UID = ResourceAttributes.K8S_STATEFULSET_UID
+KUBERNETES_STATEFUL_SET_NAME = ResourceAttributes.K8S_STATEFULSET_NAME
+KUBERNETES_DAEMON_SET_UID = ResourceAttributes.K8S_DAEMONSET_UID
+KUBERNETES_DAEMON_SET_NAME = ResourceAttributes.K8S_DAEMONSET_NAME
+KUBERNETES_JOB_UID = ResourceAttributes.K8S_JOB_UID
+KUBERNETES_JOB_NAME = ResourceAttributes.K8S_JOB_NAME
+KUBERNETES_CRON_JOB_UID = ResourceAttributes.K8S_CRONJOB_UID
+KUBERNETES_CRON_JOB_NAME = ResourceAttributes.K8S_CRONJOB_NAME
+OS_TYPE = ResourceAttributes.OS_TYPE
+OS_DESCRIPTION = ResourceAttributes.OS_DESCRIPTION
+PROCESS_PID = ResourceAttributes.PROCESS_PID
+PROCESS_PARENT_PID = ResourceAttributes.PROCESS_PARENT_PID
+PROCESS_EXECUTABLE_NAME = ResourceAttributes.PROCESS_EXECUTABLE_NAME
+PROCESS_EXECUTABLE_PATH = ResourceAttributes.PROCESS_EXECUTABLE_PATH
+PROCESS_COMMAND = ResourceAttributes.PROCESS_COMMAND
+PROCESS_COMMAND_LINE = ResourceAttributes.PROCESS_COMMAND_LINE
+PROCESS_COMMAND_ARGS = ResourceAttributes.PROCESS_COMMAND_ARGS
+PROCESS_OWNER = ResourceAttributes.PROCESS_OWNER
+PROCESS_RUNTIME_NAME = ResourceAttributes.PROCESS_RUNTIME_NAME
+PROCESS_RUNTIME_VERSION = ResourceAttributes.PROCESS_RUNTIME_VERSION
+PROCESS_RUNTIME_DESCRIPTION = ResourceAttributes.PROCESS_RUNTIME_DESCRIPTION
+SERVICE_NAME = ResourceAttributes.SERVICE_NAME
+SERVICE_NAMESPACE = ResourceAttributes.SERVICE_NAMESPACE
+SERVICE_INSTANCE_ID = ResourceAttributes.SERVICE_INSTANCE_ID
+SERVICE_VERSION = ResourceAttributes.SERVICE_VERSION
+TELEMETRY_SDK_NAME = ResourceAttributes.TELEMETRY_SDK_NAME
+TELEMETRY_SDK_VERSION = ResourceAttributes.TELEMETRY_SDK_VERSION
+TELEMETRY_AUTO_VERSION = ResourceAttributes.TELEMETRY_AUTO_VERSION
+TELEMETRY_SDK_LANGUAGE = ResourceAttributes.TELEMETRY_SDK_LANGUAGE
+
+_OPENTELEMETRY_SDK_VERSION = version("opentelemetry-sdk")
+
+
+class Resource:
+ """A Resource is an immutable representation of the entity producing telemetry as Attributes."""
+
+ def __init__(
+ self, attributes: Attributes, schema_url: typing.Optional[str] = None
+ ):
+ self._attributes = BoundedAttributes(attributes=attributes)
+ if schema_url is None:
+ schema_url = ""
+ self._schema_url = schema_url
+
+ @staticmethod
+ def create(
+ attributes: typing.Optional[Attributes] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> "Resource":
+ """Creates a new `Resource` from attributes.
+
+ Args:
+ attributes: Optional zero or more key-value pairs.
+            schema_url: Optional URL pointing to the schema of the resource.
+
+ Returns:
+ The newly-created Resource.
+ """
+
+ if not attributes:
+ attributes = {}
+
+ resource_detectors = []
+
+ resource = _DEFAULT_RESOURCE
+
+ otel_experimental_resource_detectors = environ.get(
+ OTEL_EXPERIMENTAL_RESOURCE_DETECTORS, "otel"
+ ).split(",")
+
+ if "otel" not in otel_experimental_resource_detectors:
+ otel_experimental_resource_detectors.append("otel")
+
+ for resource_detector in otel_experimental_resource_detectors:
+ resource_detectors.append(
+ next(
+ iter(
+ entry_points(
+ group="opentelemetry_resource_detector",
+ name=resource_detector.strip(),
+ )
+ )
+ ).load()()
+ )
+
+ resource = get_aggregated_resources(
+ resource_detectors, _DEFAULT_RESOURCE
+ ).merge(Resource(attributes, schema_url))
+
+ if not resource.attributes.get(SERVICE_NAME, None):
+ default_service_name = "unknown_service"
+ process_executable_name = resource.attributes.get(
+ PROCESS_EXECUTABLE_NAME, None
+ )
+ if process_executable_name:
+ default_service_name += ":" + process_executable_name
+ resource = resource.merge(
+ Resource({SERVICE_NAME: default_service_name}, schema_url)
+ )
+ return resource
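+
+    # A minimal sketch of the service-name fallback above (illustrative only):
+    # when no service.name is supplied, the created resource reports
+    # "unknown_service" (suffixed with the executable name when detected).
+    #
+    #   >>> Resource.create({}).attributes.get("service.name")
+    #   'unknown_service'
+    #   >>> Resource.create({"service.name": "shop"}).attributes["service.name"]
+    #   'shop'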
+
+ @staticmethod
+ def get_empty() -> "Resource":
+ return _EMPTY_RESOURCE
+
+ @property
+ def attributes(self) -> Attributes:
+ return self._attributes
+
+ @property
+ def schema_url(self) -> str:
+ return self._schema_url
+
+ def merge(self, other: "Resource") -> "Resource":
+ """Merges this resource and an updating resource into a new `Resource`.
+
+ If a key exists on both the old and updating resource, the value of the
+ updating resource will override the old resource value.
+
+ The updating resource's `schema_url` will be used only if the old
+ `schema_url` is empty. Attempting to merge two resources with
+ different, non-empty values for `schema_url` will result in an error
+ and return the old resource.
+
+ Args:
+ other: The other resource to be merged.
+
+ Returns:
+ The newly-created Resource.
+ """
+ merged_attributes = self.attributes.copy()
+ merged_attributes.update(other.attributes)
+
+ if self.schema_url == "":
+ schema_url = other.schema_url
+ elif other.schema_url == "":
+ schema_url = self.schema_url
+ elif self.schema_url == other.schema_url:
+ schema_url = other.schema_url
+ else:
+ logger.error(
+ "Failed to merge resources: The two schemas %s and %s are incompatible",
+ self.schema_url,
+ other.schema_url,
+ )
+ return self
+
+ return Resource(merged_attributes, schema_url)
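+
+    # A minimal sketch of the schema_url rules above (illustrative only; the
+    # URL is hypothetical):
+    #
+    #   >>> old = Resource({"a": 1}, "https://example.com/schema/1")
+    #   >>> new = Resource({"a": 2})                # empty schema_url
+    #   >>> old.merge(new).schema_url               # non-empty URL is kept
+    #   'https://example.com/schema/1'
+    #   >>> old.merge(new).attributes["a"]          # updating value wins
+    #   2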
+
+ def __eq__(self, other: object) -> bool:
+ if not isinstance(other, Resource):
+ return False
+ return (
+ self._attributes == other._attributes
+ and self._schema_url == other._schema_url
+ )
+
+ def __hash__(self):
+ return hash(
+ f"{dumps(self._attributes.copy(), sort_keys=True)}|{self._schema_url}"
+ )
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "attributes": dict(self._attributes),
+ "schema_url": self._schema_url,
+ },
+ indent=indent,
+ )
+
+
+_EMPTY_RESOURCE = Resource({})
+_DEFAULT_RESOURCE = Resource(
+ {
+ TELEMETRY_SDK_LANGUAGE: "python",
+ TELEMETRY_SDK_NAME: "opentelemetry",
+ TELEMETRY_SDK_VERSION: _OPENTELEMETRY_SDK_VERSION,
+ }
+)
+
+
+class ResourceDetector(abc.ABC):
+ def __init__(self, raise_on_error=False):
+ self.raise_on_error = raise_on_error
+
+ @abc.abstractmethod
+ def detect(self) -> "Resource":
+ raise NotImplementedError()
+
+
+class OTELResourceDetector(ResourceDetector):
+ # pylint: disable=no-self-use
+ def detect(self) -> "Resource":
+
+ env_resources_items = environ.get(OTEL_RESOURCE_ATTRIBUTES)
+ env_resource_map = {}
+
+ if env_resources_items:
+ for item in env_resources_items.split(","):
+ try:
+ key, value = item.split("=", maxsplit=1)
+ except ValueError as exc:
+ logger.warning(
+ "Invalid key value resource attribute pair %s: %s",
+ item,
+ exc,
+ )
+ continue
+ value_url_decoded = parse.unquote(value.strip())
+ env_resource_map[key.strip()] = value_url_decoded
+
+ service_name = environ.get(OTEL_SERVICE_NAME)
+ if service_name:
+ env_resource_map[SERVICE_NAME] = service_name
+ return Resource(env_resource_map)
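+
+    # A minimal sketch of the parsing above (illustrative only): each
+    # comma-separated pair in OTEL_RESOURCE_ATTRIBUTES is split on the first
+    # "=" and the value is URL-decoded.
+    #
+    #   >>> environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.namespace=shop,host.name=box%201"
+    #   >>> OTELResourceDetector().detect().attributes["host.name"]
+    #   'box 1'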
+
+
+class ProcessResourceDetector(ResourceDetector):
+ # pylint: disable=no-self-use
+ def detect(self) -> "Resource":
+ _runtime_version = ".".join(
+ map(
+ str,
+ sys.version_info[:3]
+ if sys.version_info.releaselevel == "final"
+ and not sys.version_info.serial
+ else sys.version_info,
+ )
+ )
+ _process_pid = os.getpid()
+ _process_executable_name = sys.executable
+ _process_executable_path = os.path.dirname(_process_executable_name)
+ _process_command = sys.argv[0]
+ _process_command_line = " ".join(sys.argv)
+ _process_command_args = sys.argv[1:]
+ resource_info = {
+ PROCESS_RUNTIME_DESCRIPTION: sys.version,
+ PROCESS_RUNTIME_NAME: sys.implementation.name,
+ PROCESS_RUNTIME_VERSION: _runtime_version,
+ PROCESS_PID: _process_pid,
+ PROCESS_EXECUTABLE_NAME: _process_executable_name,
+ PROCESS_EXECUTABLE_PATH: _process_executable_path,
+ PROCESS_COMMAND: _process_command,
+ PROCESS_COMMAND_LINE: _process_command_line,
+ PROCESS_COMMAND_ARGS: _process_command_args,
+ }
+ if hasattr(os, "getppid"):
+ # pypy3 does not have getppid()
+ resource_info[PROCESS_PARENT_PID] = os.getppid()
+
+ if psutil is not None:
+ process = psutil.Process()
+ resource_info[PROCESS_OWNER] = process.username()
+
+ return Resource(resource_info)
+
+
+def get_aggregated_resources(
+ detectors: typing.List["ResourceDetector"],
+ initial_resource: typing.Optional[Resource] = None,
+ timeout=5,
+) -> "Resource":
+ """Retrieves resources from detectors in the order that they were passed
+
+    :param detectors: List of resource detectors, in order of priority
+    :param initial_resource: Static resource. This has the highest priority
+    :param timeout: Number of seconds to wait for each detector to return
+    :return: The aggregated resource produced by merging all detected resources
+ """
+ detectors_merged_resource = initial_resource or Resource.create()
+
+ with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
+ futures = [executor.submit(detector.detect) for detector in detectors]
+ for detector_ind, future in enumerate(futures):
+ detector = detectors[detector_ind]
+ detected_resource: Resource = _EMPTY_RESOURCE
+ try:
+ detected_resource = future.result(timeout=timeout)
+ except concurrent.futures.TimeoutError as ex:
+ if detector.raise_on_error:
+ raise ex
+ logger.warning(
+ "Detector %s took longer than %s seconds, skipping",
+ detector,
+ timeout,
+ )
+ # pylint: disable=broad-except
+ except Exception as ex:
+ if detector.raise_on_error:
+ raise ex
+ logger.warning(
+ "Exception %s in detector %s, ignoring", ex, detector
+ )
+ finally:
+ detectors_merged_resource = detectors_merged_resource.merge(
+ detected_resource
+ )
+
+ return detectors_merged_resource
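+
+# A minimal end-to-end sketch (illustrative only): combining the built-in
+# detectors with a static resource. OTELResourceDetector reads the
+# environment, ProcessResourceDetector adds process.* attributes, and the
+# result merges the detected attributes onto the initial resource.
+#
+#   >>> resource = get_aggregated_resources(
+#   ...     [OTELResourceDetector(), ProcessResourceDetector()],
+#   ...     initial_resource=Resource.create({"service.name": "shop"}),
+#   ...     timeout=2,
+#   ... )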
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py
new file mode 100644
index 0000000000..6dae70b2f6
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/__init__.py
@@ -0,0 +1,1241 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+import abc
+import atexit
+import concurrent.futures
+import json
+import logging
+import threading
+import traceback
+import typing
+from contextlib import contextmanager
+from os import environ
+from time import time_ns
+from types import MappingProxyType, TracebackType
+from typing import (
+ Any,
+ Callable,
+ Dict,
+ Iterator,
+ List,
+ Optional,
+ Sequence,
+ Tuple,
+ Type,
+ Union,
+)
+from warnings import filterwarnings
+
+from deprecated import deprecated
+
+from opentelemetry import context as context_api
+from opentelemetry import trace as trace_api
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk import util
+from opentelemetry.sdk.environment_variables import (
+ OTEL_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_LINK_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ OTEL_SPAN_EVENT_COUNT_LIMIT,
+ OTEL_SPAN_LINK_COUNT_LIMIT,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import sampling
+from opentelemetry.sdk.trace.id_generator import IdGenerator, RandomIdGenerator
+from opentelemetry.sdk.util import BoundedList
+from opentelemetry.sdk.util.instrumentation import (
+ InstrumentationInfo,
+ InstrumentationScope,
+)
+from opentelemetry.trace import SpanContext
+from opentelemetry.trace.status import Status, StatusCode
+from opentelemetry.util import types
+
+logger = logging.getLogger(__name__)
+
+_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT = 128
+_DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT = 128
+_DEFAULT_OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT = 128
+_DEFAULT_OTEL_LINK_ATTRIBUTE_COUNT_LIMIT = 128
+_DEFAULT_OTEL_SPAN_EVENT_COUNT_LIMIT = 128
+_DEFAULT_OTEL_SPAN_LINK_COUNT_LIMIT = 128
+
+
+_ENV_VALUE_UNSET = ""
+
+
+class SpanProcessor:
+ """Interface which allows hooks for SDK's `Span` start and end method
+ invocations.
+
+ Span processors can be registered directly using
+ :func:`TracerProvider.add_span_processor` and they are invoked
+ in the same order as they were registered.
+ """
+
+ def on_start(
+ self,
+ span: "Span",
+ parent_context: Optional[context_api.Context] = None,
+ ) -> None:
+ """Called when a :class:`opentelemetry.trace.Span` is started.
+
+ This method is called synchronously on the thread that starts the
+ span, therefore it should not block or throw an exception.
+
+ Args:
+ span: The :class:`opentelemetry.trace.Span` that just started.
+ parent_context: The parent context of the span that just started.
+ """
+
+ def on_end(self, span: "ReadableSpan") -> None:
+ """Called when a :class:`opentelemetry.trace.Span` is ended.
+
+ This method is called synchronously on the thread that ends the
+ span, therefore it should not block or throw an exception.
+
+ Args:
+ span: The :class:`opentelemetry.trace.Span` that just ended.
+ """
+
+ def shutdown(self) -> None:
+ """Called when a :class:`opentelemetry.sdk.trace.TracerProvider` is shutdown."""
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Export all ended spans to the configured Exporter that have not yet
+ been exported.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for spans to be
+ exported.
+
+ Returns:
+ False if the timeout is exceeded, True otherwise.
+ """
+
+
+# Temporary fix until https://github.com/PyCQA/pylint/issues/4098 is resolved
+# pylint:disable=no-member
+class SynchronousMultiSpanProcessor(SpanProcessor):
+ """Implementation of class:`SpanProcessor` that forwards all received
+ events to a list of span processors sequentially.
+
+ The underlying span processors are called in sequential order as they were
+ added.
+ """
+
+ _span_processors: Tuple[SpanProcessor, ...]
+
+ def __init__(self):
+ # use a tuple to avoid race conditions when adding a new span and
+ # iterating through it on "on_start" and "on_end".
+ self._span_processors = ()
+ self._lock = threading.Lock()
+
+ def add_span_processor(self, span_processor: SpanProcessor) -> None:
+ """Adds a SpanProcessor to the list handled by this instance."""
+ with self._lock:
+ self._span_processors += (span_processor,)
+
+ def on_start(
+ self,
+ span: "Span",
+ parent_context: Optional[context_api.Context] = None,
+ ) -> None:
+ for sp in self._span_processors:
+ sp.on_start(span, parent_context=parent_context)
+
+ def on_end(self, span: "ReadableSpan") -> None:
+ for sp in self._span_processors:
+ sp.on_end(span)
+
+ def shutdown(self) -> None:
+ """Sequentially shuts down all underlying span processors."""
+ for sp in self._span_processors:
+ sp.shutdown()
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Sequentially calls force_flush on all underlying
+ :class:`SpanProcessor`
+
+ Args:
+            timeout_millis: The maximum amount of time, across all span
+                processors, to wait for spans to be exported. If the first n
+                span processors exhaust the timeout, the remaining span
+                processors will be skipped.
+
+ Returns:
+ True if all span processors flushed their spans within the
+ given timeout, False otherwise.
+ """
+ deadline_ns = time_ns() + timeout_millis * 1000000
+ for sp in self._span_processors:
+ current_time_ns = time_ns()
+ if current_time_ns >= deadline_ns:
+ return False
+
+ if not sp.force_flush((deadline_ns - current_time_ns) // 1000000):
+ return False
+
+ return True
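+
+    # Worked example of the deadline budgeting above (illustrative only):
+    # with timeout_millis=300 and three processors, if the first flush takes
+    # 200 ms, the second is given the remaining ~100 ms; if that also runs
+    # out, the third is skipped because the deadline has passed.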
+
+
+class ConcurrentMultiSpanProcessor(SpanProcessor):
+ """Implementation of :class:`SpanProcessor` that forwards all received
+ events to a list of span processors in parallel.
+
+ Calls to the underlying span processors are forwarded in parallel by
+ submitting them to a thread pool executor and waiting until each span
+ processor finished its work.
+
+ Args:
+ num_threads: The number of threads managed by the thread pool executor
+ and thus defining how many span processors can work in parallel.
+ """
+
+ def __init__(self, num_threads: int = 2):
+ # use a tuple to avoid race conditions when adding a new span and
+ # iterating through it on "on_start" and "on_end".
+ self._span_processors = () # type: Tuple[SpanProcessor, ...]
+ self._lock = threading.Lock()
+ self._executor = concurrent.futures.ThreadPoolExecutor(
+ max_workers=num_threads
+ )
+
+ def add_span_processor(self, span_processor: SpanProcessor) -> None:
+ """Adds a SpanProcessor to the list handled by this instance."""
+ with self._lock:
+ self._span_processors += (span_processor,)
+
+ def _submit_and_await(
+ self,
+ func: Callable[[SpanProcessor], Callable[..., None]],
+ *args: Any,
+ **kwargs: Any,
+ ):
+ futures = []
+ for sp in self._span_processors:
+ future = self._executor.submit(func(sp), *args, **kwargs)
+ futures.append(future)
+ for future in futures:
+ future.result()
+
+ def on_start(
+ self,
+ span: "Span",
+ parent_context: Optional[context_api.Context] = None,
+ ) -> None:
+ self._submit_and_await(
+ lambda sp: sp.on_start, span, parent_context=parent_context
+ )
+
+ def on_end(self, span: "ReadableSpan") -> None:
+ self._submit_and_await(lambda sp: sp.on_end, span)
+
+ def shutdown(self) -> None:
+ """Shuts down all underlying span processors in parallel."""
+ self._submit_and_await(lambda sp: sp.shutdown)
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Calls force_flush on all underlying span processors in parallel.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for spans to be
+ exported.
+
+ Returns:
+ True if all span processors flushed their spans within the given
+ timeout, False otherwise.
+ """
+ futures = []
+ for sp in self._span_processors: # type: SpanProcessor
+ future = self._executor.submit(sp.force_flush, timeout_millis)
+ futures.append(future)
+
+ timeout_sec = timeout_millis / 1e3
+ done_futures, not_done_futures = concurrent.futures.wait(
+ futures, timeout_sec
+ )
+ if not_done_futures:
+ return False
+
+ for future in done_futures:
+ if not future.result():
+ return False
+
+ return True
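+
+    # A minimal usage sketch (illustrative only, not executed here): a
+    # TracerProvider, defined later in this module, can be constructed with a
+    # ConcurrentMultiSpanProcessor so registered span processors run in
+    # parallel. `my_processor` is a hypothetical SpanProcessor instance.
+    #
+    #   >>> provider = TracerProvider(
+    #   ...     active_span_processor=ConcurrentMultiSpanProcessor(num_threads=4)
+    #   ... )
+    #   >>> provider.add_span_processor(my_processor)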
+
+
+class EventBase(abc.ABC):
+ def __init__(self, name: str, timestamp: Optional[int] = None) -> None:
+ self._name = name
+ if timestamp is None:
+ self._timestamp = time_ns()
+ else:
+ self._timestamp = timestamp
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+ @property
+ def timestamp(self) -> int:
+ return self._timestamp
+
+ @property
+ @abc.abstractmethod
+ def attributes(self) -> types.Attributes:
+ pass
+
+
+class Event(EventBase):
+ """A text annotation with a set of attributes. The attributes of an event
+ are immutable.
+
+ Args:
+ name: Name of the event.
+ attributes: Attributes of the event.
+        timestamp: Timestamp of the event. If `None`, it will be filled in
+            automatically.
+ """
+
+ def __init__(
+ self,
+ name: str,
+ attributes: types.Attributes = None,
+ timestamp: Optional[int] = None,
+ limit: Optional[int] = _DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ ) -> None:
+ super().__init__(name, timestamp)
+ self._attributes = attributes
+
+ @property
+ def attributes(self) -> types.Attributes:
+ return self._attributes
+
+
+def _check_span_ended(func):
+ def wrapper(self, *args, **kwargs):
+ already_ended = False
+ with self._lock: # pylint: disable=protected-access
+ if self._end_time is None: # pylint: disable=protected-access
+ func(self, *args, **kwargs)
+ else:
+ already_ended = True
+
+ if already_ended:
+ logger.warning("Tried calling %s on an ended span.", func.__name__)
+
+ return wrapper
+
+
+class ReadableSpan:
+ """Provides read-only access to span attributes.
+
+    Users should NOT create these objects directly. `ReadableSpan`s are created
+    as a direct result of using the tracing pipeline via the `Tracer`.
+
+ """
+
+ def __init__(
+ self,
+ name: str,
+ context: Optional[trace_api.SpanContext] = None,
+ parent: Optional[trace_api.SpanContext] = None,
+ resource: Optional[Resource] = None,
+ attributes: types.Attributes = None,
+ events: Sequence[Event] = (),
+ links: Sequence[trace_api.Link] = (),
+ kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL,
+ instrumentation_info: Optional[InstrumentationInfo] = None,
+ status: Status = Status(StatusCode.UNSET),
+ start_time: Optional[int] = None,
+ end_time: Optional[int] = None,
+ instrumentation_scope: Optional[InstrumentationScope] = None,
+ ) -> None:
+ self._name = name
+ self._context = context
+ self._kind = kind
+ self._instrumentation_info = instrumentation_info
+ self._instrumentation_scope = instrumentation_scope
+ self._parent = parent
+ self._start_time = start_time
+ self._end_time = end_time
+ self._attributes = attributes
+ self._events = events
+ self._links = links
+ if resource is None:
+ self._resource = Resource.create({})
+ else:
+ self._resource = resource
+ self._status = status
+
+ @property
+ def dropped_attributes(self) -> int:
+ if isinstance(self._attributes, BoundedAttributes):
+ return self._attributes.dropped
+ return 0
+
+ @property
+ def dropped_events(self) -> int:
+ if isinstance(self._events, BoundedList):
+ return self._events.dropped
+ return 0
+
+ @property
+ def dropped_links(self) -> int:
+ if isinstance(self._links, BoundedList):
+ return self._links.dropped
+ return 0
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+ def get_span_context(self):
+ return self._context
+
+ @property
+ def context(self):
+ return self._context
+
+ @property
+ def kind(self) -> trace_api.SpanKind:
+ return self._kind
+
+ @property
+ def parent(self) -> Optional[trace_api.SpanContext]:
+ return self._parent
+
+ @property
+ def start_time(self) -> Optional[int]:
+ return self._start_time
+
+ @property
+ def end_time(self) -> Optional[int]:
+ return self._end_time
+
+ @property
+ def status(self) -> trace_api.Status:
+ return self._status
+
+ @property
+ def attributes(self) -> types.Attributes:
+ return MappingProxyType(self._attributes or {})
+
+ @property
+ def events(self) -> Sequence[Event]:
+ return tuple(event for event in self._events)
+
+ @property
+ def links(self) -> Sequence[trace_api.Link]:
+ return tuple(link for link in self._links)
+
+ @property
+ def resource(self) -> Resource:
+ return self._resource
+
+ @property
+ @deprecated(
+ version="1.11.1", reason="You should use instrumentation_scope"
+ )
+ def instrumentation_info(self) -> Optional[InstrumentationInfo]:
+ return self._instrumentation_info
+
+ @property
+ def instrumentation_scope(self) -> Optional[InstrumentationScope]:
+ return self._instrumentation_scope
+
+ def to_json(self, indent: int = 4):
+ parent_id = None
+ if self.parent is not None:
+ parent_id = f"0x{trace_api.format_span_id(self.parent.span_id)}"
+
+ start_time = None
+ if self._start_time:
+ start_time = util.ns_to_iso_str(self._start_time)
+
+ end_time = None
+ if self._end_time:
+ end_time = util.ns_to_iso_str(self._end_time)
+
+ status = {
+ "status_code": str(self._status.status_code.name),
+ }
+ if self._status.description:
+ status["description"] = self._status.description
+
+ f_span = {
+ "name": self._name,
+ "context": self._format_context(self._context)
+ if self._context
+ else None,
+ "kind": str(self.kind),
+ "parent_id": parent_id,
+ "start_time": start_time,
+ "end_time": end_time,
+ "status": status,
+ "attributes": self._format_attributes(self._attributes),
+ "events": self._format_events(self._events),
+ "links": self._format_links(self._links),
+ "resource": json.loads(self.resource.to_json()),
+ }
+
+ return json.dumps(f_span, indent=indent)
+
+ @staticmethod
+ def _format_context(context: SpanContext) -> Dict[str, str]:
+ return {
+ "trace_id": f"0x{trace_api.format_trace_id(context.trace_id)}",
+ "span_id": f"0x{trace_api.format_span_id(context.span_id)}",
+ "trace_state": repr(context.trace_state),
+ }
+
+ @staticmethod
+ def _format_attributes(
+ attributes: types.Attributes,
+ ) -> Optional[Dict[str, Any]]:
+ if attributes is not None and not isinstance(attributes, dict):
+ return dict(attributes)
+ return attributes
+
+ @staticmethod
+ def _format_events(events: Sequence[Event]) -> List[Dict[str, Any]]:
+ return [
+ {
+ "name": event.name,
+ "timestamp": util.ns_to_iso_str(event.timestamp),
+ "attributes": Span._format_attributes( # pylint: disable=protected-access
+ event.attributes
+ ),
+ }
+ for event in events
+ ]
+
+ @staticmethod
+ def _format_links(links: Sequence[trace_api.Link]) -> List[Dict[str, Any]]:
+ return [
+ {
+ "context": Span._format_context( # pylint: disable=protected-access
+ link.context
+ ),
+ "attributes": Span._format_attributes( # pylint: disable=protected-access
+ link.attributes
+ ),
+ }
+ for link in links
+ ]
+
+
+class SpanLimits:
+ """The limits that should be enforce on recorded data such as events, links, attributes etc.
+
+ This class does not enforce any limits itself. It only provides an a way read limits from env,
+ default values and from user provided arguments.
+
+ All limit arguments must be either a non-negative integer, ``None`` or ``SpanLimits.UNSET``.
+
+ - All limit arguments are optional.
+ - If a limit argument is not set, the class will try to read its value from the corresponding
+ environment variable.
+ - If the environment variable is not set, the default value, if any, will be used.
+
+ Limit precedence:
+
+ - If a model specific limit is set, it will be used.
+ - Else if the corresponding global limit is set, it will be used.
+ - Else if the model specific limit has a default value, the default value will be used.
+ - Else if the global limit has a default value, the default value will be used.
+
+ Args:
+ max_attributes: Maximum number of attributes that can be added to a span, event, and link.
+ Environment variable: OTEL_ATTRIBUTE_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}
+        max_events: Maximum number of events that can be added to a Span.
+            Environment variable: OTEL_SPAN_EVENT_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_SPAN_EVENT_COUNT_LIMIT}
+        max_links: Maximum number of links that can be added to a Span.
+            Environment variable: OTEL_SPAN_LINK_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_SPAN_LINK_COUNT_LIMIT}
+        max_span_attributes: Maximum number of attributes that can be added to a Span.
+            Environment variable: OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT}
+        max_event_attributes: Maximum number of attributes that can be added to an Event.
+            Environment variable: OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT}
+        max_link_attributes: Maximum number of attributes that can be added to a Link.
+            Environment variable: OTEL_LINK_ATTRIBUTE_COUNT_LIMIT
+            Default: {_DEFAULT_OTEL_LINK_ATTRIBUTE_COUNT_LIMIT}
+ max_attribute_length: Maximum length an attribute value can have. Values longer than
+ the specified length will be truncated.
+ max_span_attribute_length: Maximum length a span attribute value can have. Values longer than
+ the specified length will be truncated.
+ """
+
+ UNSET = -1
+
+ def __init__(
+ self,
+ max_attributes: Optional[int] = None,
+ max_events: Optional[int] = None,
+ max_links: Optional[int] = None,
+ max_span_attributes: Optional[int] = None,
+ max_event_attributes: Optional[int] = None,
+ max_link_attributes: Optional[int] = None,
+ max_attribute_length: Optional[int] = None,
+ max_span_attribute_length: Optional[int] = None,
+ ):
+
+ # span events and links count
+ self.max_events = self._from_env_if_absent(
+ max_events,
+ OTEL_SPAN_EVENT_COUNT_LIMIT,
+ _DEFAULT_OTEL_SPAN_EVENT_COUNT_LIMIT,
+ )
+ self.max_links = self._from_env_if_absent(
+ max_links,
+ OTEL_SPAN_LINK_COUNT_LIMIT,
+ _DEFAULT_OTEL_SPAN_LINK_COUNT_LIMIT,
+ )
+
+ # attribute count
+ global_max_attributes = self._from_env_if_absent(
+ max_attributes, OTEL_ATTRIBUTE_COUNT_LIMIT
+ )
+ self.max_attributes = (
+ global_max_attributes
+ if global_max_attributes is not None
+ else _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT
+ )
+
+ self.max_span_attributes = self._from_env_if_absent(
+ max_span_attributes,
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ global_max_attributes
+ if global_max_attributes is not None
+ else _DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.max_event_attributes = self._from_env_if_absent(
+ max_event_attributes,
+ OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT,
+ global_max_attributes
+ if global_max_attributes is not None
+ else _DEFAULT_OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.max_link_attributes = self._from_env_if_absent(
+ max_link_attributes,
+ OTEL_LINK_ATTRIBUTE_COUNT_LIMIT,
+ global_max_attributes
+ if global_max_attributes is not None
+ else _DEFAULT_OTEL_LINK_ATTRIBUTE_COUNT_LIMIT,
+ )
+
+ # attribute length
+ self.max_attribute_length = self._from_env_if_absent(
+ max_attribute_length,
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ )
+ self.max_span_attribute_length = self._from_env_if_absent(
+ max_span_attribute_length,
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ # use global attribute length limit as default
+ self.max_attribute_length,
+ )
+
+ def __repr__(self):
+ return f"{type(self).__name__}(max_span_attributes={self.max_span_attributes}, max_events_attributes={self.max_event_attributes}, max_link_attributes={self.max_link_attributes}, max_attributes={self.max_attributes}, max_events={self.max_events}, max_links={self.max_links}, max_attribute_length={self.max_attribute_length})"
+
+ @classmethod
+ def _from_env_if_absent(
+ cls, value: Optional[int], env_var: str, default: Optional[int] = None
+ ) -> Optional[int]:
+ if value == cls.UNSET:
+ return None
+
+ err_msg = "{0} must be a non-negative integer but got {}"
+
+ # if no value is provided for the limit, try to load it from env
+ if value is None:
+ # return default value if env var is not set
+ if env_var not in environ:
+ return default
+
+ str_value = environ.get(env_var, "").strip().lower()
+ if str_value == _ENV_VALUE_UNSET:
+ return None
+
+ try:
+ value = int(str_value)
+ except ValueError:
+ raise ValueError(err_msg.format(env_var, str_value))
+
+ if value < 0:
+ raise ValueError(err_msg.format(env_var, value))
+ return value
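+
+    # A minimal sketch of the precedence rules documented above (illustrative
+    # only): an explicit argument wins over the environment, which wins over
+    # the default, and SpanLimits.UNSET disables a limit entirely.
+    #
+    #   >>> environ["OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT"] = "64"
+    #   >>> SpanLimits().max_span_attributes        # read from the environment
+    #   64
+    #   >>> SpanLimits(max_span_attributes=10).max_span_attributes
+    #   10
+    #   >>> SpanLimits(max_events=SpanLimits.UNSET).max_events is None
+    #   True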
+
+
+_UnsetLimits = SpanLimits(
+ max_attributes=SpanLimits.UNSET,
+ max_events=SpanLimits.UNSET,
+ max_links=SpanLimits.UNSET,
+ max_span_attributes=SpanLimits.UNSET,
+ max_event_attributes=SpanLimits.UNSET,
+ max_link_attributes=SpanLimits.UNSET,
+ max_attribute_length=SpanLimits.UNSET,
+ max_span_attribute_length=SpanLimits.UNSET,
+)
+
+# Not removed for backward compatibility. Please use SpanLimits instead.
+SPAN_ATTRIBUTE_COUNT_LIMIT = (
+ SpanLimits._from_env_if_absent( # pylint: disable=protected-access
+ None,
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ _DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ )
+)
+
+
+class Span(trace_api.Span, ReadableSpan):
+ """See `opentelemetry.trace.Span`.
+
+ Users should create `Span` objects via the `Tracer` instead of this
+ constructor.
+
+ Args:
+ name: The name of the operation this span represents
+ context: The immutable span context
+ parent: This span's parent's `opentelemetry.trace.SpanContext`, or
+ None if this is a root span
+ sampler: The sampler used to create this span
+ trace_config: TODO
+ resource: Entity producing telemetry
+ attributes: The span's attributes to be exported
+ events: Timestamped events to be exported
+ links: Links to other spans to be exported
+ span_processor: `SpanProcessor` to invoke when starting and ending
+ this `Span`.
+ limits: `SpanLimits` instance that was passed to the `TracerProvider`
+ """
+
+ def __new__(cls, *args, **kwargs):
+ if cls is Span:
+ raise TypeError("Span must be instantiated via a tracer.")
+ return super().__new__(cls)
+
+ # pylint: disable=too-many-locals
+ def __init__(
+ self,
+ name: str,
+ context: trace_api.SpanContext,
+ parent: Optional[trace_api.SpanContext] = None,
+ sampler: Optional[sampling.Sampler] = None,
+ trace_config: None = None, # TODO
+ resource: Resource = None,
+ attributes: types.Attributes = None,
+ events: Sequence[Event] = None,
+ links: Sequence[trace_api.Link] = (),
+ kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL,
+ span_processor: SpanProcessor = SpanProcessor(),
+ instrumentation_info: InstrumentationInfo = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ limits=_UnsetLimits,
+ instrumentation_scope: InstrumentationScope = None,
+ ) -> None:
+ if resource is None:
+ resource = Resource.create({})
+ super().__init__(
+ name=name,
+ context=context,
+ parent=parent,
+ kind=kind,
+ resource=resource,
+ instrumentation_info=instrumentation_info,
+ instrumentation_scope=instrumentation_scope,
+ )
+ self._sampler = sampler
+ self._trace_config = trace_config
+ self._record_exception = record_exception
+ self._set_status_on_exception = set_status_on_exception
+ self._span_processor = span_processor
+ self._limits = limits
+ self._lock = threading.Lock()
+ self._attributes = BoundedAttributes(
+ self._limits.max_span_attributes,
+ attributes,
+ immutable=False,
+ max_value_len=self._limits.max_span_attribute_length,
+ )
+ self._events = self._new_events()
+ if events:
+ for event in events:
+ event._attributes = BoundedAttributes(
+ self._limits.max_event_attributes,
+ event.attributes,
+ max_value_len=self._limits.max_attribute_length,
+ )
+ self._events.append(event)
+
+ if links is None:
+ self._links = self._new_links()
+ else:
+ for link in links:
+ link._attributes = BoundedAttributes(
+ self._limits.max_link_attributes,
+ link.attributes,
+ max_value_len=self._limits.max_attribute_length,
+ )
+ self._links = BoundedList.from_seq(self._limits.max_links, links)
+
+ def __repr__(self):
+ return f'{type(self).__name__}(name="{self._name}", context={self._context})'
+
+ def _new_events(self):
+ return BoundedList(self._limits.max_events)
+
+ def _new_links(self):
+ return BoundedList(self._limits.max_links)
+
+ def get_span_context(self):
+ return self._context
+
+ def set_attributes(
+ self, attributes: Dict[str, types.AttributeValue]
+ ) -> None:
+ with self._lock:
+ if self._end_time is not None:
+ logger.warning("Setting attribute on ended span.")
+ return
+
+ for key, value in attributes.items():
+ self._attributes[key] = value
+
+ def set_attribute(self, key: str, value: types.AttributeValue) -> None:
+ return self.set_attributes({key: value})
+
+ @_check_span_ended
+ def _add_event(self, event: EventBase) -> None:
+ self._events.append(event)
+
+ def add_event(
+ self,
+ name: str,
+ attributes: types.Attributes = None,
+ timestamp: Optional[int] = None,
+ ) -> None:
+ attributes = BoundedAttributes(
+ self._limits.max_event_attributes,
+ attributes,
+ max_value_len=self._limits.max_attribute_length,
+ )
+ self._add_event(
+ Event(
+ name=name,
+ attributes=attributes,
+ timestamp=timestamp,
+ )
+ )
+
+ def _readable_span(self) -> ReadableSpan:
+ return ReadableSpan(
+ name=self._name,
+ context=self._context,
+ parent=self._parent,
+ resource=self._resource,
+ attributes=self._attributes,
+ events=self._events,
+ links=self._links,
+ kind=self.kind,
+ status=self._status,
+ start_time=self._start_time,
+ end_time=self._end_time,
+ instrumentation_info=self._instrumentation_info,
+ instrumentation_scope=self._instrumentation_scope,
+ )
+
+ def start(
+ self,
+ start_time: Optional[int] = None,
+ parent_context: Optional[context_api.Context] = None,
+ ) -> None:
+ with self._lock:
+ if self._start_time is not None:
+ logger.warning("Calling start() on a started span.")
+ return
+ self._start_time = (
+ start_time if start_time is not None else time_ns()
+ )
+
+ self._span_processor.on_start(self, parent_context=parent_context)
+
+ def end(self, end_time: Optional[int] = None) -> None:
+ with self._lock:
+ if self._start_time is None:
+                raise RuntimeError("Calling end() on a span that has not been started.")
+ if self._end_time is not None:
+ logger.warning("Calling end() on an ended span.")
+ return
+
+ self._end_time = end_time if end_time is not None else time_ns()
+
+ self._span_processor.on_end(self._readable_span())
+
+ @_check_span_ended
+ def update_name(self, name: str) -> None:
+ self._name = name
+
+ def is_recording(self) -> bool:
+ return self._end_time is None
+
+ @_check_span_ended
+ def set_status(
+ self,
+ status: typing.Union[Status, StatusCode],
+ description: typing.Optional[str] = None,
+ ) -> None:
+ # Ignore future calls if status is already set to OK
+ # Ignore calls to set to StatusCode.UNSET
+ if isinstance(status, Status):
+ if (
+ self._status
+ and self._status.status_code is StatusCode.OK
+ or status.status_code is StatusCode.UNSET
+ ):
+ return
+ if description is not None:
+ logger.warning(
+ "Description %s ignored. Use either `Status` or `(StatusCode, Description)`",
+ description,
+ )
+ self._status = status
+ elif isinstance(status, StatusCode):
+ if (
+ self._status
+ and self._status.status_code is StatusCode.OK
+ or status is StatusCode.UNSET
+ ):
+ return
+ self._status = Status(status, description)
+
+ def __exit__(
+ self,
+ exc_type: Optional[Type[BaseException]],
+ exc_val: Optional[BaseException],
+ exc_tb: Optional[TracebackType],
+ ) -> None:
+ """Ends context manager and calls `end` on the `Span`."""
+ if exc_val is not None and self.is_recording():
+ # Record the exception as an event
+ # pylint:disable=protected-access
+ if self._record_exception:
+ self.record_exception(exception=exc_val, escaped=True)
+ # Records status if span is used as context manager
+ # i.e. with tracer.start_span() as span:
+ if self._set_status_on_exception:
+ self.set_status(
+ Status(
+ status_code=StatusCode.ERROR,
+ description=f"{exc_type.__name__}: {exc_val}",
+ )
+ )
+
+ super().__exit__(exc_type, exc_val, exc_tb)
+
+ def record_exception(
+ self,
+ exception: Exception,
+ attributes: types.Attributes = None,
+ timestamp: Optional[int] = None,
+ escaped: bool = False,
+ ) -> None:
+ """Records an exception as a span event."""
+ try:
+ stacktrace = traceback.format_exc()
+ except Exception: # pylint: disable=broad-except
+ # workaround for python 3.4, format_exc can raise
+ # an AttributeError if the __context__ on
+ # an exception is None
+ stacktrace = "Exception occurred on stacktrace formatting"
+ _attributes = {
+ "exception.type": exception.__class__.__name__,
+ "exception.message": str(exception),
+ "exception.stacktrace": stacktrace,
+ "exception.escaped": str(escaped),
+ }
+ if attributes:
+ _attributes.update(attributes)
+ self.add_event(
+ name="exception", attributes=_attributes, timestamp=timestamp
+ )
+
+
+class _Span(Span):
+ """Protected implementation of `opentelemetry.trace.Span`.
+
+ This constructor exists to prevent the instantiation of the `Span` class
+ by other mechanisms than through the `Tracer`.
+ """
+
+
+class Tracer(trace_api.Tracer):
+ """See `opentelemetry.trace.Tracer`."""
+
+ def __init__(
+ self,
+ sampler: sampling.Sampler,
+ resource: Resource,
+ span_processor: Union[
+ SynchronousMultiSpanProcessor, ConcurrentMultiSpanProcessor
+ ],
+ id_generator: IdGenerator,
+ instrumentation_info: InstrumentationInfo,
+ span_limits: SpanLimits,
+ instrumentation_scope: InstrumentationScope,
+ ) -> None:
+ self.sampler = sampler
+ self.resource = resource
+ self.span_processor = span_processor
+ self.id_generator = id_generator
+ self.instrumentation_info = instrumentation_info
+ self._span_limits = span_limits
+ self._instrumentation_scope = instrumentation_scope
+
+ @contextmanager
+ def start_as_current_span(
+ self,
+ name: str,
+ context: Optional[context_api.Context] = None,
+ kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: Sequence[trace_api.Link] = (),
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ end_on_exit: bool = True,
+ ) -> Iterator[trace_api.Span]:
+ span = self.start_span(
+ name=name,
+ context=context,
+ kind=kind,
+ attributes=attributes,
+ links=links,
+ start_time=start_time,
+ record_exception=record_exception,
+ set_status_on_exception=set_status_on_exception,
+ )
+ with trace_api.use_span(
+ span,
+ end_on_exit=end_on_exit,
+ record_exception=record_exception,
+ set_status_on_exception=set_status_on_exception,
+ ) as span_context:
+ yield span_context
+
+ def start_span( # pylint: disable=too-many-locals
+ self,
+ name: str,
+ context: Optional[context_api.Context] = None,
+ kind: trace_api.SpanKind = trace_api.SpanKind.INTERNAL,
+ attributes: types.Attributes = None,
+ links: Sequence[trace_api.Link] = (),
+ start_time: Optional[int] = None,
+ record_exception: bool = True,
+ set_status_on_exception: bool = True,
+ ) -> trace_api.Span:
+
+ parent_span_context = trace_api.get_current_span(
+ context
+ ).get_span_context()
+
+ if parent_span_context is not None and not isinstance(
+ parent_span_context, trace_api.SpanContext
+ ):
+ raise TypeError(
+ "parent_span_context must be a SpanContext or None."
+ )
+
+ # is_valid determines root span
+ if parent_span_context is None or not parent_span_context.is_valid:
+ parent_span_context = None
+ trace_id = self.id_generator.generate_trace_id()
+ else:
+ trace_id = parent_span_context.trace_id
+
+ # The sampler decides whether to create a real or no-op span at the
+ # time of span creation. No-op spans do not record events, and are not
+ # exported.
+ # The sampler may also add attributes to the newly-created span, e.g.
+ # to include information about the sampling result.
+ # The sampler may also modify the parent span context's tracestate
+ sampling_result = self.sampler.should_sample(
+ context, trace_id, name, kind, attributes, links
+ )
+
+ trace_flags = (
+ trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED)
+ if sampling_result.decision.is_sampled()
+ else trace_api.TraceFlags(trace_api.TraceFlags.DEFAULT)
+ )
+ span_context = trace_api.SpanContext(
+ trace_id,
+ self.id_generator.generate_span_id(),
+ is_remote=False,
+ trace_flags=trace_flags,
+ trace_state=sampling_result.trace_state,
+ )
+
+ # Only record if is_recording() is true
+ if sampling_result.decision.is_recording():
+ # pylint:disable=protected-access
+ span = _Span(
+ name=name,
+ context=span_context,
+ parent=parent_span_context,
+ sampler=self.sampler,
+ resource=self.resource,
+ attributes=sampling_result.attributes.copy(),
+ span_processor=self.span_processor,
+ kind=kind,
+ links=links,
+ instrumentation_info=self.instrumentation_info,
+ record_exception=record_exception,
+ set_status_on_exception=set_status_on_exception,
+ limits=self._span_limits,
+ instrumentation_scope=self._instrumentation_scope,
+ )
+ span.start(start_time=start_time, parent_context=context)
+ else:
+ span = trace_api.NonRecordingSpan(context=span_context)
+ return span
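+
+    # A minimal sketch of the sampling flow above (illustrative only): when
+    # the sampler drops a span, a NonRecordingSpan is returned, so callers use
+    # the same API regardless of the sampling decision.
+    #
+    #   >>> tracer = TracerProvider(
+    #   ...     sampler=sampling.ALWAYS_OFF
+    #   ... ).get_tracer("example.module")  # module name is hypothetical
+    #   >>> tracer.start_span("op").is_recording()
+    #   False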
+
+
+class TracerProvider(trace_api.TracerProvider):
+ """See `opentelemetry.trace.TracerProvider`."""
+
+ def __init__(
+ self,
+ sampler: sampling.Sampler = None,
+ resource: Resource = None,
+ shutdown_on_exit: bool = True,
+ active_span_processor: Union[
+ SynchronousMultiSpanProcessor, ConcurrentMultiSpanProcessor
+ ] = None,
+ id_generator: IdGenerator = None,
+ span_limits: SpanLimits = None,
+ ):
+ self._active_span_processor = (
+ active_span_processor or SynchronousMultiSpanProcessor()
+ )
+ if id_generator is None:
+ self.id_generator = RandomIdGenerator()
+ else:
+ self.id_generator = id_generator
+ if resource is None:
+ self._resource = Resource.create({})
+ else:
+ self._resource = resource
+ if not sampler:
+ sampler = sampling._get_from_env_or_default()
+ self.sampler = sampler
+ self._span_limits = span_limits or SpanLimits()
+ self._atexit_handler = None
+
+ if shutdown_on_exit:
+ self._atexit_handler = atexit.register(self.shutdown)
+
+ @property
+ def resource(self) -> Resource:
+ return self._resource
+
+ def get_tracer(
+ self,
+ instrumenting_module_name: str,
+ instrumenting_library_version: typing.Optional[str] = None,
+ schema_url: typing.Optional[str] = None,
+ ) -> "trace_api.Tracer":
+ if not instrumenting_module_name: # Reject empty strings too.
+ instrumenting_module_name = ""
+ logger.error("get_tracer called with missing module name.")
+ if instrumenting_library_version is None:
+ instrumenting_library_version = ""
+
+ filterwarnings(
+ "ignore",
+ message=(
+ r"Call to deprecated method __init__. \(You should use "
+ r"InstrumentationScope\) -- Deprecated since version 1.11.1."
+ ),
+ category=DeprecationWarning,
+ module="opentelemetry.sdk.trace",
+ )
+
+ instrumentation_info = InstrumentationInfo(
+ instrumenting_module_name,
+ instrumenting_library_version,
+ schema_url,
+ )
+
+ return Tracer(
+ self.sampler,
+ self.resource,
+ self._active_span_processor,
+ self.id_generator,
+ instrumentation_info,
+ self._span_limits,
+ InstrumentationScope(
+ instrumenting_module_name,
+ instrumenting_library_version,
+ schema_url,
+ ),
+ )
+
+ def add_span_processor(self, span_processor: SpanProcessor) -> None:
+ """Registers a new :class:`SpanProcessor` for this `TracerProvider`.
+
+ The span processors are invoked in the same order they are registered.
+ """
+
+ # no lock here because add_span_processor is thread safe for both
+ # SynchronousMultiSpanProcessor and ConcurrentMultiSpanProcessor.
+ self._active_span_processor.add_span_processor(span_processor)
+
+ def shutdown(self):
+ """Shut down the span processors added to the tracer provider."""
+ self._active_span_processor.shutdown()
+ if self._atexit_handler is not None:
+ atexit.unregister(self._atexit_handler)
+ self._atexit_handler = None
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Requests the active span processor to process all spans that have not
+ yet been processed.
+
+ By default force flush is called sequentially on all added span
+ processors. This means that span processors further back in the list
+ have less time to flush their spans.
+ To have span processors flush their spans in parallel it is possible to
+ initialize the tracer provider with an instance of
+ `ConcurrentMultiSpanProcessor` at the cost of using multiple threads.
+
+ Args:
+ timeout_millis: The maximum amount of time to wait for spans to be
+ processed.
+
+ Returns:
+ False if the timeout is exceeded, True otherwise.
+ """
+ return self._active_span_processor.force_flush(timeout_millis)
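+
+# A minimal end-to-end sketch (illustrative only): wiring a TracerProvider
+# with a span processor and producing one span. `MyExporter` stands in for a
+# hypothetical SpanExporter implementation.
+#
+#   >>> from opentelemetry import trace
+#   >>> from opentelemetry.sdk.trace import TracerProvider
+#   >>> from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+#   >>> provider = TracerProvider()
+#   >>> provider.add_span_processor(SimpleSpanProcessor(MyExporter()))
+#   >>> trace.set_tracer_provider(provider)
+#   >>> with trace.get_tracer(__name__).start_as_current_span("work"):
+#   ...     pass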
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py
new file mode 100644
index 0000000000..7f56a30172
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py
@@ -0,0 +1,527 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import logging
+import os
+import sys
+import threading
+import typing
+from enum import Enum
+from os import environ, linesep
+from time import time_ns
+from typing import Optional
+
+from opentelemetry.context import (
+ _SUPPRESS_INSTRUMENTATION_KEY,
+ Context,
+ attach,
+ detach,
+ set_value,
+)
+from opentelemetry.sdk.environment_variables import (
+ OTEL_BSP_EXPORT_TIMEOUT,
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE,
+ OTEL_BSP_MAX_QUEUE_SIZE,
+ OTEL_BSP_SCHEDULE_DELAY,
+)
+from opentelemetry.sdk.trace import ReadableSpan, Span, SpanProcessor
+from opentelemetry.util._once import Once
+
+_DEFAULT_SCHEDULE_DELAY_MILLIS = 5000
+_DEFAULT_MAX_EXPORT_BATCH_SIZE = 512
+_DEFAULT_EXPORT_TIMEOUT_MILLIS = 30000
+_DEFAULT_MAX_QUEUE_SIZE = 2048
+_ENV_VAR_INT_VALUE_ERROR_MESSAGE = (
+ "Unable to parse value for %s as integer. Defaulting to %s."
+)
+
+logger = logging.getLogger(__name__)
+
+
+class SpanExportResult(Enum):
+ SUCCESS = 0
+ FAILURE = 1
+
+
+class SpanExporter:
+ """Interface for exporting spans.
+
+ Interface to be implemented by services that want to export spans recorded
+ in their own format.
+
+    To export data this MUST be registered to the :class:`opentelemetry.sdk.trace.Tracer` using a
+ `SimpleSpanProcessor` or a `BatchSpanProcessor`.
+ """
+
+ def export(
+ self, spans: typing.Sequence[ReadableSpan]
+ ) -> "SpanExportResult":
+ """Exports a batch of telemetry data.
+
+ Args:
+ spans: The list of `opentelemetry.trace.Span` objects to be exported
+
+ Returns:
+ The result of the export
+ """
+
+ def shutdown(self) -> None:
+ """Shuts down the exporter.
+
+ Called when the SDK is shut down.
+ """
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ """Hint to ensure that the export of any spans the exporter has received
+ prior to the call to ForceFlush SHOULD be completed as soon as possible, preferably
+ before returning from this method.
+ """
+
+
+class SimpleSpanProcessor(SpanProcessor):
+ """Simple SpanProcessor implementation.
+
+ SimpleSpanProcessor is an implementation of `SpanProcessor` that
+ passes ended spans directly to the configured `SpanExporter`.
+ """
+
+ def __init__(self, span_exporter: SpanExporter):
+ self.span_exporter = span_exporter
+
+ def on_start(
+ self, span: Span, parent_context: typing.Optional[Context] = None
+ ) -> None:
+ pass
+
+ def on_end(self, span: ReadableSpan) -> None:
+ if not span.context.trace_flags.sampled:
+ return
+ token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
+ try:
+ self.span_exporter.export((span,))
+ # pylint: disable=broad-except
+ except Exception:
+ logger.exception("Exception while exporting Span.")
+ detach(token)
+
+ def shutdown(self) -> None:
+ self.span_exporter.shutdown()
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ # pylint: disable=unused-argument
+ return True
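+
+    # A minimal wiring sketch (illustrative only): SimpleSpanProcessor exports
+    # each span synchronously on the thread that ends it, which is convenient
+    # for debugging but adds latency; BatchSpanProcessor below is usually
+    # preferred in production.
+    #
+    #   >>> from opentelemetry.sdk.trace.export import ConsoleSpanExporter
+    #   >>> processor = SimpleSpanProcessor(ConsoleSpanExporter())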
+
+
+class _FlushRequest:
+ """Represents a request for the BatchSpanProcessor to flush spans."""
+
+ __slots__ = ["event", "num_spans"]
+
+ def __init__(self):
+ self.event = threading.Event()
+ self.num_spans = 0
+
+
+_BSP_RESET_ONCE = Once()
+
+
+class BatchSpanProcessor(SpanProcessor):
+ """Batch span processor implementation.
+
+ `BatchSpanProcessor` is an implementation of `SpanProcessor` that
+ batches ended spans and pushes them to the configured `SpanExporter`.
+
+ `BatchSpanProcessor` is configurable with the following environment
+ variables which correspond to constructor parameters:
+
+ - :envvar:`OTEL_BSP_SCHEDULE_DELAY`
+ - :envvar:`OTEL_BSP_MAX_QUEUE_SIZE`
+ - :envvar:`OTEL_BSP_MAX_EXPORT_BATCH_SIZE`
+ - :envvar:`OTEL_BSP_EXPORT_TIMEOUT`
+ """
+
+ def __init__(
+ self,
+ span_exporter: SpanExporter,
+ max_queue_size: int = None,
+ schedule_delay_millis: float = None,
+ max_export_batch_size: int = None,
+ export_timeout_millis: float = None,
+ ):
+ if max_queue_size is None:
+ max_queue_size = BatchSpanProcessor._default_max_queue_size()
+
+ if schedule_delay_millis is None:
+ schedule_delay_millis = (
+ BatchSpanProcessor._default_schedule_delay_millis()
+ )
+
+ if max_export_batch_size is None:
+ max_export_batch_size = (
+ BatchSpanProcessor._default_max_export_batch_size()
+ )
+
+ if export_timeout_millis is None:
+ export_timeout_millis = (
+ BatchSpanProcessor._default_export_timeout_millis()
+ )
+
+ BatchSpanProcessor._validate_arguments(
+ max_queue_size, schedule_delay_millis, max_export_batch_size
+ )
+
+ self.span_exporter = span_exporter
+ self.queue = collections.deque(
+ [], max_queue_size
+ ) # type: typing.Deque[Span]
+ self.worker_thread = threading.Thread(
+ name="OtelBatchSpanProcessor", target=self.worker, daemon=True
+ )
+ self.condition = threading.Condition(threading.Lock())
+ self._flush_request = None # type: typing.Optional[_FlushRequest]
+ self.schedule_delay_millis = schedule_delay_millis
+ self.max_export_batch_size = max_export_batch_size
+ self.max_queue_size = max_queue_size
+ self.export_timeout_millis = export_timeout_millis
+ self.done = False
+ # flag that indicates that spans are being dropped
+ self._spans_dropped = False
+        # preallocated list to send spans to the exporter
+ self.spans_list = [
+ None
+ ] * self.max_export_batch_size # type: typing.List[typing.Optional[Span]]
+ self.worker_thread.start()
+ # Only available in *nix since py37.
+ if hasattr(os, "register_at_fork"):
+ os.register_at_fork(
+ after_in_child=self._at_fork_reinit
+ ) # pylint: disable=protected-access
+ self._pid = os.getpid()
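+
+    # A minimal configuration sketch (illustrative only): the constructor
+    # arguments above can also be supplied through the environment; the
+    # values below are arbitrary examples.
+    #
+    #   export OTEL_BSP_SCHEDULE_DELAY=1000          # milliseconds
+    #   export OTEL_BSP_MAX_QUEUE_SIZE=4096
+    #   export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=256
+    #   export OTEL_BSP_EXPORT_TIMEOUT=10000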
+
+ def on_start(
+ self, span: Span, parent_context: typing.Optional[Context] = None
+ ) -> None:
+ pass
+
+ def on_end(self, span: ReadableSpan) -> None:
+ if self.done:
+ logger.warning("Already shutdown, dropping span.")
+ return
+ if not span.context.trace_flags.sampled:
+ return
+ if self._pid != os.getpid():
+ _BSP_RESET_ONCE.do_once(self._at_fork_reinit)
+
+ if len(self.queue) == self.max_queue_size:
+ if not self._spans_dropped:
+ logger.warning("Queue is full, likely spans will be dropped.")
+ self._spans_dropped = True
+
+ self.queue.appendleft(span)
+
+ if len(self.queue) >= self.max_export_batch_size:
+ with self.condition:
+ self.condition.notify()
+
+ def _at_fork_reinit(self):
+ self.condition = threading.Condition(threading.Lock())
+ self.queue.clear()
+
+        # worker_thread is local to a process; after a fork, only the thread
+        # that issued the fork continues to exist. A new worker thread must
+        # be started in the child process.
+ self.worker_thread = threading.Thread(
+ name="OtelBatchSpanProcessor", target=self.worker, daemon=True
+ )
+ self.worker_thread.start()
+ self._pid = os.getpid()
+
+ def worker(self):
+ timeout = self.schedule_delay_millis / 1e3
+ flush_request = None # type: typing.Optional[_FlushRequest]
+ while not self.done:
+ with self.condition:
+ if self.done:
+ # done flag may have changed, avoid waiting
+ break
+ flush_request = self._get_and_unset_flush_request()
+ if (
+ len(self.queue) < self.max_export_batch_size
+ and flush_request is None
+ ):
+ self.condition.wait(timeout)
+ flush_request = self._get_and_unset_flush_request()
+ if not self.queue:
+ # spurious notification, let's wait again, reset timeout
+ timeout = self.schedule_delay_millis / 1e3
+ self._notify_flush_request_finished(flush_request)
+ flush_request = None
+ continue
+ if self.done:
+ # missing spans will be sent when calling flush
+ break
+
+            # subtract the duration of this export call from the next timeout
+ start = time_ns()
+ self._export(flush_request)
+ end = time_ns()
+ duration = (end - start) / 1e9
+ timeout = self.schedule_delay_millis / 1e3 - duration
+
+ self._notify_flush_request_finished(flush_request)
+ flush_request = None
+
+ # there might have been a new flush request while export was running
+ # and before the done flag switched to true
+ with self.condition:
+ shutdown_flush_request = self._get_and_unset_flush_request()
+
+ # be sure that all spans are sent
+ self._drain_queue()
+ self._notify_flush_request_finished(flush_request)
+ self._notify_flush_request_finished(shutdown_flush_request)
+
+ def _get_and_unset_flush_request(
+ self,
+ ) -> typing.Optional[_FlushRequest]:
+ """Returns the current flush request and makes it invisible to the
+ worker thread for subsequent calls.
+ """
+ flush_request = self._flush_request
+ self._flush_request = None
+ if flush_request is not None:
+ flush_request.num_spans = len(self.queue)
+ return flush_request
+
+ @staticmethod
+ def _notify_flush_request_finished(
+ flush_request: typing.Optional[_FlushRequest],
+ ):
+ """Notifies the flush initiator(s) waiting on the given request/event
+        that the flush operation has finished.
+ """
+ if flush_request is not None:
+ flush_request.event.set()
+
+ def _get_or_create_flush_request(self) -> _FlushRequest:
+ """Either returns the current active flush event or creates a new one.
+
+ The flush event will be visible and read by the worker thread before an
+ export operation starts. Callers of a flush operation may wait on the
+        returned event to be notified when the flush/export operation has
+        finished.
+
+        This method is not thread-safe; callers are responsible for
+        synchronization/locking.
+ """
+ if self._flush_request is None:
+ self._flush_request = _FlushRequest()
+ return self._flush_request
+
+ def _export(self, flush_request: typing.Optional[_FlushRequest]):
+ """Exports spans considering the given flush_request.
+
+        If a flush_request is given, spans are exported in batches until
+        the number of exported spans reaches or exceeds the number of spans
+        in the flush request.
+        If no flush_request is given, at most max_export_batch_size spans
+        are exported.
+ """
+ if not flush_request:
+ self._export_batch()
+ return
+
+ num_spans = flush_request.num_spans
+ while self.queue:
+ num_exported = self._export_batch()
+ num_spans -= num_exported
+
+ if num_spans <= 0:
+ break
+
+ def _export_batch(self) -> int:
+ """Exports at most max_export_batch_size spans and returns the number of
+ exported spans.
+ """
+ idx = 0
+ # currently only a single thread acts as consumer, so queue.pop() will
+ # not raise an exception
+ while idx < self.max_export_batch_size and self.queue:
+ self.spans_list[idx] = self.queue.pop()
+ idx += 1
+ token = attach(set_value(_SUPPRESS_INSTRUMENTATION_KEY, True))
+ try:
+ # Ignore type b/c the Optional[None]+slicing is too "clever"
+ # for mypy
+ self.span_exporter.export(self.spans_list[:idx]) # type: ignore
+ except Exception: # pylint: disable=broad-except
+ logger.exception("Exception while exporting Span batch.")
+ detach(token)
+
+ # clean up list
+ for index in range(idx):
+ self.spans_list[index] = None
+ return idx
+
+ def _drain_queue(self):
+ """Export all elements until queue is empty.
+
+        Can only be called from the worker thread context because it invokes
+        `export`, which is not thread-safe.
+ """
+ while self.queue:
+ self._export_batch()
+
+    def force_flush(
+        self, timeout_millis: typing.Optional[int] = None
+    ) -> bool:
+ if timeout_millis is None:
+ timeout_millis = self.export_timeout_millis
+
+ if self.done:
+ logger.warning("Already shutdown, ignoring call to force_flush().")
+ return True
+
+ with self.condition:
+ flush_request = self._get_or_create_flush_request()
+ # signal the worker thread to flush and wait for it to finish
+ self.condition.notify_all()
+
+        # wait for the flush request to be processed
+ ret = flush_request.event.wait(timeout_millis / 1e3)
+ if not ret:
+ logger.warning("Timeout was exceeded in force_flush().")
+ return ret
+
+ def shutdown(self) -> None:
+ # signal the worker thread to finish and then wait for it
+ self.done = True
+ with self.condition:
+ self.condition.notify_all()
+ self.worker_thread.join()
+ self.span_exporter.shutdown()
+
+ @staticmethod
+ def _default_max_queue_size():
+ try:
+ return int(
+ environ.get(OTEL_BSP_MAX_QUEUE_SIZE, _DEFAULT_MAX_QUEUE_SIZE)
+ )
+ except ValueError:
+ logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BSP_MAX_QUEUE_SIZE,
+ _DEFAULT_MAX_QUEUE_SIZE,
+ )
+ return _DEFAULT_MAX_QUEUE_SIZE
+
+ @staticmethod
+ def _default_schedule_delay_millis():
+ try:
+ return int(
+ environ.get(
+ OTEL_BSP_SCHEDULE_DELAY, _DEFAULT_SCHEDULE_DELAY_MILLIS
+ )
+ )
+ except ValueError:
+ logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BSP_SCHEDULE_DELAY,
+ _DEFAULT_SCHEDULE_DELAY_MILLIS,
+ )
+ return _DEFAULT_SCHEDULE_DELAY_MILLIS
+
+ @staticmethod
+ def _default_max_export_batch_size():
+ try:
+ return int(
+ environ.get(
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE,
+ _DEFAULT_MAX_EXPORT_BATCH_SIZE,
+ )
+ )
+ except ValueError:
+ logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE,
+ _DEFAULT_MAX_EXPORT_BATCH_SIZE,
+ )
+ return _DEFAULT_MAX_EXPORT_BATCH_SIZE
+
+ @staticmethod
+ def _default_export_timeout_millis():
+ try:
+ return int(
+ environ.get(
+ OTEL_BSP_EXPORT_TIMEOUT, _DEFAULT_EXPORT_TIMEOUT_MILLIS
+ )
+ )
+ except ValueError:
+ logger.exception(
+ _ENV_VAR_INT_VALUE_ERROR_MESSAGE,
+ OTEL_BSP_EXPORT_TIMEOUT,
+ _DEFAULT_EXPORT_TIMEOUT_MILLIS,
+ )
+ return _DEFAULT_EXPORT_TIMEOUT_MILLIS
+
+ @staticmethod
+ def _validate_arguments(
+ max_queue_size, schedule_delay_millis, max_export_batch_size
+ ):
+ if max_queue_size <= 0:
+ raise ValueError("max_queue_size must be a positive integer.")
+
+ if schedule_delay_millis <= 0:
+ raise ValueError("schedule_delay_millis must be positive.")
+
+ if max_export_batch_size <= 0:
+ raise ValueError(
+ "max_export_batch_size must be a positive integer."
+ )
+
+ if max_export_batch_size > max_queue_size:
+ raise ValueError(
+ "max_export_batch_size must be less than or equal to max_queue_size."
+ )
+
+
+class ConsoleSpanExporter(SpanExporter):
+ """Implementation of :class:`SpanExporter` that prints spans to the
+ console.
+
+    This class can be used for diagnostic purposes. It prints the exported
+    spans to the console (STDOUT by default).
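+
+    A small sketch of the constructor knobs (the values shown are
+    illustrative, not defaults):
+
+    .. code:: python
+
+        import sys
+
+        # write spans to stderr, one JSON document per span
+        exporter = ConsoleSpanExporter(
+            out=sys.stderr,
+            formatter=lambda span: span.to_json() + "\n",
+        )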
+ """
+
+ def __init__(
+ self,
+ service_name: Optional[str] = None,
+ out: typing.IO = sys.stdout,
+ formatter: typing.Callable[
+ [ReadableSpan], str
+ ] = lambda span: span.to_json()
+ + linesep,
+ ):
+ self.out = out
+ self.formatter = formatter
+ self.service_name = service_name
+
+ def export(self, spans: typing.Sequence[ReadableSpan]) -> SpanExportResult:
+ for span in spans:
+ self.out.write(self.formatter(span))
+ self.out.flush()
+ return SpanExportResult.SUCCESS
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ return True
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py
new file mode 100644
index 0000000000..c28ecfd214
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/in_memory_span_exporter.py
@@ -0,0 +1,61 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import threading
+import typing
+
+from opentelemetry.sdk.trace import ReadableSpan
+from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
+
+
+class InMemorySpanExporter(SpanExporter):
+ """Implementation of :class:`.SpanExporter` that stores spans in memory.
+
+ This class can be used for testing purposes. It stores the exported spans
+ in a list in memory that can be retrieved using the
+ :func:`.get_finished_spans` method.
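+
+    A short testing sketch (assuming the usual SDK wiring):
+
+    .. code:: python
+
+        from opentelemetry.sdk.trace import TracerProvider
+        from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+
+        exporter = InMemorySpanExporter()
+        provider = TracerProvider()
+        provider.add_span_processor(SimpleSpanProcessor(exporter))
+
+        with provider.get_tracer(__name__).start_as_current_span("work"):
+            pass
+
+        assert [s.name for s in exporter.get_finished_spans()] == ["work"]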
+ """
+
+ def __init__(self) -> None:
+ self._finished_spans: typing.List[ReadableSpan] = []
+ self._stopped = False
+ self._lock = threading.Lock()
+
+ def clear(self) -> None:
+ """Clear list of collected spans."""
+ with self._lock:
+ self._finished_spans.clear()
+
+ def get_finished_spans(self) -> typing.Tuple[ReadableSpan, ...]:
+ """Get list of collected spans."""
+ with self._lock:
+ return tuple(self._finished_spans)
+
+ def export(self, spans: typing.Sequence[ReadableSpan]) -> SpanExportResult:
+ """Stores a list of spans in memory."""
+ if self._stopped:
+ return SpanExportResult.FAILURE
+ with self._lock:
+ self._finished_spans.extend(spans)
+ return SpanExportResult.SUCCESS
+
+ def shutdown(self) -> None:
+ """Shut downs the exporter.
+
+ Calls to export after the exporter has been shut down will fail.
+ """
+ self._stopped = True
+
+ def force_flush(self, timeout_millis: int = 30000) -> bool:
+ return True
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/id_generator.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/id_generator.py
new file mode 100644
index 0000000000..62b12a9492
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/id_generator.py
@@ -0,0 +1,52 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import abc
+import random
+
+
+class IdGenerator(abc.ABC):
+ @abc.abstractmethod
+ def generate_span_id(self) -> int:
+ """Get a new span ID.
+
+ Returns:
+ A 64-bit int for use as a span ID
+ """
+
+ @abc.abstractmethod
+ def generate_trace_id(self) -> int:
+ """Get a new trace ID.
+
+ Implementations should at least make the 64 least significant bits
+ uniformly random. Samplers like the `TraceIdRatioBased` sampler rely on
+ this randomness to make sampling decisions.
+
+        See `the specification on TraceIdRatioBased <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#traceidratiobased>`_.
+
+ Returns:
+ A 128-bit int for use as a trace ID
+ """
+
+
+class RandomIdGenerator(IdGenerator):
+ """The default ID generator for TracerProvider which randomly generates all
+ bits when generating IDs.
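+
+    A quick sketch of the ID shapes (the hex formatting is for illustration
+    only):
+
+    .. code:: python
+
+        generator = RandomIdGenerator()
+        trace_id = generator.generate_trace_id()  # 128-bit int
+        span_id = generator.generate_span_id()  # 64-bit int
+        print(f"{trace_id:032x} {span_id:016x}")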
+ """
+
+ def generate_span_id(self) -> int:
+ return random.getrandbits(64)
+
+ def generate_trace_id(self) -> int:
+ return random.getrandbits(128)
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py
new file mode 100644
index 0000000000..0236fac6b6
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/sampling.py
@@ -0,0 +1,450 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+For general information about sampling, see `the specification <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling>`_.
+
+OpenTelemetry provides two types of samplers:
+
+- `StaticSampler`
+- `TraceIdRatioBased`
+
+A `StaticSampler` always returns the same sampling result regardless of the conditions. Both possible StaticSamplers are already created:
+
+- Always sample spans: ALWAYS_ON
+- Never sample spans: ALWAYS_OFF
+
+A `TraceIdRatioBased` sampler makes a sampling decision based on the trace ID and the given sampling probability.
+
+If the span being sampled has a parent, `ParentBased` will respect the parent delegate sampler. Otherwise, it returns the sampling result from the given root sampler.
+
+Currently, sampling decisions are always made during the creation of the span. However, this might not always be the case in the future (see `OTEP #115 <https://github.com/open-telemetry/oteps/pull/115>`_).
+
+Custom samplers can be created by subclassing `Sampler` and implementing `Sampler.should_sample` as well as `Sampler.get_description`.
+
+Samplers can modify the `opentelemetry.trace.span.TraceState` of the parent of the span being created. For custom samplers, it is suggested that `Sampler.should_sample` read the
+parent span context's `opentelemetry.trace.span.TraceState` and pass it into the `SamplingResult`, rather than relying on the explicit trace_state parameter of `Sampler.should_sample`.
+
+To use a sampler, pass it into the tracer provider constructor. For example:
+
+.. code:: python
+
+ from opentelemetry import trace
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import (
+ ConsoleSpanExporter,
+ SimpleSpanProcessor,
+ )
+ from opentelemetry.sdk.trace.sampling import TraceIdRatioBased
+
+ # sample 1 in every 1000 traces
+ sampler = TraceIdRatioBased(1/1000)
+
+ # set the sampler onto the global tracer provider
+ trace.set_tracer_provider(TracerProvider(sampler=sampler))
+
+ # set up an exporter for sampled spans
+ trace.get_tracer_provider().add_span_processor(
+ SimpleSpanProcessor(ConsoleSpanExporter())
+ )
+
+ # created spans will now be sampled by the TraceIdRatioBased sampler
+ with trace.get_tracer(__name__).start_as_current_span("Test Span"):
+ ...
+
+The tracer sampler can also be configured via environment variables ``OTEL_TRACES_SAMPLER`` and ``OTEL_TRACES_SAMPLER_ARG`` (only if applicable).
+The built-in values for ``OTEL_TRACES_SAMPLER`` are:
+
+ * always_on - Sampler that always samples spans, regardless of the parent span's sampling decision.
+ * always_off - Sampler that never samples spans, regardless of the parent span's sampling decision.
+    * traceidratio - Sampler that samples probabilistically based on rate.
+ * parentbased_always_on - (default) Sampler that respects its parent span's sampling decision, but otherwise always samples.
+ * parentbased_always_off - Sampler that respects its parent span's sampling decision, but otherwise never samples.
+    * parentbased_traceidratio - Sampler that respects its parent span's sampling decision, but otherwise samples probabilistically based on rate.
+
+Sampling probability can be set with ``OTEL_TRACES_SAMPLER_ARG`` if the sampler is traceidratio or parentbased_traceidratio. The rate must be in the range [0.0, 1.0]. When not provided,
+the rate defaults to 1.0 (the maximum, i.e. sample every trace).
+
+The previous example, but configured with environment variables instead. Make sure to set ``OTEL_TRACES_SAMPLER=traceidratio`` and ``OTEL_TRACES_SAMPLER_ARG=0.001``.
+
+.. code:: python
+
+ from opentelemetry import trace
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import (
+ ConsoleSpanExporter,
+ SimpleSpanProcessor,
+ )
+
+ trace.set_tracer_provider(TracerProvider())
+
+ # set up an exporter for sampled spans
+ trace.get_tracer_provider().add_span_processor(
+ SimpleSpanProcessor(ConsoleSpanExporter())
+ )
+
+ # created spans will now be sampled by the TraceIdRatioBased sampler with rate 1/1000.
+ with trace.get_tracer(__name__).start_as_current_span("Test Span"):
+ ...
+
+When using a configurator, you can configure a custom sampler. To create a configurable custom sampler, register an entry point for the custom sampler
+factory method or function under the entry point group ``opentelemetry_traces_sampler``. The custom sampler factory method must be of type ``Callable[[str], Sampler]``: it takes a single string argument and
+returns a Sampler object. That input comes from the string value of the ``OTEL_TRACES_SAMPLER_ARG`` environment variable; if ``OTEL_TRACES_SAMPLER_ARG`` is not configured, the input will
+be an empty string. For example:
+
+.. code:: python
+
+ setup(
+ ...
+ entry_points={
+ ...
+ "opentelemetry_traces_sampler": [
+ "custom_sampler_name = path.to.sampler.factory.method:CustomSamplerFactory.get_sampler"
+ ]
+ }
+ )
+ # ...
+    class CustomRatioSampler(Sampler):
+        def __init__(self, rate):
+            # ...
+    # ...
+    class CustomSamplerFactory:
+        @staticmethod
+        def get_sampler(sampler_argument):
+            try:
+                rate = float(sampler_argument)
+                return CustomRatioSampler(rate)
+            except ValueError:  # in case the argument is an empty string
+                return CustomRatioSampler(0.5)
+
+To configure your application with a custom sampler's entry point, set the ``OTEL_TRACES_SAMPLER`` environment variable to the key name of the entry point. For example, to configure the
+above sampler, set ``OTEL_TRACES_SAMPLER=custom_sampler_name`` and ``OTEL_TRACES_SAMPLER_ARG=0.5``.
+"""
+import abc
+import enum
+import os
+from logging import getLogger
+from types import MappingProxyType
+from typing import Optional, Sequence
+
+# pylint: disable=unused-import
+from opentelemetry.context import Context
+from opentelemetry.sdk.environment_variables import (
+ OTEL_TRACES_SAMPLER,
+ OTEL_TRACES_SAMPLER_ARG,
+)
+from opentelemetry.trace import Link, SpanKind, get_current_span
+from opentelemetry.trace.span import TraceState
+from opentelemetry.util.types import Attributes
+
+_logger = getLogger(__name__)
+
+
+class Decision(enum.Enum):
+ # IsRecording() == false, span will not be recorded and all events and attributes will be dropped.
+ DROP = 0
+ # IsRecording() == true, but Sampled flag MUST NOT be set.
+ RECORD_ONLY = 1
+    # IsRecording() == true AND Sampled flag MUST be set.
+ RECORD_AND_SAMPLE = 2
+
+ def is_recording(self):
+ return self in (Decision.RECORD_ONLY, Decision.RECORD_AND_SAMPLE)
+
+ def is_sampled(self):
+ return self is Decision.RECORD_AND_SAMPLE
+
+
+class SamplingResult:
+ """A sampling result as applied to a newly-created Span.
+
+ Args:
+        decision: A sampling decision that determines whether the span is
+            recorded and whether the sampled flag is set in the span
+            context's trace flags.
+        attributes: Attributes to add to the `opentelemetry.trace.Span`.
+        trace_state: The tracestate used for the `opentelemetry.trace.Span`.
+            May have been modified by the sampler.
+ """
+
+ def __repr__(self) -> str:
+ return f"{type(self).__name__}({str(self.decision)}, attributes={str(self.attributes)})"
+
+ def __init__(
+ self,
+ decision: Decision,
+ attributes: "Attributes" = None,
+ trace_state: "TraceState" = None,
+ ) -> None:
+ self.decision = decision
+ if attributes is None:
+ self.attributes = MappingProxyType({})
+ else:
+ self.attributes = MappingProxyType(attributes)
+ self.trace_state = trace_state
+
+
+class Sampler(abc.ABC):
+ @abc.abstractmethod
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+        kind: Optional[SpanKind] = None,
+        attributes: Attributes = None,
+        links: Optional[Sequence["Link"]] = None,
+        trace_state: Optional["TraceState"] = None,
+ ) -> "SamplingResult":
+ pass
+
+ @abc.abstractmethod
+ def get_description(self) -> str:
+ pass
+
+
+class StaticSampler(Sampler):
+ """Sampler that always returns the same decision."""
+
+ def __init__(self, decision: "Decision"):
+ self._decision = decision
+
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+        kind: Optional[SpanKind] = None,
+        attributes: Attributes = None,
+        links: Optional[Sequence["Link"]] = None,
+        trace_state: Optional["TraceState"] = None,
+ ) -> "SamplingResult":
+ if self._decision is Decision.DROP:
+ attributes = None
+ return SamplingResult(
+ self._decision,
+ attributes,
+ _get_parent_trace_state(parent_context),
+ )
+
+ def get_description(self) -> str:
+ if self._decision is Decision.DROP:
+ return "AlwaysOffSampler"
+ return "AlwaysOnSampler"
+
+
+ALWAYS_OFF = StaticSampler(Decision.DROP)
+"""Sampler that never samples spans, regardless of the parent span's sampling decision."""
+
+ALWAYS_ON = StaticSampler(Decision.RECORD_AND_SAMPLE)
+"""Sampler that always samples spans, regardless of the parent span's sampling decision."""
+
+
+class TraceIdRatioBased(Sampler):
+ """
+ Sampler that makes sampling decisions probabilistically based on `rate`.
+
+ Args:
+ rate: Probability (between 0 and 1) that a span will be sampled
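+
+    A sketch of how the rate maps onto trace IDs (the sampler compares the
+    64 low-order bits of the trace ID against ``rate * 2**64``, as computed
+    by ``get_bound_for_rate`` below):
+
+    .. code:: python
+
+        sampler = TraceIdRatioBased(0.25)
+        assert sampler.bound == round(0.25 * (1 << 64))
+        # a trace is sampled when trace_id & ((1 << 64) - 1) < sampler.bound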
+ """
+
+ def __init__(self, rate: float):
+ if rate < 0.0 or rate > 1.0:
+ raise ValueError("Probability must be in range [0.0, 1.0].")
+ self._rate = rate
+ self._bound = self.get_bound_for_rate(self._rate)
+
+ # For compatibility with 64 bit trace IDs, the sampler checks the 64
+ # low-order bits of the trace ID to decide whether to sample a given trace.
+ TRACE_ID_LIMIT = (1 << 64) - 1
+
+ @classmethod
+ def get_bound_for_rate(cls, rate: float) -> int:
+ return round(rate * (cls.TRACE_ID_LIMIT + 1))
+
+ @property
+ def rate(self) -> float:
+ return self._rate
+
+ @property
+ def bound(self) -> int:
+ return self._bound
+
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+        kind: Optional[SpanKind] = None,
+        attributes: Attributes = None,
+        links: Optional[Sequence["Link"]] = None,
+        trace_state: Optional["TraceState"] = None,
+ ) -> "SamplingResult":
+ decision = Decision.DROP
+ if trace_id & self.TRACE_ID_LIMIT < self.bound:
+ decision = Decision.RECORD_AND_SAMPLE
+ if decision is Decision.DROP:
+ attributes = None
+ return SamplingResult(
+ decision,
+ attributes,
+ _get_parent_trace_state(parent_context),
+ )
+
+ def get_description(self) -> str:
+ return f"TraceIdRatioBased{{{self._rate}}}"
+
+
+class ParentBased(Sampler):
+ """
+ If a parent is set, applies the respective delegate sampler.
+ Otherwise, uses the root provided at initialization to make a
+ decision.
+
+ Args:
+ root: Sampler called for spans with no parent (root spans).
+ remote_parent_sampled: Sampler called for a remote sampled parent.
+ remote_parent_not_sampled: Sampler called for a remote parent that is
+ not sampled.
+ local_parent_sampled: Sampler called for a local sampled parent.
+ local_parent_not_sampled: Sampler called for a local parent that is
+ not sampled.
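+
+    A composition sketch (the delegate samplers shown are the defaults and
+    could be omitted):
+
+    .. code:: python
+
+        from opentelemetry.sdk.trace.sampling import (
+            ALWAYS_OFF,
+            ALWAYS_ON,
+            ParentBased,
+            TraceIdRatioBased,
+        )
+
+        # sample 10% of root spans; follow the parent's decision otherwise
+        sampler = ParentBased(
+            root=TraceIdRatioBased(0.1),
+            remote_parent_sampled=ALWAYS_ON,
+            remote_parent_not_sampled=ALWAYS_OFF,
+        )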
+ """
+
+ def __init__(
+ self,
+ root: Sampler,
+ remote_parent_sampled: Sampler = ALWAYS_ON,
+ remote_parent_not_sampled: Sampler = ALWAYS_OFF,
+ local_parent_sampled: Sampler = ALWAYS_ON,
+ local_parent_not_sampled: Sampler = ALWAYS_OFF,
+ ):
+ self._root = root
+ self._remote_parent_sampled = remote_parent_sampled
+ self._remote_parent_not_sampled = remote_parent_not_sampled
+ self._local_parent_sampled = local_parent_sampled
+ self._local_parent_not_sampled = local_parent_not_sampled
+
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+        kind: Optional[SpanKind] = None,
+        attributes: Attributes = None,
+        links: Optional[Sequence["Link"]] = None,
+        trace_state: Optional["TraceState"] = None,
+ ) -> "SamplingResult":
+ parent_span_context = get_current_span(
+ parent_context
+ ).get_span_context()
+ # default to the root sampler
+ sampler = self._root
+ # respect the sampling and remote flag of the parent if present
+ if parent_span_context is not None and parent_span_context.is_valid:
+ if parent_span_context.is_remote:
+ if parent_span_context.trace_flags.sampled:
+ sampler = self._remote_parent_sampled
+ else:
+ sampler = self._remote_parent_not_sampled
+ else:
+ if parent_span_context.trace_flags.sampled:
+ sampler = self._local_parent_sampled
+ else:
+ sampler = self._local_parent_not_sampled
+
+ return sampler.should_sample(
+ parent_context=parent_context,
+ trace_id=trace_id,
+ name=name,
+ kind=kind,
+ attributes=attributes,
+ links=links,
+ )
+
+ def get_description(self):
+ return f"ParentBased{{root:{self._root.get_description()},remoteParentSampled:{self._remote_parent_sampled.get_description()},remoteParentNotSampled:{self._remote_parent_not_sampled.get_description()},localParentSampled:{self._local_parent_sampled.get_description()},localParentNotSampled:{self._local_parent_not_sampled.get_description()}}}"
+
+
+DEFAULT_OFF = ParentBased(ALWAYS_OFF)
+"""Sampler that respects its parent span's sampling decision, but otherwise never samples."""
+
+DEFAULT_ON = ParentBased(ALWAYS_ON)
+"""Sampler that respects its parent span's sampling decision, but otherwise always samples."""
+
+
+class ParentBasedTraceIdRatio(ParentBased):
+ """
+ Sampler that respects its parent span's sampling decision, but otherwise
+    samples probabilistically based on `rate`.
+ """
+
+ def __init__(self, rate: float):
+ root = TraceIdRatioBased(rate=rate)
+ super().__init__(root=root)
+
+
+class _AlwaysOff(StaticSampler):
+ def __init__(self, _):
+ super().__init__(Decision.DROP)
+
+
+class _AlwaysOn(StaticSampler):
+ def __init__(self, _):
+ super().__init__(Decision.RECORD_AND_SAMPLE)
+
+
+class _ParentBasedAlwaysOff(ParentBased):
+ def __init__(self, _):
+ super().__init__(ALWAYS_OFF)
+
+
+class _ParentBasedAlwaysOn(ParentBased):
+ def __init__(self, _):
+ super().__init__(ALWAYS_ON)
+
+
+_KNOWN_SAMPLERS = {
+ "always_on": ALWAYS_ON,
+ "always_off": ALWAYS_OFF,
+ "parentbased_always_on": DEFAULT_ON,
+ "parentbased_always_off": DEFAULT_OFF,
+ "traceidratio": TraceIdRatioBased,
+ "parentbased_traceidratio": ParentBasedTraceIdRatio,
+}
+
+
+def _get_from_env_or_default() -> Sampler:
+ trace_sampler = os.getenv(
+ OTEL_TRACES_SAMPLER, "parentbased_always_on"
+ ).lower()
+ if trace_sampler not in _KNOWN_SAMPLERS:
+ _logger.warning("Couldn't recognize sampler %s.", trace_sampler)
+ trace_sampler = "parentbased_always_on"
+
+ if trace_sampler in ("traceidratio", "parentbased_traceidratio"):
+ try:
+ rate = float(os.getenv(OTEL_TRACES_SAMPLER_ARG))
+ except (ValueError, TypeError):
+ _logger.warning("Could not convert TRACES_SAMPLER_ARG to float.")
+ rate = 1.0
+ return _KNOWN_SAMPLERS[trace_sampler](rate)
+
+ return _KNOWN_SAMPLERS[trace_sampler]
+
+
+def _get_parent_trace_state(parent_context) -> Optional["TraceState"]:
+ parent_span_context = get_current_span(parent_context).get_span_context()
+ if parent_span_context is None or not parent_span_context.is_valid:
+ return None
+ return parent_span_context.trace_state
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.py
new file mode 100644
index 0000000000..e1857d8e62
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.py
@@ -0,0 +1,150 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import datetime
+import threading
+from collections import OrderedDict, deque
+from collections.abc import MutableMapping, Sequence
+from typing import Optional
+
+from deprecated import deprecated
+
+
+def ns_to_iso_str(nanoseconds):
+ """Get an ISO 8601 string from time_ns value."""
+ ts = datetime.datetime.utcfromtimestamp(nanoseconds / 1e9)
+ return ts.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
+
+
+def get_dict_as_key(labels):
+ """Converts a dict to be used as a unique key"""
+ return tuple(
+ sorted(
+ map(
+ lambda kv: (kv[0], tuple(kv[1]))
+ if isinstance(kv[1], list)
+ else kv,
+ labels.items(),
+ )
+ )
+ )
+
+
+class BoundedList(Sequence):
+ """An append only list with a fixed max size.
+
+ Calls to `append` and `extend` will drop the oldest elements if there is
+ not enough room.
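+
+    A short sketch of the dropping behavior:
+
+    .. code:: python
+
+        blist = BoundedList(2)
+        blist.append(1)
+        blist.append(2)
+        blist.append(3)  # 1 is dropped
+        assert list(blist) == [2, 3]
+        assert blist.dropped == 1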
+ """
+
+ def __init__(self, maxlen: Optional[int]):
+ self.dropped = 0
+ self._dq = deque(maxlen=maxlen) # type: deque
+ self._lock = threading.Lock()
+
+ def __repr__(self):
+ return f"{type(self).__name__}({list(self._dq)}, maxlen={self._dq.maxlen})"
+
+ def __getitem__(self, index):
+ return self._dq[index]
+
+ def __len__(self):
+ return len(self._dq)
+
+ def __iter__(self):
+ with self._lock:
+ return iter(deque(self._dq))
+
+ def append(self, item):
+ with self._lock:
+ if (
+ self._dq.maxlen is not None
+ and len(self._dq) == self._dq.maxlen
+ ):
+ self.dropped += 1
+ self._dq.append(item)
+
+ def extend(self, seq):
+ with self._lock:
+ if self._dq.maxlen is not None:
+ to_drop = len(seq) + len(self._dq) - self._dq.maxlen
+ if to_drop > 0:
+ self.dropped += to_drop
+ self._dq.extend(seq)
+
+ @classmethod
+ def from_seq(cls, maxlen, seq):
+ seq = tuple(seq)
+ bounded_list = cls(maxlen)
+ bounded_list.extend(seq)
+ return bounded_list
+
+
+@deprecated(version="1.4.0") # type: ignore
+class BoundedDict(MutableMapping):
+ """An ordered dict with a fixed max capacity.
+
+ Oldest elements are dropped when the dict is full and a new element is
+ added.
+ """
+
+ def __init__(self, maxlen: Optional[int]):
+ if maxlen is not None:
+ if not isinstance(maxlen, int):
+                raise ValueError("maxlen must be an integer")
+ if maxlen < 0:
+                raise ValueError("maxlen must be non-negative")
+ self.maxlen = maxlen
+ self.dropped = 0
+ self._dict = OrderedDict() # type: OrderedDict
+ self._lock = threading.Lock() # type: threading.Lock
+
+ def __repr__(self):
+ return (
+ f"{type(self).__name__}({dict(self._dict)}, maxlen={self.maxlen})"
+ )
+
+ def __getitem__(self, key):
+ return self._dict[key]
+
+ def __setitem__(self, key, value):
+ with self._lock:
+ if self.maxlen is not None and self.maxlen == 0:
+ self.dropped += 1
+ return
+
+ if key in self._dict:
+ del self._dict[key]
+ elif self.maxlen is not None and len(self._dict) == self.maxlen:
+ del self._dict[next(iter(self._dict.keys()))]
+ self.dropped += 1
+ self._dict[key] = value
+
+ def __delitem__(self, key):
+ del self._dict[key]
+
+ def __iter__(self):
+ with self._lock:
+ return iter(self._dict.copy())
+
+ def __len__(self):
+ return len(self._dict)
+
+ @classmethod
+ def from_map(cls, maxlen, mapping):
+ mapping = OrderedDict(mapping)
+ bounded_dict = cls(maxlen)
+ for key, value in mapping.items():
+ bounded_dict[key] = value
+ return bounded_dict
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.pyi b/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.pyi
new file mode 100644
index 0000000000..d42e0f018f
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/util/__init__.pyi
@@ -0,0 +1,73 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import (
+ Iterable,
+ Iterator,
+ Mapping,
+ MutableMapping,
+ Sequence,
+ TypeVar,
+ overload,
+)
+
+from opentelemetry.util.types import AttributesAsKey, AttributeValue
+
+_T = TypeVar("_T")
+_KT = TypeVar("_KT")
+_VT = TypeVar("_VT")
+
+def ns_to_iso_str(nanoseconds: int) -> str: ...
+def get_dict_as_key(
+ labels: Mapping[str, AttributeValue]
+) -> AttributesAsKey: ...
+
+class BoundedList(Sequence[_T]):
+ """An append only list with a fixed max size.
+
+ Calls to `append` and `extend` will drop the oldest elements if there is
+ not enough room.
+ """
+
+ dropped: int
+ def __init__(self, maxlen: int): ...
+ def insert(self, index: int, value: _T) -> None: ...
+ @overload
+ def __getitem__(self, i: int) -> _T: ...
+ @overload
+ def __getitem__(self, s: slice) -> Sequence[_T]: ...
+ def __len__(self) -> int: ...
+ def append(self, item: _T): ...
+ def extend(self, seq: Sequence[_T]): ...
+ @classmethod
+ def from_seq(cls, maxlen: int, seq: Iterable[_T]) -> BoundedList[_T]: ...
+
+class BoundedDict(MutableMapping[_KT, _VT]):
+ """An ordered dict with a fixed max capacity.
+
+ Oldest elements are dropped when the dict is full and a new element is
+ added.
+ """
+
+ dropped: int
+ def __init__(self, maxlen: int): ...
+ def __getitem__(self, k: _KT) -> _VT: ...
+ def __setitem__(self, k: _KT, v: _VT) -> None: ...
+ def __delitem__(self, v: _KT) -> None: ...
+ def __iter__(self) -> Iterator[_KT]: ...
+ def __len__(self) -> int: ...
+ @classmethod
+ def from_map(
+ cls, maxlen: int, mapping: Mapping[_KT, _VT]
+ ) -> BoundedDict[_KT, _VT]: ...
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/util/instrumentation.py b/opentelemetry-sdk/src/opentelemetry/sdk/util/instrumentation.py
new file mode 100644
index 0000000000..085d3fd874
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/util/instrumentation.py
@@ -0,0 +1,143 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from json import dumps
+from typing import Optional
+
+from deprecated import deprecated
+
+
+class InstrumentationInfo:
+ """Immutable information about an instrumentation library module.
+
+ See `opentelemetry.trace.TracerProvider.get_tracer` for the meaning of these
+ properties.
+ """
+
+ __slots__ = ("_name", "_version", "_schema_url")
+
+ @deprecated(version="1.11.1", reason="You should use InstrumentationScope")
+ def __init__(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ):
+ self._name = name
+ self._version = version
+ if schema_url is None:
+ schema_url = ""
+ self._schema_url = schema_url
+
+ def __repr__(self):
+ return f"{type(self).__name__}({self._name}, {self._version}, {self._schema_url})"
+
+ def __hash__(self):
+ return hash((self._name, self._version, self._schema_url))
+
+ def __eq__(self, value):
+ return type(value) is type(self) and (
+ self._name,
+ self._version,
+ self._schema_url,
+ ) == (value._name, value._version, value._schema_url)
+
+ def __lt__(self, value):
+ if type(value) is not type(self):
+ return NotImplemented
+ return (self._name, self._version, self._schema_url) < (
+ value._name,
+ value._version,
+ value._schema_url,
+ )
+
+ @property
+ def schema_url(self) -> Optional[str]:
+ return self._schema_url
+
+ @property
+ def version(self) -> Optional[str]:
+ return self._version
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+
+class InstrumentationScope:
+ """A logical unit of the application code with which the emitted telemetry can be
+ associated.
+
+ See `opentelemetry.trace.TracerProvider.get_tracer` for the meaning of these
+ properties.
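+
+    A sketch of the JSON view exposed by ``to_json`` (the field values are
+    illustrative):
+
+    .. code:: python
+
+        scope = InstrumentationScope("my.library", version="1.0.0")
+        print(scope.to_json(indent=None))
+        # {"name": "my.library", "version": "1.0.0", "schema_url": ""}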
+ """
+
+ __slots__ = ("_name", "_version", "_schema_url")
+
+ def __init__(
+ self,
+ name: str,
+ version: Optional[str] = None,
+ schema_url: Optional[str] = None,
+ ) -> None:
+ self._name = name
+ self._version = version
+ if schema_url is None:
+ schema_url = ""
+ self._schema_url = schema_url
+
+ def __repr__(self) -> str:
+ return f"{type(self).__name__}({self._name}, {self._version}, {self._schema_url})"
+
+ def __hash__(self) -> int:
+ return hash((self._name, self._version, self._schema_url))
+
+ def __eq__(self, value: object) -> bool:
+ if not isinstance(value, InstrumentationScope):
+ return NotImplemented
+ return (self._name, self._version, self._schema_url) == (
+ value._name,
+ value._version,
+ value._schema_url,
+ )
+
+ def __lt__(self, value: object) -> bool:
+ if not isinstance(value, InstrumentationScope):
+ return NotImplemented
+ return (self._name, self._version, self._schema_url) < (
+ value._name,
+ value._version,
+ value._schema_url,
+ )
+
+ @property
+ def schema_url(self) -> Optional[str]:
+ return self._schema_url
+
+ @property
+ def version(self) -> Optional[str]:
+ return self._version
+
+ @property
+ def name(self) -> str:
+ return self._name
+
+ def to_json(self, indent=4) -> str:
+ return dumps(
+ {
+ "name": self._name,
+ "version": self._version,
+ "schema_url": self._schema_url,
+ },
+ indent=indent,
+ )
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/version.py b/opentelemetry-sdk/src/opentelemetry/sdk/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/opentelemetry-sdk/tests/__init__.py b/opentelemetry-sdk/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-sdk/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-sdk/tests/conftest.py b/opentelemetry-sdk/tests/conftest.py
new file mode 100644
index 0000000000..92fd7a734d
--- /dev/null
+++ b/opentelemetry-sdk/tests/conftest.py
@@ -0,0 +1,27 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from os import environ
+
+from opentelemetry.environment_variables import OTEL_PYTHON_CONTEXT
+
+
+def pytest_sessionstart(session):
+ # pylint: disable=unused-argument
+ environ[OTEL_PYTHON_CONTEXT] = "contextvars_context"
+
+
+def pytest_sessionfinish(session):
+ # pylint: disable=unused-argument
+ environ.pop(OTEL_PYTHON_CONTEXT)
diff --git a/opentelemetry-sdk/tests/context/__init__.py b/opentelemetry-sdk/tests/context/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-sdk/tests/context/test_asyncio.py b/opentelemetry-sdk/tests/context/test_asyncio.py
new file mode 100644
index 0000000000..7c5288a274
--- /dev/null
+++ b/opentelemetry-sdk/tests/context/test_asyncio.py
@@ -0,0 +1,102 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import unittest
+from unittest.mock import patch
+
+from opentelemetry import context
+from opentelemetry.context.contextvars_context import ContextVarsRuntimeContext
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.trace import export
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+
+_SPAN_NAMES = [
+ "test_span1",
+ "test_span2",
+ "test_span3",
+ "test_span4",
+ "test_span5",
+]
+
+
+def stop_loop_when(loop, cond_func, timeout=5.0):
+ """Registers a periodic callback that stops the loop when cond_func() == True.
+ Compatible with both Tornado and asyncio.
+ """
+ if cond_func() or timeout <= 0.0:
+ loop.stop()
+ return
+
+ timeout -= 0.1
+ loop.call_later(0.1, stop_loop_when, loop, cond_func, timeout)
+
+
+class TestAsyncio(unittest.TestCase):
+ async def task(self, name):
+ with self.tracer.start_as_current_span(name):
+ context.set_value("say", "bar")
+
+ def submit_another_task(self, name):
+ self.loop.create_task(self.task(name))
+
+ def setUp(self):
+ self.token = context.attach(context.Context())
+ self.tracer_provider = trace.TracerProvider()
+ self.tracer = self.tracer_provider.get_tracer(__name__)
+ self.memory_exporter = InMemorySpanExporter()
+ span_processor = export.SimpleSpanProcessor(self.memory_exporter)
+ self.tracer_provider.add_span_processor(span_processor)
+ self.loop = asyncio.get_event_loop()
+
+ def tearDown(self):
+ context.detach(self.token)
+
+ @patch(
+ "opentelemetry.context._RUNTIME_CONTEXT", ContextVarsRuntimeContext()
+ )
+ def test_with_asyncio(self):
+ with self.tracer.start_as_current_span("asyncio_test"):
+ for name in _SPAN_NAMES:
+ self.submit_another_task(name)
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.memory_exporter.get_finished_spans()) >= 5,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+ span_list = self.memory_exporter.get_finished_spans()
+ span_names_list = [span.name for span in span_list]
+ expected = [
+ "test_span1",
+ "test_span2",
+ "test_span3",
+ "test_span4",
+ "test_span5",
+ "asyncio_test",
+ ]
+ self.assertCountEqual(span_names_list, expected)
+ span_names_list.sort()
+ expected.sort()
+ self.assertListEqual(span_names_list, expected)
+ expected_parent = next(
+ span for span in span_list if span.name == "asyncio_test"
+ )
+ for span in span_list:
+ if span is expected_parent:
+ continue
+ self.assertEqual(span.parent, expected_parent.context)
diff --git a/opentelemetry-sdk/tests/error_handler/__init__.py b/opentelemetry-sdk/tests/error_handler/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-sdk/tests/error_handler/test_error_handler.py b/opentelemetry-sdk/tests/error_handler/test_error_handler.py
new file mode 100644
index 0000000000..116771dc9a
--- /dev/null
+++ b/opentelemetry-sdk/tests/error_handler/test_error_handler.py
@@ -0,0 +1,132 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# pylint: disable=broad-except
+
+from logging import ERROR
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from opentelemetry.sdk.error_handler import (
+ ErrorHandler,
+ GlobalErrorHandler,
+ logger,
+)
+
+
+class TestErrorHandler(TestCase):
+ @patch("opentelemetry.sdk.error_handler.entry_points")
+ def test_default_error_handler(self, mock_entry_points):
+
+ with self.assertLogs(logger, ERROR):
+ with GlobalErrorHandler():
+ raise Exception("some exception")
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.sdk.error_handler.entry_points")
+ def test_plugin_error_handler(self, mock_entry_points):
+ class ZeroDivisionErrorHandler(ErrorHandler, ZeroDivisionError):
+ # pylint: disable=arguments-differ
+
+ _handle = Mock()
+
+ class AssertionErrorHandler(ErrorHandler, AssertionError):
+ # pylint: disable=arguments-differ
+
+ _handle = Mock()
+
+ mock_entry_point_zero_division_error_handler = Mock()
+ mock_entry_point_zero_division_error_handler.configure_mock(
+ **{"load.return_value": ZeroDivisionErrorHandler}
+ )
+ mock_entry_point_assertion_error_handler = Mock()
+ mock_entry_point_assertion_error_handler.configure_mock(
+ **{"load.return_value": AssertionErrorHandler}
+ )
+
+ mock_entry_points.configure_mock(
+ **{
+ "return_value": [
+ mock_entry_point_zero_division_error_handler,
+ mock_entry_point_assertion_error_handler,
+ ]
+ }
+ )
+
+ error = ZeroDivisionError()
+
+ with GlobalErrorHandler():
+ raise error
+
+ # pylint: disable=protected-access
+ ZeroDivisionErrorHandler._handle.assert_called_with(error)
+
+ error = AssertionError()
+
+ with GlobalErrorHandler():
+ raise error
+
+ AssertionErrorHandler._handle.assert_called_with(error)
+
+ @patch("opentelemetry.sdk.error_handler.entry_points")
+ def test_error_in_handler(self, mock_entry_points):
+ class ErrorErrorHandler(ErrorHandler, ZeroDivisionError):
+ # pylint: disable=arguments-differ
+
+ def _handle(self, error: Exception):
+ assert False
+
+ mock_entry_point_error_error_handler = Mock()
+ mock_entry_point_error_error_handler.configure_mock(
+ **{"load.return_value": ErrorErrorHandler}
+ )
+
+ mock_entry_points.configure_mock(
+ **{"return_value": [mock_entry_point_error_error_handler]}
+ )
+
+ error = ZeroDivisionError()
+
+ with self.assertLogs(logger, ERROR):
+ with GlobalErrorHandler():
+ raise error
+
+ # pylint: disable=no-self-use
+ @patch("opentelemetry.sdk.error_handler.entry_points")
+ def test_plugin_error_handler_context_manager(self, mock_entry_points):
+
+ mock_error_handler_instance = Mock()
+
+ class MockErrorHandlerClass(IndexError):
+ def __new__(cls):
+ return mock_error_handler_instance
+
+ mock_entry_point_error_handler = Mock()
+ mock_entry_point_error_handler.configure_mock(
+ **{"load.return_value": MockErrorHandlerClass}
+ )
+
+ mock_entry_points.configure_mock(
+ **{"return_value": [mock_entry_point_error_handler]}
+ )
+
+ error = IndexError()
+
+ with GlobalErrorHandler():
+ raise error
+
+ with GlobalErrorHandler():
+ pass
+
+ # pylint: disable=protected-access
+ mock_error_handler_instance._handle.assert_called_once_with(error)
diff --git a/opentelemetry-sdk/tests/logs/__init__.py b/opentelemetry-sdk/tests/logs/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-sdk/tests/logs/test_export.py b/opentelemetry-sdk/tests/logs/test_export.py
new file mode 100644
index 0000000000..2828504eaa
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_export.py
@@ -0,0 +1,544 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=protected-access
+import logging
+import multiprocessing
+import os
+import time
+import unittest
+from concurrent.futures import ThreadPoolExecutor
+from unittest.mock import Mock, patch
+
+from opentelemetry._logs import SeverityNumber
+from opentelemetry.sdk import trace
+from opentelemetry.sdk._logs import (
+ LogData,
+ LoggerProvider,
+ LoggingHandler,
+ LogRecord,
+)
+from opentelemetry.sdk._logs._internal.export import _logger
+from opentelemetry.sdk._logs.export import (
+ BatchLogRecordProcessor,
+ ConsoleLogExporter,
+ InMemoryLogExporter,
+ SimpleLogRecordProcessor,
+)
+from opentelemetry.sdk.environment_variables import (
+ OTEL_BLRP_EXPORT_TIMEOUT,
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE,
+ OTEL_BLRP_MAX_QUEUE_SIZE,
+ OTEL_BLRP_SCHEDULE_DELAY,
+)
+from opentelemetry.sdk.resources import Resource as SDKResource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase
+from opentelemetry.trace import TraceFlags
+from opentelemetry.trace.span import INVALID_SPAN_CONTEXT
+
+
+class TestSimpleLogRecordProcessor(unittest.TestCase):
+ def test_simple_log_record_processor_default_level(self):
+ exporter = InMemoryLogExporter()
+ logger_provider = LoggerProvider()
+
+ logger_provider.add_log_record_processor(
+ SimpleLogRecordProcessor(exporter)
+ )
+
+ logger = logging.getLogger("default_level")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=logger_provider))
+
+ logger.warning("Something is wrong")
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 1)
+ warning_log_record = finished_logs[0].log_record
+ self.assertEqual(warning_log_record.body, "Something is wrong")
+ self.assertEqual(warning_log_record.severity_text, "WARNING")
+ self.assertEqual(
+ warning_log_record.severity_number, SeverityNumber.WARN
+ )
+
+ def test_simple_log_record_processor_custom_level(self):
+ exporter = InMemoryLogExporter()
+ logger_provider = LoggerProvider()
+
+ logger_provider.add_log_record_processor(
+ SimpleLogRecordProcessor(exporter)
+ )
+
+ logger = logging.getLogger("custom_level")
+ logger.propagate = False
+ logger.setLevel(logging.ERROR)
+ logger.addHandler(LoggingHandler(logger_provider=logger_provider))
+
+ logger.warning("Warning message")
+ logger.debug("Debug message")
+ logger.error("Error message")
+ logger.critical("Critical message")
+ finished_logs = exporter.get_finished_logs()
+        # Make sure only records with level >= logging.ERROR are recorded
+        self.assertEqual(len(finished_logs), 2)
+        error_log_record = finished_logs[0].log_record
+        critical_log_record = finished_logs[1].log_record
+        self.assertEqual(error_log_record.body, "Error message")
+        self.assertEqual(error_log_record.severity_text, "ERROR")
+        self.assertEqual(
+            error_log_record.severity_number, SeverityNumber.ERROR
+        )
+        self.assertEqual(critical_log_record.body, "Critical message")
+        self.assertEqual(critical_log_record.severity_text, "CRITICAL")
+        self.assertEqual(
+            critical_log_record.severity_number, SeverityNumber.FATAL
+        )
+
+ def test_simple_log_record_processor_trace_correlation(self):
+ exporter = InMemoryLogExporter()
+ logger_provider = LoggerProvider()
+
+ logger_provider.add_log_record_processor(
+ SimpleLogRecordProcessor(exporter)
+ )
+
+ logger = logging.getLogger("trace_correlation")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=logger_provider))
+
+ logger.warning("Warning message")
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 1)
+ log_record = finished_logs[0].log_record
+ self.assertEqual(log_record.body, "Warning message")
+ self.assertEqual(log_record.severity_text, "WARNING")
+ self.assertEqual(log_record.severity_number, SeverityNumber.WARN)
+ self.assertEqual(log_record.trace_id, INVALID_SPAN_CONTEXT.trace_id)
+ self.assertEqual(log_record.span_id, INVALID_SPAN_CONTEXT.span_id)
+ self.assertEqual(
+ log_record.trace_flags, INVALID_SPAN_CONTEXT.trace_flags
+ )
+ exporter.clear()
+
+ tracer = trace.TracerProvider().get_tracer(__name__)
+ with tracer.start_as_current_span("test") as span:
+ logger.critical("Critical message within span")
+
+ finished_logs = exporter.get_finished_logs()
+ log_record = finished_logs[0].log_record
+ self.assertEqual(log_record.body, "Critical message within span")
+ self.assertEqual(log_record.severity_text, "CRITICAL")
+ self.assertEqual(log_record.severity_number, SeverityNumber.FATAL)
+ span_context = span.get_span_context()
+ self.assertEqual(log_record.trace_id, span_context.trace_id)
+ self.assertEqual(log_record.span_id, span_context.span_id)
+ self.assertEqual(log_record.trace_flags, span_context.trace_flags)
+
+ def test_simple_log_record_processor_shutdown(self):
+ exporter = InMemoryLogExporter()
+ logger_provider = LoggerProvider()
+
+ logger_provider.add_log_record_processor(
+ SimpleLogRecordProcessor(exporter)
+ )
+
+ logger = logging.getLogger("shutdown")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=logger_provider))
+
+ logger.warning("Something is wrong")
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 1)
+ warning_log_record = finished_logs[0].log_record
+ self.assertEqual(warning_log_record.body, "Something is wrong")
+ self.assertEqual(warning_log_record.severity_text, "WARNING")
+ self.assertEqual(
+ warning_log_record.severity_number, SeverityNumber.WARN
+ )
+ exporter.clear()
+ logger_provider.shutdown()
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Log after shutdown")
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 0)
+
+ def test_simple_log_record_processor_different_msg_types(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("different_msg_types")
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ logger.warning("warning message: %s", "possible upcoming heatwave")
+ logger.error("Very high rise in temperatures across the globe")
+ logger.critical("Temperature hits high 420 C in Hyderabad")
+ logger.warning(["list", "of", "strings"])
+ logger.error({"key": "value"})
+ log_record_processor.shutdown()
+
+ finished_logs = exporter.get_finished_logs()
+ expected = [
+ ("warning message: possible upcoming heatwave", "WARNING"),
+ ("Very high rise in temperatures across the globe", "ERROR"),
+ (
+ "Temperature hits high 420 C in Hyderabad",
+ "CRITICAL",
+ ),
+ (["list", "of", "strings"], "WARNING"),
+ ({"key": "value"}, "ERROR"),
+ ]
+ emitted = [
+ (item.log_record.body, item.log_record.severity_text)
+ for item in finished_logs
+ ]
+ self.assertEqual(expected, emitted)
+
+
+class TestBatchLogRecordProcessor(ConcurrencyTestBase):
+ def test_emit_call_log_record(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = Mock(wraps=BatchLogRecordProcessor(exporter))
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("emit_call")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ logger.error("error")
+ self.assertEqual(log_record_processor.emit.call_count, 1)
+
+ def test_args(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(
+ exporter,
+ max_queue_size=1024,
+ schedule_delay_millis=2500,
+ max_export_batch_size=256,
+ export_timeout_millis=15000,
+ )
+ self.assertEqual(log_record_processor._exporter, exporter)
+ self.assertEqual(log_record_processor._max_queue_size, 1024)
+ self.assertEqual(log_record_processor._schedule_delay_millis, 2500)
+ self.assertEqual(log_record_processor._max_export_batch_size, 256)
+ self.assertEqual(log_record_processor._export_timeout_millis, 15000)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_BLRP_MAX_QUEUE_SIZE: "1024",
+ OTEL_BLRP_SCHEDULE_DELAY: "2500",
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE: "256",
+ OTEL_BLRP_EXPORT_TIMEOUT: "15000",
+ },
+ )
+ def test_env_vars(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+ self.assertEqual(log_record_processor._exporter, exporter)
+ self.assertEqual(log_record_processor._max_queue_size, 1024)
+ self.assertEqual(log_record_processor._schedule_delay_millis, 2500)
+ self.assertEqual(log_record_processor._max_export_batch_size, 256)
+ self.assertEqual(log_record_processor._export_timeout_millis, 15000)
+
+ def test_args_defaults(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+ self.assertEqual(log_record_processor._exporter, exporter)
+ self.assertEqual(log_record_processor._max_queue_size, 2048)
+ self.assertEqual(log_record_processor._schedule_delay_millis, 5000)
+ self.assertEqual(log_record_processor._max_export_batch_size, 512)
+ self.assertEqual(log_record_processor._export_timeout_millis, 30000)
+
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_BLRP_MAX_QUEUE_SIZE: "a",
+ OTEL_BLRP_SCHEDULE_DELAY: " ",
+ OTEL_BLRP_MAX_EXPORT_BATCH_SIZE: "One",
+ OTEL_BLRP_EXPORT_TIMEOUT: "@",
+ },
+ )
+ def test_args_env_var_value_error(self):
+ exporter = InMemoryLogExporter()
+ _logger.disabled = True
+ log_record_processor = BatchLogRecordProcessor(exporter)
+ _logger.disabled = False
+ self.assertEqual(log_record_processor._exporter, exporter)
+ self.assertEqual(log_record_processor._max_queue_size, 2048)
+ self.assertEqual(log_record_processor._schedule_delay_millis, 5000)
+ self.assertEqual(log_record_processor._max_export_batch_size, 512)
+ self.assertEqual(log_record_processor._export_timeout_millis, 30000)
+
+ def test_args_none_defaults(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(
+ exporter,
+ max_queue_size=None,
+ schedule_delay_millis=None,
+ max_export_batch_size=None,
+ export_timeout_millis=None,
+ )
+ self.assertEqual(log_record_processor._exporter, exporter)
+ self.assertEqual(log_record_processor._max_queue_size, 2048)
+ self.assertEqual(log_record_processor._schedule_delay_millis, 5000)
+ self.assertEqual(log_record_processor._max_export_batch_size, 512)
+ self.assertEqual(log_record_processor._export_timeout_millis, 30000)
+
+ def test_validation_negative_max_queue_size(self):
+ exporter = InMemoryLogExporter()
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ max_queue_size=0,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ max_queue_size=-1,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ schedule_delay_millis=0,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ schedule_delay_millis=-1,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ max_export_batch_size=0,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ max_export_batch_size=-1,
+ )
+ self.assertRaises(
+ ValueError,
+ BatchLogRecordProcessor,
+ exporter,
+ max_queue_size=100,
+ max_export_batch_size=101,
+ )
+
+ def test_shutdown(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("shutdown")
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("warning message: %s", "possible upcoming heatwave")
+ with self.assertLogs(level=logging.WARNING):
+ logger.error("Very high rise in temperatures across the globe")
+ with self.assertLogs(level=logging.WARNING):
+ logger.critical("Temperature hits high 420 C in Hyderabad")
+
+ log_record_processor.shutdown()
+ self.assertTrue(exporter._stopped)
+
+ finished_logs = exporter.get_finished_logs()
+ expected = [
+ ("warning message: possible upcoming heatwave", "WARNING"),
+ ("Very high rise in temperatures across the globe", "ERROR"),
+ (
+ "Temperature hits high 420 C in Hyderabad",
+ "CRITICAL",
+ ),
+ ]
+ emitted = [
+ (item.log_record.body, item.log_record.severity_text)
+ for item in finished_logs
+ ]
+ self.assertEqual(expected, emitted)
+
+ def test_force_flush(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("force_flush")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ logger.critical("Earth is burning")
+ log_record_processor.force_flush()
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 1)
+ log_record = finished_logs[0].log_record
+ self.assertEqual(log_record.body, "Earth is burning")
+ self.assertEqual(log_record.severity_number, SeverityNumber.FATAL)
+
+ def test_log_record_processor_too_many_logs(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("many_logs")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ for log_no in range(1000):
+ logger.critical("Log no: %s", log_no)
+
+ self.assertTrue(log_record_processor.force_flush())
+ finished_logs = exporter.get_finished_logs()
+ self.assertEqual(len(finished_logs), 1000)
+
+ def test_with_multiple_threads(self):
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(exporter)
+
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("threads")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ def bulk_log_and_flush(num_logs):
+ for _ in range(num_logs):
+ logger.critical("Critical message")
+ self.assertTrue(log_record_processor.force_flush())
+
+ with ThreadPoolExecutor(max_workers=69) as executor:
+ futures = []
+ for idx in range(69):
+ future = executor.submit(bulk_log_and_flush, idx + 1)
+ futures.append(future)
+
+ executor.shutdown()
+
+ finished_logs = exporter.get_finished_logs()
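+ # Worker i logs i + 1 records, so the total is 1 + 2 + ... + 69,
+ # i.e. 69 * 70 / 2 == 2415.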
+ self.assertEqual(len(finished_logs), 2415)
+
+ @unittest.skipUnless(
+ hasattr(os, "fork"),
+ "needs *nix",
+ )
+ def test_batch_log_record_processor_fork(self):
+ # pylint: disable=invalid-name
+ exporter = InMemoryLogExporter()
+ log_record_processor = BatchLogRecordProcessor(
+ exporter,
+ max_export_batch_size=64,
+ schedule_delay_millis=10,
+ )
+ provider = LoggerProvider()
+ provider.add_log_record_processor(log_record_processor)
+
+ logger = logging.getLogger("test-fork")
+ logger.propagate = False
+ logger.addHandler(LoggingHandler(logger_provider=provider))
+
+ logger.critical("yolo")
+ time.sleep(0.5) # give some time for the exporter to upload
+
+ self.assertTrue(log_record_processor.force_flush())
+ self.assertEqual(len(exporter.get_finished_logs()), 1)
+ exporter.clear()
+
+ multiprocessing.set_start_method("fork")
+
+ def child(conn):
+ def _target():
+ logger.critical("Critical message child")
+
+ self.run_with_many_threads(_target, 100)
+
+ time.sleep(0.5)
+
+ logs = exporter.get_finished_logs()
+ conn.send(len(logs) == 100)
+ conn.close()
+
+ parent_conn, child_conn = multiprocessing.Pipe()
+ p = multiprocessing.Process(target=child, args=(child_conn,))
+ p.start()
+ self.assertTrue(parent_conn.recv())
+ p.join()
+
+ log_record_processor.shutdown()
+
+
+class TestConsoleLogExporter(unittest.TestCase):
+ def test_export(self): # pylint: disable=no-self-use
+ """Check that the console exporter prints log records."""
+ log_data = LogData(
+ log_record=LogRecord(
+ timestamp=int(time.time() * 1e9),
+ trace_id=2604504634922341076776623263868986797,
+ span_id=5213367945872657620,
+ trace_flags=TraceFlags(0x01),
+ severity_text="WARN",
+ severity_number=SeverityNumber.WARN,
+ body="Zhengzhou, We have a heaviest rains in 1000 years",
+ resource=SDKResource({"key": "value"}),
+ attributes={"a": 1, "b": "c"},
+ ),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+ exporter = ConsoleLogExporter()
+ # Mocking stdout interferes with debugging and test reporting, mock on
+ # the exporter instance instead.
+
+ with patch.object(exporter, "out") as mock_stdout:
+ exporter.export([log_data])
+ mock_stdout.write.assert_called_once_with(
+ log_data.log_record.to_json() + os.linesep
+ )
+
+ self.assertEqual(mock_stdout.write.call_count, 1)
+ self.assertEqual(mock_stdout.flush.call_count, 1)
+
+ def test_export_custom(self): # pylint: disable=no-self-use
+ """Check that console exporter uses custom io, formatter."""
+ mock_record_str = Mock(str)
+
+ def formatter(record): # pylint: disable=unused-argument
+ return mock_record_str
+
+ mock_stdout = Mock()
+ exporter = ConsoleLogExporter(out=mock_stdout, formatter=formatter)
+ log_data = LogData(
+ log_record=LogRecord(),
+ instrumentation_scope=InstrumentationScope(
+ "first_name", "first_version"
+ ),
+ )
+ exporter.export([log_data])
+ mock_stdout.write.assert_called_once_with(mock_record_str)
diff --git a/opentelemetry-sdk/tests/logs/test_handler.py b/opentelemetry-sdk/tests/logs/test_handler.py
new file mode 100644
index 0000000000..e126cac172
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_handler.py
@@ -0,0 +1,197 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import logging
+import unittest
+from unittest.mock import Mock
+
+from opentelemetry._logs import NoOpLoggerProvider, SeverityNumber
+from opentelemetry._logs import get_logger as APIGetLogger
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk import trace
+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
+from opentelemetry.semconv.trace import SpanAttributes
+from opentelemetry.trace import INVALID_SPAN_CONTEXT
+
+
+def get_logger(level=logging.NOTSET, logger_provider=None):
+ logger = logging.getLogger(__name__)
+ handler = LoggingHandler(level=level, logger_provider=logger_provider)
+ logger.addHandler(handler)
+ return logger
+
+
+class TestLoggingHandler(unittest.TestCase):
+ def test_handler_default_log_level(self):
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+ # Make sure debug messages are ignored by default
+ logger.debug("Debug message")
+ self.assertEqual(emitter_mock.emit.call_count, 0)
+ # Assert emit gets called for warning message
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Warning message")
+ self.assertEqual(emitter_mock.emit.call_count, 1)
+
+ def test_handler_custom_log_level(self):
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(
+ level=logging.ERROR, logger_provider=emitter_provider_mock
+ )
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Warning message test custom log level")
+ # Make sure any log with level < ERROR is ignored
+ self.assertEqual(emitter_mock.emit.call_count, 0)
+ with self.assertLogs(level=logging.ERROR):
+ logger.error("Mumbai, we have a major problem")
+ with self.assertLogs(level=logging.CRITICAL):
+ logger.critical("No Time For Caution")
+ self.assertEqual(emitter_mock.emit.call_count, 2)
+
+ # pylint: disable=protected-access
+ def test_log_record_emit_noop(self):
+ noop_logger_provider = NoOpLoggerProvider()
+ logger_mock = APIGetLogger(
+ __name__, logger_provider=noop_logger_provider
+ )
+ logger = logging.getLogger(__name__)
+ handler_mock = Mock(spec=LoggingHandler)
+ handler_mock._logger = logger_mock
+ handler_mock.level = logging.WARNING
+ logger.addHandler(handler_mock)
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Warning message")
+ handler_mock._translate.assert_not_called()
+
+ def test_log_record_no_span_context(self):
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+ # Assert emit gets called for warning message
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Warning message")
+ args, _ = emitter_mock.emit.call_args_list[0]
+ log_record = args[0]
+
+ self.assertIsNotNone(log_record)
+ self.assertEqual(log_record.trace_id, INVALID_SPAN_CONTEXT.trace_id)
+ self.assertEqual(log_record.span_id, INVALID_SPAN_CONTEXT.span_id)
+ self.assertEqual(
+ log_record.trace_flags, INVALID_SPAN_CONTEXT.trace_flags
+ )
+
+ def test_log_record_user_attributes(self):
+ """Attributes can be injected into logs by adding them to the LogRecord"""
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+ # Assert emit gets called for warning message
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Warning message", extra={"http.status_code": 200})
+ args, _ = emitter_mock.emit.call_args_list[0]
+ log_record = args[0]
+
+ self.assertIsNotNone(log_record)
+ self.assertEqual(log_record.attributes, {"http.status_code": 200})
+ self.assertIsInstance(log_record.attributes, BoundedAttributes)
+
+ def test_log_record_exception(self):
+ """Exception information will be included in attributes"""
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+ try:
+ raise ZeroDivisionError("division by zero")
+ except ZeroDivisionError:
+ with self.assertLogs(level=logging.ERROR):
+ logger.exception("Zero Division Error")
+ args, _ = emitter_mock.emit.call_args_list[0]
+ log_record = args[0]
+
+ self.assertIsNotNone(log_record)
+ self.assertEqual(log_record.body, "Zero Division Error")
+ self.assertEqual(
+ log_record.attributes[SpanAttributes.EXCEPTION_TYPE],
+ ZeroDivisionError.__name__,
+ )
+ self.assertEqual(
+ log_record.attributes[SpanAttributes.EXCEPTION_MESSAGE],
+ "division by zero",
+ )
+ stack_trace = log_record.attributes[
+ SpanAttributes.EXCEPTION_STACKTRACE
+ ]
+ self.assertIsInstance(stack_trace, str)
+ self.assertTrue("Traceback" in stack_trace)
+ self.assertTrue("ZeroDivisionError" in stack_trace)
+ self.assertTrue("division by zero" in stack_trace)
+ self.assertTrue(__file__ in stack_trace)
+
+ def test_log_exc_info_false(self):
+ """Exception information will be included in attributes"""
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+ try:
+ raise ZeroDivisionError("division by zero")
+ except ZeroDivisionError:
+ with self.assertLogs(level=logging.ERROR):
+ logger.error("Zero Division Error", exc_info=False)
+ args, _ = emitter_mock.emit.call_args_list[0]
+ log_record = args[0]
+
+ self.assertIsNotNone(log_record)
+ self.assertEqual(log_record.body, "Zero Division Error")
+ self.assertNotIn(SpanAttributes.EXCEPTION_TYPE, log_record.attributes)
+ self.assertNotIn(
+ SpanAttributes.EXCEPTION_MESSAGE, log_record.attributes
+ )
+ self.assertNotIn(
+ SpanAttributes.EXCEPTION_STACKTRACE, log_record.attributes
+ )
+
+ def test_log_record_trace_correlation(self):
+ emitter_provider_mock = Mock(spec=LoggerProvider)
+ emitter_mock = APIGetLogger(
+ __name__, logger_provider=emitter_provider_mock
+ )
+ logger = get_logger(logger_provider=emitter_provider_mock)
+
+ tracer = trace.TracerProvider().get_tracer(__name__)
+ with tracer.start_as_current_span("test") as span:
+ with self.assertLogs(level=logging.CRITICAL):
+ logger.critical("Critical message within span")
+
+ args, _ = emitter_mock.emit.call_args_list[0]
+ log_record = args[0]
+ self.assertEqual(log_record.body, "Critical message within span")
+ self.assertEqual(log_record.severity_text, "CRITICAL")
+ self.assertEqual(log_record.severity_number, SeverityNumber.FATAL)
+ span_context = span.get_span_context()
+ self.assertEqual(log_record.trace_id, span_context.trace_id)
+ self.assertEqual(log_record.span_id, span_context.span_id)
+ self.assertEqual(log_record.trace_flags, span_context.trace_flags)
diff --git a/opentelemetry-sdk/tests/logs/test_log_limits.py b/opentelemetry-sdk/tests/logs/test_log_limits.py
new file mode 100644
index 0000000000..c2135b6569
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_log_limits.py
@@ -0,0 +1,40 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry.sdk._logs import LogLimits
+from opentelemetry.sdk._logs._internal import (
+ _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT,
+)
+
+
+class TestLogLimits(unittest.TestCase):
+ def test_log_limits_repr_unset(self):
+ expected = f"LogLimits(max_attributes={_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}, max_attribute_length=None)"
+ limits = str(LogLimits())
+
+ self.assertEqual(expected, limits)
+
+ def test_log_limits_max_attributes(self):
+ expected = 1
+ limits = LogLimits(max_attributes=1)
+
+ self.assertEqual(expected, limits.max_attributes)
+
+ def test_log_limits_max_attribute_length(self):
+ expected = 1
+ limits = LogLimits(max_attribute_length=1)
+
+ self.assertEqual(expected, limits.max_attribute_length)
diff --git a/opentelemetry-sdk/tests/logs/test_log_record.py b/opentelemetry-sdk/tests/logs/test_log_record.py
new file mode 100644
index 0000000000..1f0bd785a8
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_log_record.py
@@ -0,0 +1,107 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import unittest
+
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk._logs import LogLimits, LogRecord
+
+
+class TestLogRecord(unittest.TestCase):
+ def test_log_record_to_json(self):
+ expected = json.dumps(
+ {
+ "body": "a log line",
+ "severity_number": "None",
+ "severity_text": None,
+ "attributes": None,
+ "dropped_attributes": 0,
+ "timestamp": "1970-01-01T00:00:00.000000Z",
+ "trace_id": "",
+ "span_id": "",
+ "trace_flags": None,
+ "resource": "",
+ },
+ indent=4,
+ )
+ actual = LogRecord(
+ timestamp=0,
+ body="a log line",
+ ).to_json()
+ self.assertEqual(expected, actual)
+
+ def test_log_record_bounded_attributes(self):
+ attr = {"key": "value"}
+
+ result = LogRecord(timestamp=0, body="a log line", attributes=attr)
+
+ self.assertIsInstance(result.attributes, BoundedAttributes)
+
+ def test_log_record_dropped_attributes_empty_limits(self):
+ attr = {"key": "value"}
+
+ result = LogRecord(timestamp=0, body="a log line", attributes=attr)
+
+ self.assertEqual(result.dropped_attributes, 0)
+
+ def test_log_record_dropped_attributes_set_limits_max_attribute(self):
+ attr = {"key": "value", "key2": "value2"}
+ limits = LogLimits(
+ max_attributes=1,
+ )
+
+ result = LogRecord(
+ timestamp=0, body="a log line", attributes=attr, limits=limits
+ )
+ self.assertEqual(result.dropped_attributes, 1)
+
+ def test_log_record_dropped_attributes_set_limits_max_attribute_length(
+ self,
+ ):
+ attr = {"key": "value", "key2": "value2"}
+ expected = {"key": "v", "key2": "v"}
+ limits = LogLimits(
+ max_attribute_length=1,
+ )
+
+ result = LogRecord(
+ timestamp=0, body="a log line", attributes=attr, limits=limits
+ )
+ self.assertEqual(result.dropped_attributes, 0)
+ self.assertEqual(expected, result.attributes)
+
+ def test_log_record_dropped_attributes_set_limits(self):
+ attr = {"key": "value", "key2": "value2"}
+ expected = {"key2": "v"}
+ limits = LogLimits(
+ max_attributes=1,
+ max_attribute_length=1,
+ )
+
+ result = LogRecord(
+ timestamp=0, body="a log line", attributes=attr, limits=limits
+ )
+ self.assertEqual(result.dropped_attributes, 1)
+ self.assertEqual(expected, result.attributes)
+
+ def test_log_record_dropped_attributes_unset_limits(self):
+ attr = {"key": "value", "key2": "value2"}
+ limits = LogLimits()
+
+ result = LogRecord(
+ timestamp=0, body="a log line", attributes=attr, limits=limits
+ )
+ self.assertEqual(result.dropped_attributes, 0)
+ self.assertEqual(attr, result.attributes)
diff --git a/opentelemetry-sdk/tests/logs/test_logs.py b/opentelemetry-sdk/tests/logs/test_logs.py
new file mode 100644
index 0000000000..935b5ee249
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_logs.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=protected-access
+
+import unittest
+from unittest.mock import patch
+
+from opentelemetry.sdk._logs import LoggerProvider
+from opentelemetry.sdk._logs._internal import (
+ SynchronousMultiLogRecordProcessor,
+)
+from opentelemetry.sdk.resources import Resource
+
+
+class TestLoggerProvider(unittest.TestCase):
+ def test_resource(self):
+ """
+ `LoggerProvider` provides a way to allow a `Resource` to be specified.
+ """
+
+ logger_provider_0 = LoggerProvider()
+ logger_provider_1 = LoggerProvider()
+
+ self.assertEqual(
+ logger_provider_0.resource,
+ logger_provider_1.resource,
+ )
+ self.assertIsInstance(logger_provider_0.resource, Resource)
+ self.assertIsInstance(logger_provider_1.resource, Resource)
+
+ resource = Resource({"key": "value"})
+ self.assertIs(LoggerProvider(resource=resource).resource, resource)
+
+ def test_get_logger(self):
+ """
+ `LoggerProvider.get_logger` arguments are used to create an
+ `InstrumentationScope` object on the created `Logger`.
+ """
+
+ logger = LoggerProvider().get_logger(
+ "name",
+ version="version",
+ schema_url="schema_url",
+ )
+
+ self.assertEqual(logger._instrumentation_scope.name, "name")
+ self.assertEqual(logger._instrumentation_scope.version, "version")
+ self.assertEqual(
+ logger._instrumentation_scope.schema_url, "schema_url"
+ )
+
+ @patch.object(Resource, "create")
+ def test_logger_provider_init(self, resource_patch):
+ logger_provider = LoggerProvider()
+ resource_patch.assert_called_once()
+ self.assertIsNotNone(logger_provider._resource)
+ self.assertIsInstance(
+ logger_provider._multi_log_record_processor,
+ SynchronousMultiLogRecordProcessor,
+ )
+ self.assertIsNotNone(logger_provider._at_exit_handler)
diff --git a/opentelemetry-sdk/tests/logs/test_multi_log_processor.py b/opentelemetry-sdk/tests/logs/test_multi_log_processor.py
new file mode 100644
index 0000000000..7f4bbc32c1
--- /dev/null
+++ b/opentelemetry-sdk/tests/logs/test_multi_log_processor.py
@@ -0,0 +1,197 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint:disable=protected-access,no-self-use,no-member
+
+import logging
+import threading
+import time
+import unittest
+from abc import ABC, abstractmethod
+from unittest.mock import Mock
+
+from opentelemetry._logs import SeverityNumber
+from opentelemetry.sdk._logs._internal import (
+ ConcurrentMultiLogRecordProcessor,
+ LoggerProvider,
+ LoggingHandler,
+ LogRecord,
+ LogRecordProcessor,
+ SynchronousMultiLogRecordProcessor,
+)
+
+
+class AnotherLogRecordProcessor(LogRecordProcessor):
+ def __init__(self, exporter, logs_list):
+ self._exporter = exporter
+ self._log_list = logs_list
+ self._closed = False
+
+ def emit(self, log_data):
+ if self._closed:
+ return
+ self._log_list.append(
+ (log_data.log_record.body, log_data.log_record.severity_text)
+ )
+
+ def shutdown(self):
+ self._closed = True
+ self._exporter.shutdown()
+
+ def force_flush(self, timeout_millis=30000):
+ self._log_list.clear()
+ return True
+
+
+class TestLogRecordProcessor(unittest.TestCase):
+ def test_log_record_processor(self):
+ provider = LoggerProvider()
+ handler = LoggingHandler(logger_provider=provider)
+
+ logs_list_1 = []
+ processor1 = AnotherLogRecordProcessor(Mock(), logs_list_1)
+ logs_list_2 = []
+ processor2 = AnotherLogRecordProcessor(Mock(), logs_list_2)
+
+ logger = logging.getLogger("test.span.processor")
+ logger.addHandler(handler)
+
+ # Test with no processor added
+ with self.assertLogs(level=logging.CRITICAL):
+ logger.critical("Odisha, we have another major cyclone")
+
+ self.assertEqual(len(logs_list_1), 0)
+ self.assertEqual(len(logs_list_2), 0)
+
+ # Add one processor
+ provider.add_log_record_processor(processor1)
+ with self.assertLogs(level=logging.WARNING):
+ logger.warning("Brace yourself")
+ with self.assertLogs(level=logging.ERROR):
+ logger.error("Some error message")
+
+ expected_list_1 = [
+ ("Brace yourself", "WARNING"),
+ ("Some error message", "ERROR"),
+ ]
+ self.assertEqual(logs_list_1, expected_list_1)
+
+ # Add another processor
+ provider.add_log_record_processor(processor2)
+ with self.assertLogs(level=logging.CRITICAL):
+ logger.critical("Something disastrous")
+ expected_list_1.append(("Something disastrous", "CRITICAL"))
+
+ expected_list_2 = [("Something disastrous", "CRITICAL")]
+
+ self.assertEqual(logs_list_1, expected_list_1)
+ self.assertEqual(logs_list_2, expected_list_2)
+
+
+class MultiLogRecordProcessorTestBase(ABC):
+ @abstractmethod
+ def _get_multi_log_record_processor(self):
+ pass
+
+ def make_record(self):
+ return LogRecord(
+ timestamp=1622300111608942000,
+ severity_text="WARNING",
+ severity_number=SeverityNumber.WARN,
+ body="Warning message",
+ )
+
+ def test_on_emit(self):
+ multi_log_record_processor = self._get_multi_log_record_processor()
+ mocks = [Mock(spec=LogRecordProcessor) for _ in range(5)]
+ for mock in mocks:
+ multi_log_record_processor.add_log_record_processor(mock)
+ record = self.make_record()
+ multi_log_record_processor.emit(record)
+ for mock in mocks:
+ mock.emit.assert_called_with(record)
+ multi_log_record_processor.shutdown()
+
+ def test_on_shutdown(self):
+ multi_log_record_processor = self._get_multi_log_record_processor()
+ mocks = [Mock(spec=LogRecordProcessor) for _ in range(5)]
+ for mock in mocks:
+ multi_log_record_processor.add_log_record_processor(mock)
+ multi_log_record_processor.shutdown()
+ for mock in mocks:
+ mock.shutdown.assert_called_once_with()
+
+ def test_on_force_flush(self):
+ multi_log_record_processor = self._get_multi_log_record_processor()
+ mocks = [Mock(spec=LogRecordProcessor) for _ in range(5)]
+ for mock in mocks:
+ multi_log_record_processor.add_log_record_processor(mock)
+ ret_value = multi_log_record_processor.force_flush(100)
+
+ self.assertTrue(ret_value)
+ for mock_processor in mocks:
+ self.assertEqual(1, mock_processor.force_flush.call_count)
+
+
+class TestSynchronousMultiLogRecordProcessor(
+ MultiLogRecordProcessorTestBase, unittest.TestCase
+):
+ def _get_multi_log_record_processor(self):
+ return SynchronousMultiLogRecordProcessor()
+
+ def test_force_flush_delayed(self):
+ multi_log_record_processor = SynchronousMultiLogRecordProcessor()
+
+ def delay(_):
+ time.sleep(0.09)
+
+ mock_processor1 = Mock(spec=LogRecordProcessor)
+ mock_processor1.force_flush = Mock(side_effect=delay)
+ multi_log_record_processor.add_log_record_processor(mock_processor1)
+ mock_processor2 = Mock(spec=LogRecordProcessor)
+ multi_log_record_processor.add_log_record_processor(mock_processor2)
+
+ ret_value = multi_log_record_processor.force_flush(50)
+ self.assertFalse(ret_value)
+ self.assertEqual(mock_processor1.force_flush.call_count, 1)
+ self.assertEqual(mock_processor2.force_flush.call_count, 0)
+
+
+class TestConcurrentMultiLogRecordProcessor(
+ MultiLogRecordProcessorTestBase, unittest.TestCase
+):
+ def _get_multi_log_record_processor(self):
+ return ConcurrentMultiLogRecordProcessor()
+
+ def test_force_flush_delayed(self):
+ multi_log_record_processor = ConcurrentMultiLogRecordProcessor()
+ wait_event = threading.Event()
+
+ def delay(_):
+ wait_event.wait()
+
+ mock1 = Mock(spec=LogRecordProcessor)
+ mock1.force_flush = Mock(side_effect=delay)
+ mocks = [Mock(spec=LogRecordProcessor) for _ in range(5)]
+ mocks = [mock1] + mocks
+ for mock_processor in mocks:
+ multi_log_record_processor.add_log_record_processor(mock_processor)
+
+ ret_value = multi_log_record_processor.force_flush(50)
+ wait_event.set()
+
+ self.assertFalse(ret_value)
+ for mock in mocks:
+ self.assertEqual(1, mock.force_flush.call_count)
+ multi_log_record_processor.shutdown()
diff --git a/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponent_mapping.py b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponent_mapping.py
new file mode 100644
index 0000000000..96ba399181
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponent_mapping.py
@@ -0,0 +1,392 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import inf
+from sys import float_info, version_info
+from unittest.mock import patch
+
+from pytest import mark
+
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.errors import (
+ MappingUnderflowError,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.exponent_mapping import (
+ ExponentMapping,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.ieee_754 import (
+ MAX_NORMAL_EXPONENT,
+ MAX_NORMAL_VALUE,
+ MIN_NORMAL_EXPONENT,
+ MIN_NORMAL_VALUE,
+)
+from opentelemetry.test import TestCase
+
+if version_info >= (3, 9):
+ from math import nextafter
+
+
+def right_boundary(scale: int, index: int) -> float:
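+ # For scale <= 0 the histogram base is 2 ** (2 ** -scale), so the
+ # boundary base ** index is obtained by squaring 2 ** index once per
+ # negative scale step: (2 ** index) ** (2 ** -scale) == base ** index.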
+ result = 2**index
+
+ for _ in range(scale, 0):
+ result = result * result
+
+ return result
+
+
+class TestExponentMapping(TestCase):
+ def test_singleton(self):
+
+ self.assertIs(ExponentMapping(-3), ExponentMapping(-3))
+ self.assertIsNot(ExponentMapping(-3), ExponentMapping(-5))
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal.exponential_histogram.mapping."
+ "exponent_mapping.ExponentMapping._mappings",
+ new={},
+ )
+ @patch(
+ "opentelemetry.sdk.metrics._internal.exponential_histogram.mapping."
+ "exponent_mapping.ExponentMapping._init"
+ )
+ def test_init_called_once(self, mock_init):
+
+ ExponentMapping(-3)
+ ExponentMapping(-3)
+
+ mock_init.assert_called_once()
+
+ def test_exponent_mapping_0(self):
+
+ with self.assertNotRaises(Exception):
+ ExponentMapping(0)
+
+ def test_exponent_mapping_zero(self):
+
+ exponent_mapping = ExponentMapping(0)
+
+ # This is the decimal equivalent of hexadecimal 1.1 (1 + 1/16 == 1.0625)
+ hex_1_1 = 1 + (1 / 16)
+
+ # Testing with values near +inf
+ self.assertEqual(
+ exponent_mapping.map_to_index(MAX_NORMAL_VALUE),
+ MAX_NORMAL_EXPONENT,
+ )
+ self.assertEqual(exponent_mapping.map_to_index(MAX_NORMAL_VALUE), 1023)
+ self.assertEqual(exponent_mapping.map_to_index(2**1023), 1022)
+ self.assertEqual(exponent_mapping.map_to_index(2**1022), 1021)
+ self.assertEqual(
+ exponent_mapping.map_to_index(hex_1_1 * (2**1023)), 1023
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(hex_1_1 * (2**1022)), 1022
+ )
+
+ # Testing with values near 1
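+ # (an exact power of two maps to the index just below it, since
+ # base ** index < value <= base ** (index + 1))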
+ self.assertEqual(exponent_mapping.map_to_index(4), 1)
+ self.assertEqual(exponent_mapping.map_to_index(3), 1)
+ self.assertEqual(exponent_mapping.map_to_index(2), 0)
+ self.assertEqual(exponent_mapping.map_to_index(1), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.75), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.51), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.5), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.26), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.25), -3)
+ self.assertEqual(exponent_mapping.map_to_index(0.126), -3)
+ self.assertEqual(exponent_mapping.map_to_index(0.125), -4)
+
+ # Testing with values near 0
+ self.assertEqual(exponent_mapping.map_to_index(2**-1022), -1023)
+ self.assertEqual(
+ exponent_mapping.map_to_index(hex_1_1 * (2**-1022)), -1022
+ )
+ self.assertEqual(exponent_mapping.map_to_index(2**-1021), -1022)
+ self.assertEqual(
+ exponent_mapping.map_to_index(hex_1_1 * (2**-1021)), -1021
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(2**-1022), MIN_NORMAL_EXPONENT - 1
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(2**-1021), MIN_NORMAL_EXPONENT
+ )
+ # The smallest subnormal value is 2 ** -1074 = 5e-324.
+ # This value is also the result of:
+ # s = 1
+ # while s / 2:
+ # s = s / 2
+ # s == 5e-324
+ self.assertEqual(
+ exponent_mapping.map_to_index(2**-1074), MIN_NORMAL_EXPONENT - 1
+ )
+
+ def test_exponent_mapping_min_scale(self):
+
+ exponent_mapping = ExponentMapping(ExponentMapping._min_scale)
+ self.assertEqual(exponent_mapping.map_to_index(1.000001), 0)
+ self.assertEqual(exponent_mapping.map_to_index(1), -1)
+ self.assertEqual(exponent_mapping.map_to_index(float_info.max), 0)
+ self.assertEqual(exponent_mapping.map_to_index(float_info.min), -1)
+
+ def test_invalid_scale(self):
+ with self.assertRaises(Exception):
+ ExponentMapping(1)
+
+ with self.assertRaises(Exception):
+ ExponentMapping(ExponentMapping._min_scale - 1)
+
+ def test_exponent_mapping_neg_one(self):
+ exponent_mapping = ExponentMapping(-1)
+ self.assertEqual(exponent_mapping.map_to_index(17), 2)
+ self.assertEqual(exponent_mapping.map_to_index(16), 1)
+ self.assertEqual(exponent_mapping.map_to_index(15), 1)
+ self.assertEqual(exponent_mapping.map_to_index(9), 1)
+ self.assertEqual(exponent_mapping.map_to_index(8), 1)
+ self.assertEqual(exponent_mapping.map_to_index(5), 1)
+ self.assertEqual(exponent_mapping.map_to_index(4), 0)
+ self.assertEqual(exponent_mapping.map_to_index(3), 0)
+ self.assertEqual(exponent_mapping.map_to_index(2), 0)
+ self.assertEqual(exponent_mapping.map_to_index(1.5), 0)
+ self.assertEqual(exponent_mapping.map_to_index(1), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.75), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.5), -1)
+ self.assertEqual(exponent_mapping.map_to_index(0.25), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.20), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.13), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.125), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.10), -2)
+ self.assertEqual(exponent_mapping.map_to_index(0.0625), -3)
+ self.assertEqual(exponent_mapping.map_to_index(0.06), -3)
+
+ def test_exponent_mapping_neg_four(self):
+ exponent_mapping = ExponentMapping(-4)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x1)), -1)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x10)), 0)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x100)), 0)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x1000)), 0)
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x10000)), 0
+ ) # base == 2 ** 16
+ self.assertEqual(exponent_mapping.map_to_index(float(0x100000)), 1)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x1000000)), 1)
+ self.assertEqual(exponent_mapping.map_to_index(float(0x10000000)), 1)
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x100000000)), 1
+ ) # base == 2 ** 32
+
+ self.assertEqual(exponent_mapping.map_to_index(float(0x1000000000)), 2)
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x10000000000)), 2
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x100000000000)), 2
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x1000000000000)), 2
+ ) # base == 2 ** 48
+
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x10000000000000)), 3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x100000000000000)), 3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x1000000000000000)), 3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x10000000000000000)), 3
+ ) # base == 2 ** 64
+
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x100000000000000000)), 4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x1000000000000000000)), 4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x10000000000000000000)), 4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x100000000000000000000)), 4
+ ) # base == 2 ** 80
+ self.assertEqual(
+ exponent_mapping.map_to_index(float(0x1000000000000000000000)), 5
+ )
+
+ self.assertEqual(exponent_mapping.map_to_index(1 / float(0x1)), -1)
+ self.assertEqual(exponent_mapping.map_to_index(1 / float(0x10)), -1)
+ self.assertEqual(exponent_mapping.map_to_index(1 / float(0x100)), -1)
+ self.assertEqual(exponent_mapping.map_to_index(1 / float(0x1000)), -1)
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x10000)), -2
+ ) # base == 2 ** -16
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x100000)), -2
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x1000000)), -2
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x10000000)), -2
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x100000000)), -3
+ ) # base == 2 ** -32
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x1000000000)), -3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x10000000000)), -3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x100000000000)), -3
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x1000000000000)), -4
+ ) # base == 2 ** -48
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x10000000000000)), -4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x100000000000000)), -4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x1000000000000000)), -4
+ )
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x10000000000000000)), -5
+ ) # base == 2 ** -64
+ self.assertEqual(
+ exponent_mapping.map_to_index(1 / float(0x100000000000000000)), -5
+ )
+
+ self.assertEqual(exponent_mapping.map_to_index(float_info.max), 63)
+ self.assertEqual(exponent_mapping.map_to_index(2**1023), 63)
+ self.assertEqual(exponent_mapping.map_to_index(2**1019), 63)
+ self.assertEqual(exponent_mapping.map_to_index(2**1009), 63)
+ self.assertEqual(exponent_mapping.map_to_index(2**1008), 62)
+ self.assertEqual(exponent_mapping.map_to_index(2**1007), 62)
+ self.assertEqual(exponent_mapping.map_to_index(2**1000), 62)
+ self.assertEqual(exponent_mapping.map_to_index(2**993), 62)
+ self.assertEqual(exponent_mapping.map_to_index(2**992), 61)
+ self.assertEqual(exponent_mapping.map_to_index(2**991), 61)
+
+ self.assertEqual(exponent_mapping.map_to_index(2**-1074), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1073), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1072), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1057), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1056), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1041), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1040), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1025), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1024), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1023), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1022), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1009), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1008), -64)
+ self.assertEqual(exponent_mapping.map_to_index(2**-1007), -63)
+ self.assertEqual(exponent_mapping.map_to_index(2**-993), -63)
+ self.assertEqual(exponent_mapping.map_to_index(2**-992), -63)
+ self.assertEqual(exponent_mapping.map_to_index(2**-991), -62)
+ self.assertEqual(exponent_mapping.map_to_index(2**-977), -62)
+ self.assertEqual(exponent_mapping.map_to_index(2**-976), -62)
+ self.assertEqual(exponent_mapping.map_to_index(2**-975), -61)
+
+ def test_exponent_index_max(self):
+
+ for scale in range(
+ ExponentMapping._min_scale, ExponentMapping._max_scale
+ ):
+ exponent_mapping = ExponentMapping(scale)
+
+ index = exponent_mapping.map_to_index(MAX_NORMAL_VALUE)
+
+ max_index = ((MAX_NORMAL_EXPONENT + 1) >> -scale) - 1
+
+ self.assertEqual(index, max_index)
+
+ boundary = exponent_mapping.get_lower_boundary(index)
+
+ self.assertEqual(boundary, right_boundary(scale, max_index))
+
+ with self.assertRaises(Exception):
+ exponent_mapping.get_lower_boundary(index + 1)
+
+ @mark.skipif(
+ version_info < (3, 9),
+ reason="math.nextafter is only available for Python >= 3.9",
+ )
+ def test_exponent_index_min(self):
+ for scale in range(
+ ExponentMapping._min_scale, ExponentMapping._max_scale + 1
+ ):
+ exponent_mapping = ExponentMapping(scale)
+
+ min_index = exponent_mapping.map_to_index(MIN_NORMAL_VALUE)
+ boundary = exponent_mapping.get_lower_boundary(min_index)
+
+ correct_min_index = MIN_NORMAL_EXPONENT >> -scale
+
+ if MIN_NORMAL_EXPONENT % (1 << -scale) == 0:
+ correct_min_index -= 1
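+ # e.g. at scale == -1 this gives (-1022 >> 1) - 1 == -512, since
+ # MIN_NORMAL_EXPONENT is a multiple of 1 << 1.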
+
+ # We do not check that correct_min_index stays above some smallest
+ # integer, because Python integers are unbounded; there is no
+ # underflow to guard against here.
+
+ self.assertEqual(correct_min_index, min_index)
+
+ correct_boundary = right_boundary(scale, correct_min_index)
+
+ self.assertEqual(correct_boundary, boundary)
+ self.assertGreater(
+ right_boundary(scale, correct_min_index + 1), boundary
+ )
+
+ self.assertEqual(
+ correct_min_index,
+ exponent_mapping.map_to_index(MIN_NORMAL_VALUE / 2),
+ )
+ self.assertEqual(
+ correct_min_index,
+ exponent_mapping.map_to_index(MIN_NORMAL_VALUE / 3),
+ )
+ self.assertEqual(
+ correct_min_index,
+ exponent_mapping.map_to_index(MIN_NORMAL_VALUE / 100),
+ )
+ self.assertEqual(
+ correct_min_index, exponent_mapping.map_to_index(2**-1050)
+ )
+ self.assertEqual(
+ correct_min_index, exponent_mapping.map_to_index(2**-1073)
+ )
+ self.assertEqual(
+ correct_min_index,
+ exponent_mapping.map_to_index(1.1 * (2**-1073)),
+ )
+ self.assertEqual(
+ correct_min_index, exponent_mapping.map_to_index(2**-1074)
+ )
+
+ with self.assertRaises(MappingUnderflowError):
+ exponent_mapping.get_lower_boundary(min_index - 1)
+
+ self.assertEqual(
+ exponent_mapping.map_to_index(
+ nextafter(MIN_NORMAL_VALUE, inf)
+ ),
+ MIN_NORMAL_EXPONENT >> -scale,
+ )
diff --git a/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponential_bucket_histogram_aggregation.py b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponential_bucket_histogram_aggregation.py
new file mode 100644
index 0000000000..311f00a0b0
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_exponential_bucket_histogram_aggregation.py
@@ -0,0 +1,1018 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from itertools import permutations
+from logging import WARNING
+from math import ldexp
+from sys import float_info
+from types import MethodType
+from unittest.mock import Mock, patch
+
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ AggregationTemporality,
+ _ExponentialBucketHistogramAggregation,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.buckets import (
+ Buckets,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.exponent_mapping import (
+ ExponentMapping,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.ieee_754 import (
+ MAX_NORMAL_EXPONENT,
+ MIN_NORMAL_EXPONENT,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.logarithm_mapping import (
+ LogarithmMapping,
+)
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics.view import (
+ ExponentialBucketHistogramAggregation,
+)
+from opentelemetry.test import TestCase
+
+
+def get_counts(buckets: Buckets) -> list:
+
+ counts = []
+
+ for index in range(len(buckets)):
+ counts.append(buckets[index])
+
+ return counts
+
+
+def center_val(mapping: ExponentMapping, index: int) -> float:
+ return (
+ mapping.get_lower_boundary(index)
+ + mapping.get_lower_boundary(index + 1)
+ ) / 2
+
+
+def swap(
+ first: _ExponentialBucketHistogramAggregation,
+ second: _ExponentialBucketHistogramAggregation,
+):
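+ # Test helper: exchanges the full internal state of two aggregations
+ # (exercised by test_move_into below).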
+
+ for attribute in [
+ "_positive",
+ "_negative",
+ "_sum",
+ "_count",
+ "_zero_count",
+ "_min",
+ "_max",
+ "_mapping",
+ ]:
+ temp = getattr(first, attribute)
+ setattr(first, attribute, getattr(second, attribute))
+ setattr(second, attribute, temp)
+
+
+class TestExponentialBucketHistogramAggregation(TestCase):
+ @patch("opentelemetry.sdk.metrics._internal.aggregation.LogarithmMapping")
+ def test_create_aggregation(self, mock_logarithm_mapping):
+ exponential_bucket_histogram_aggregation = (
+ ExponentialBucketHistogramAggregation()
+ )._create_aggregation(Mock(), Mock(), Mock())
+
+ self.assertEqual(
+ exponential_bucket_histogram_aggregation._max_scale, 20
+ )
+
+ mock_logarithm_mapping.assert_called_with(20)
+
+ exponential_bucket_histogram_aggregation = (
+ ExponentialBucketHistogramAggregation(max_scale=10)
+ )._create_aggregation(Mock(), Mock(), Mock())
+
+ self.assertEqual(
+ exponential_bucket_histogram_aggregation._max_scale, 10
+ )
+
+ mock_logarithm_mapping.assert_called_with(10)
+
+ with self.assertLogs(level=WARNING):
+ exponential_bucket_histogram_aggregation = (
+ ExponentialBucketHistogramAggregation(max_scale=100)
+ )._create_aggregation(Mock(), Mock(), Mock())
+
+ self.assertEqual(
+ exponential_bucket_histogram_aggregation._max_scale, 100
+ )
+
+ mock_logarithm_mapping.assert_called_with(100)
+
+ def assertInEpsilon(self, first, second, epsilon):
+ self.assertLessEqual(first, (second * (1 + epsilon)))
+ self.assertGreaterEqual(first, (second * (1 - epsilon)))
+
+ def require_equal(self, a, b):
+
+ if a._sum == 0 or b._sum == 0:
+ self.assertAlmostEqual(a._sum, b._sum, delta=1e-6)
+ else:
+ self.assertInEpsilon(a._sum, b._sum, 1e-6)
+
+ self.assertEqual(a._count, b._count)
+ self.assertEqual(a._zero_count, b._zero_count)
+
+ self.assertEqual(a._mapping.scale, b._mapping.scale)
+
+ self.assertEqual(len(a._positive), len(b._positive))
+ self.assertEqual(len(a._negative), len(b._negative))
+
+ for index in range(len(a._positive)):
+ self.assertEqual(a._positive[index], b._positive[index])
+
+ for index in range(len(a._negative)):
+ self.assertEqual(a._negative[index], b._negative[index])
+
+ def test_alternating_growth_0(self):
+ """
+ Tests insertion of [2, 4, 1]. The index of 2 (i.e., 0) becomes
+ `indexBase`, the 4 goes to its right and the 1 goes in the last
+ position of the backing array. With 3 binary orders of magnitude
+ and MaxSize=4, this must finish with scale=0; with minimum value 1
+ this must finish with offset=-1 (all scales).
+
+ """
+
+ # The corresponding Go test is TestAlternatingGrowth1 where:
+ # agg := NewFloat64(NewConfig(WithMaxSize(4)))
+ # agg is an instance of github.com/lightstep/otel-launcher-go/lightstep/sdk/metric/aggregator/histogram/structure.Histogram[float64]
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=4)
+ )
+
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(4, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(1, Mock()))
+
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.offset, -1
+ )
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, 0)
+ self.assertEqual(
+ get_counts(exponential_histogram_aggregation._positive), [1, 1, 1]
+ )
+
+ def test_alternating_growth_1(self):
+ """
+ Tests insertion of [2, 2, 2, 1, 8, 0.5]. The test proceeds as
+ above but then downscales once further to scale=-1, thus index -1
+ holds range [0.25, 1.0), index 0 holds range [1.0, 4), index 1
+ holds range [4, 16).
+ """
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=4)
+ )
+
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(1, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(8, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(0.5, Mock()))
+
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.offset, -1
+ )
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, -1)
+ self.assertEqual(
+ get_counts(exponential_histogram_aggregation._positive), [2, 3, 1]
+ )
+
+ def test_permutations(self):
+ """
+ Tests that every permutation of certain sequences with maxSize=2
+ results in the same scale=-1 histogram.
+ """
+
+ for test_values, expected in [
+ [
+ [0.5, 1.0, 2.0],
+ {
+ "scale": -1,
+ "offset": -1,
+ "len": 2,
+ "at_0": 2,
+ "at_1": 1,
+ },
+ ],
+ [
+ [1.0, 2.0, 4.0],
+ {
+ "scale": -1,
+ "offset": -1,
+ "len": 2,
+ "at_0": 1,
+ "at_1": 2,
+ },
+ ],
+ [
+ [0.25, 0.5, 1],
+ {
+ "scale": -1,
+ "offset": -2,
+ "len": 2,
+ "at_0": 1,
+ "at_1": 2,
+ },
+ ],
+ ]:
+
+ for permutation in permutations(test_values):
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(), Mock(), max_size=2
+ )
+ )
+
+ for value in permutation:
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+
+ self.assertEqual(
+ exponential_histogram_aggregation._mapping.scale,
+ expected["scale"],
+ )
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.offset,
+ expected["offset"],
+ )
+ self.assertEqual(
+ len(exponential_histogram_aggregation._positive),
+ expected["len"],
+ )
+ self.assertEqual(
+ exponential_histogram_aggregation._positive[0],
+ expected["at_0"],
+ )
+ self.assertEqual(
+ exponential_histogram_aggregation._positive[1],
+ expected["at_1"],
+ )
+
+ def test_ascending_sequence(self):
+
+ for max_size in [3, 4, 6, 9]:
+ for offset in range(-5, 6):
+ for init_scale in [0, 4]:
+ self.ascending_sequence_test(max_size, offset, init_scale)
+
+ def ascending_sequence_test(
+ self, max_size: int, offset: int, init_scale: int
+ ):
+
+ for step in range(max_size, max_size * 4):
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(), Mock(), max_size=max_size
+ )
+ )
+
+ if init_scale <= 0:
+ mapping = ExponentMapping(init_scale)
+ else:
+ mapping = LogarithmMapping(init_scale)
+
+ min_val = center_val(mapping, offset)
+ max_val = center_val(mapping, offset + step)
+
+ sum_ = 0.0
+
+ for index in range(max_size):
+ value = center_val(mapping, offset + index)
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+ sum_ += value
+
+ self.assertEqual(
+ init_scale, exponential_histogram_aggregation._mapping._scale
+ )
+ self.assertEqual(
+ offset, exponential_histogram_aggregation._positive.offset
+ )
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(max_val, Mock())
+ )
+ sum_ += max_val
+
+ self.assertNotEqual(
+ 0, exponential_histogram_aggregation._positive[0]
+ )
+
+ # The maximum-index filled bucket is at or
+ # above the mid-point (otherwise we
+ # downscaled too much).
+
+ max_fill = 0
+ total_count = 0
+
+ for index in range(
+ len(exponential_histogram_aggregation._positive)
+ ):
+ total_count += exponential_histogram_aggregation._positive[
+ index
+ ]
+ if exponential_histogram_aggregation._positive[index] != 0:
+ max_fill = index
+
+ # FIXME the corresponding Go code is
+ # require.GreaterOrEqual(t, maxFill, uint32(maxSize)/2), make sure
+ # this is actually equivalent.
+ self.assertGreaterEqual(max_fill, int(max_size / 2))
+
+ self.assertGreaterEqual(max_size + 1, total_count)
+ self.assertGreaterEqual(
+ max_size + 1, exponential_histogram_aggregation._count
+ )
+ self.assertGreaterEqual(
+ sum_, exponential_histogram_aggregation._sum
+ )
+
+ if init_scale <= 0:
+ mapping = ExponentMapping(
+ exponential_histogram_aggregation._mapping.scale
+ )
+ else:
+ mapping = LogarithmMapping(
+ exponential_histogram_aggregation._mapping.scale
+ )
+ index = mapping.map_to_index(min_val)
+
+ self.assertEqual(
+ index, exponential_histogram_aggregation._positive.offset
+ )
+
+ index = mapping.map_to_index(max_val)
+
+ self.assertEqual(
+ index,
+ exponential_histogram_aggregation._positive.offset
+ + len(exponential_histogram_aggregation._positive)
+ - 1,
+ )
+
+ def test_reset(self):
+
+ for increment in [0x1, 0x100, 0x10000, 0x100000000, 0x200000000]:
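+
+ # Increments chosen at 8-, 16- and 32-bit boundaries mirror the Go
+ # test, which exercises width-based bucket backing arrays; Python's
+ # arbitrary-precision integers make this a plain sanity check here.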
+
+ def mock_increment(self, bucket_index: int) -> None:
+ """
+ Increments a bucket by ``increment`` instead of 1
+ """
+
+ self._counts[bucket_index] += increment
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(), Mock(), max_size=256
+ )
+ )
+
+ self.assertEqual(
+ exponential_histogram_aggregation._count,
+ exponential_histogram_aggregation._zero_count,
+ )
+ self.assertEqual(0, exponential_histogram_aggregation._sum)
+ expect = 0
+
+ for value in range(2, 257):
+ expect += value * increment
+ with patch.object(
+ exponential_histogram_aggregation._positive,
+ "increment_bucket",
+ MethodType(
+ mock_increment,
+ exponential_histogram_aggregation._positive,
+ ),
+ ):
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+ exponential_histogram_aggregation._count *= increment
+ exponential_histogram_aggregation._sum *= increment
+
+ self.assertEqual(expect, exponential_histogram_aggregation._sum)
+ self.assertEqual(
+ 255 * increment, exponential_histogram_aggregation._count
+ )
+
+ # See test_integer_aggregation about why scale is 5, len is
+ # 256 - ((1 << scale) - 1) and offset is (1 << scale) - 1.
+ scale = exponential_histogram_aggregation._mapping.scale
+ self.assertEqual(5, scale)
+
+ self.assertEqual(
+ 256 - ((1 << scale) - 1),
+ len(exponential_histogram_aggregation._positive),
+ )
+ self.assertEqual(
+ (1 << scale) - 1,
+ exponential_histogram_aggregation._positive.offset,
+ )
+
+ for index in range(0, 256):
+ self.assertLessEqual(
+ exponential_histogram_aggregation._positive[index],
+ 6 * increment,
+ )
+
+ def test_move_into(self):
+
+ exponential_histogram_aggregation_0 = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(), Mock(), max_size=256
+ )
+ )
+ exponential_histogram_aggregation_1 = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(), Mock(), max_size=256
+ )
+ )
+
+ expect = 0
+
+ for index in range(2, 257):
+ expect += index
+ exponential_histogram_aggregation_0.aggregate(
+ Measurement(index, Mock())
+ )
+ exponential_histogram_aggregation_0.aggregate(
+ Measurement(0, Mock())
+ )
+
+ swap(
+ exponential_histogram_aggregation_0,
+ exponential_histogram_aggregation_1,
+ )
+
+ self.assertEqual(0, exponential_histogram_aggregation_0._sum)
+ self.assertEqual(0, exponential_histogram_aggregation_0._count)
+ self.assertEqual(0, exponential_histogram_aggregation_0._zero_count)
+
+ self.assertEqual(expect, exponential_histogram_aggregation_1._sum)
+ self.assertEqual(255 * 2, exponential_histogram_aggregation_1._count)
+ self.assertEqual(255, exponential_histogram_aggregation_1._zero_count)
+
+ scale = exponential_histogram_aggregation_1._mapping.scale
+ self.assertEqual(5, scale)
+
+ self.assertEqual(
+ 256 - ((1 << scale) - 1),
+ len(exponential_histogram_aggregation_1._positive),
+ )
+ self.assertEqual(
+ (1 << scale) - 1,
+ exponential_histogram_aggregation_1._positive.offset,
+ )
+
+ for index in range(0, 256):
+ self.assertLessEqual(
+ exponential_histogram_aggregation_1._positive[index], 6
+ )
+
+ def test_very_large_numbers(self):
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=2)
+ )
+
+ def expect_balanced(count: int):
+ self.assertEqual(
+ 2, len(exponential_histogram_aggregation._positive)
+ )
+ self.assertEqual(
+ -1, exponential_histogram_aggregation._positive.offset
+ )
+ self.assertEqual(
+ count, exponential_histogram_aggregation._positive[0]
+ )
+ self.assertEqual(
+ count, exponential_histogram_aggregation._positive[1]
+ )
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**-100, Mock())
+ )
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**100, Mock())
+ )
+
+ self.assertLessEqual(
+ 2**100, (exponential_histogram_aggregation._sum * (1 + 1e-5))
+ )
+ self.assertGreaterEqual(
+ 2**100, (exponential_histogram_aggregation._sum * (1 - 1e-5))
+ )
+
+ self.assertEqual(2, exponential_histogram_aggregation._count)
+ self.assertEqual(-7, exponential_histogram_aggregation._mapping.scale)
+
+ expect_balanced(1)
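+
+ # With max_size=2, covering 2**-100 through 2**100 (a ratio of
+ # 2**200) needs buckets at least 2**100 wide; at scale -7 each
+ # bucket spans a factor of 2**(2**7) = 2**128.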
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**-127, Mock())
+ )
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**128, Mock())
+ )
+
+ self.assertLessEqual(
+ 2**128, (exponential_histogram_aggregation._sum * (1 + 1e-5))
+ )
+ self.assertGreaterEqual(
+ 2**128, (exponential_histogram_aggregation._sum * (1 - 1e-5))
+ )
+
+ self.assertEqual(4, exponential_histogram_aggregation._count)
+ self.assertEqual(-7, exponential_histogram_aggregation._mapping.scale)
+
+ expect_balanced(2)
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**-129, Mock())
+ )
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**255, Mock())
+ )
+
+ self.assertLessEqual(
+ 2**255, (exponential_histogram_aggregation._sum * (1 + 1e-5))
+ )
+ self.assertGreaterEqual(
+ 2**255, (exponential_histogram_aggregation._sum * (1 - 1e-5))
+ )
+ self.assertEqual(6, exponential_histogram_aggregation._count)
+ self.assertEqual(-8, exponential_histogram_aggregation._mapping.scale)
+
+ expect_balanced(3)
+
+ def test_full_range(self):
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=2)
+ )
+
+ exponential_histogram_aggregation.aggregate(
+ Measurement(float_info.max, Mock())
+ )
+ exponential_histogram_aggregation.aggregate(Measurement(1, Mock()))
+ exponential_histogram_aggregation.aggregate(
+ Measurement(2**-1074, Mock())
+ )
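+
+ # float_info.max and 2**-1074 (the smallest positive subnormal) are
+ # the extremes of the IEEE 754 double range, forcing the mapping
+ # down to its minimum scale.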
+
+ self.assertEqual(
+ float_info.max, exponential_histogram_aggregation._sum
+ )
+ self.assertEqual(3, exponential_histogram_aggregation._count)
+ self.assertEqual(
+ ExponentMapping._min_scale,
+ exponential_histogram_aggregation._mapping.scale,
+ )
+
+ self.assertEqual(
+ _ExponentialBucketHistogramAggregation._min_max_size,
+ len(exponential_histogram_aggregation._positive),
+ )
+ self.assertEqual(
+ -1, exponential_histogram_aggregation._positive.offset
+ )
+ self.assertLessEqual(exponential_histogram_aggregation._positive[0], 2)
+ self.assertLessEqual(exponential_histogram_aggregation._positive[1], 1)
+
+ def test_aggregator_min_max(self):
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ for value in [1, 3, 5, 7, 9]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+
+ self.assertEqual(1, exponential_histogram_aggregation._min)
+ self.assertEqual(9, exponential_histogram_aggregation._max)
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ for value in [-1, -3, -5, -7, -9]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+
+ self.assertEqual(-9, exponential_histogram_aggregation._min)
+ self.assertEqual(-1, exponential_histogram_aggregation._max)
+
+ def test_aggregator_copy_swap(self):
+
+ exponential_histogram_aggregation_0 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+ for value in [1, 3, 5, 7, 9, -1, -3, -5]:
+ exponential_histogram_aggregation_0.aggregate(
+ Measurement(value, Mock())
+ )
+ exponential_histogram_aggregation_1 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+ for value in [5, 4, 3, 2]:
+ exponential_histogram_aggregation_1.aggregate(
+ Measurement(value, Mock())
+ )
+ exponential_histogram_aggregation_2 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ swap(
+ exponential_histogram_aggregation_0,
+ exponential_histogram_aggregation_1,
+ )
+
+ exponential_histogram_aggregation_2._positive.__init__()
+ exponential_histogram_aggregation_2._negative.__init__()
+ exponential_histogram_aggregation_2._sum = 0
+ exponential_histogram_aggregation_2._count = 0
+ exponential_histogram_aggregation_2._zero_count = 0
+ exponential_histogram_aggregation_2._min = 0
+ exponential_histogram_aggregation_2._max = 0
+ exponential_histogram_aggregation_2._mapping = LogarithmMapping(
+ LogarithmMapping._max_scale
+ )
+
+ for attribute in [
+ "_positive",
+ "_negative",
+ "_sum",
+ "_count",
+ "_zero_count",
+ "_min",
+ "_max",
+ "_mapping",
+ ]:
+ setattr(
+ exponential_histogram_aggregation_2,
+ attribute,
+ getattr(exponential_histogram_aggregation_1, attribute),
+ )
+
+ self.require_equal(
+ exponential_histogram_aggregation_1,
+ exponential_histogram_aggregation_2,
+ )
+
+ def test_zero_count_by_increment(self):
+
+ exponential_histogram_aggregation_0 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ increment = 10
+
+ for _ in range(increment):
+ exponential_histogram_aggregation_0.aggregate(
+ Measurement(0, Mock())
+ )
+ exponential_histogram_aggregation_1 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ def mock_increment(self, bucket_index: int) -> None:
+ """
+ Increments a bucket by ``increment`` instead of 1
+ """
+
+ self._counts[bucket_index] += increment
+
+ with patch.object(
+ exponential_histogram_aggregation_1._positive,
+ "increment_bucket",
+ MethodType(
+ mock_increment, exponential_histogram_aggregation_1._positive
+ ),
+ ):
+ exponential_histogram_aggregation_1.aggregate(
+ Measurement(0, Mock())
+ )
+ exponential_histogram_aggregation_1._count *= increment
+ exponential_histogram_aggregation_1._zero_count *= increment
+
+ self.require_equal(
+ exponential_histogram_aggregation_0,
+ exponential_histogram_aggregation_1,
+ )
+
+ def test_one_count_by_increment(self):
+
+ exponential_histogram_aggregation_0 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ increment = 10
+
+ for _ in range(increment):
+ exponential_histogram_aggregation_0.aggregate(
+ Measurement(1, Mock())
+ )
+ exponential_histogram_aggregation_1 = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock())
+ )
+
+ def mock_increment(self, bucket_index: int) -> None:
+ """
+ Increments a bucket by ``increment`` instead of 1
+ """
+
+ self._counts[bucket_index] += increment
+
+ with patch.object(
+ exponential_histogram_aggregation_1._positive,
+ "increment_bucket",
+ MethodType(
+ mock_increment, exponential_histogram_aggregation_1._positive
+ ),
+ ):
+ exponential_histogram_aggregation_1.aggregate(
+ Measurement(1, Mock())
+ )
+ exponential_histogram_aggregation_1._count *= increment
+ exponential_histogram_aggregation_1._sum *= increment
+
+ self.require_equal(
+ exponential_histogram_aggregation_0,
+ exponential_histogram_aggregation_1,
+ )
+
+ def test_boundary_statistics(self):
+
+ total = MAX_NORMAL_EXPONENT - MIN_NORMAL_EXPONENT + 1
+
+ for scale in range(
+ LogarithmMapping._min_scale, LogarithmMapping._max_scale + 1
+ ):
+
+ above = 0
+ below = 0
+
+ if scale <= 0:
+ mapping = ExponentMapping(scale)
+ else:
+ mapping = LogarithmMapping(scale)
+
+ for exp in range(MIN_NORMAL_EXPONENT, MAX_NORMAL_EXPONENT + 1):
+ value = ldexp(1, exp)
+
+ index = mapping.map_to_index(value)
+
+ with self.assertNotRaises(Exception):
+ boundary = mapping.get_lower_boundary(index + 1)
+
+ if boundary < value:
+ above += 1
+ elif boundary > value:
+ below += 1
+
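+ # Roughly half of the reconstructed boundaries should fall just
+ # above and half just below the exact powers of two, within a few
+ # percent of rounding tolerance.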
+ self.assertInEpsilon(0.5, above / total, 0.05)
+ self.assertInEpsilon(0.5, below / total, 0.06)
+
+ def test_min_max_size(self):
+ """
+ Tests that the minimum max_size is the right value.
+ """
+
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(),
+ Mock(),
+ max_size=_ExponentialBucketHistogramAggregation._min_max_size,
+ )
+ )
+
+ # The minimum and maximum normal floating point values are used here to
+ # make sure the mapping can contain the full range of values.
+ exponential_histogram_aggregation.aggregate(Mock(value=float_info.min))
+ exponential_histogram_aggregation.aggregate(Mock(value=float_info.max))
+
+ # This means the smallest valid max_size is enough to cover the
+ # full range of normal floating point values.
+ self.assertEqual(
+ len(exponential_histogram_aggregation._positive._counts),
+ exponential_histogram_aggregation._min_max_size,
+ )
+
+ def test_aggregate_collect(self):
+ """
+ Tests a repeated cycle of aggregation and collection.
+ """
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(),
+ Mock(),
+ )
+ )
+
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 0
+ )
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 0
+ )
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 0
+ )
+
+ def test_collect_results_cumulative(self):
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(
+ Mock(),
+ Mock(),
+ )
+ )
+
+ self.assertEqual(exponential_histogram_aggregation._mapping._scale, 20)
+
+ exponential_histogram_aggregation.aggregate(Measurement(2, Mock()))
+ self.assertEqual(exponential_histogram_aggregation._mapping._scale, 20)
+
+ exponential_histogram_aggregation.aggregate(Measurement(4, Mock()))
+ self.assertEqual(exponential_histogram_aggregation._mapping._scale, 7)
+
+ exponential_histogram_aggregation.aggregate(Measurement(1, Mock()))
+ self.assertEqual(exponential_histogram_aggregation._mapping._scale, 6)
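+
+ # Each downscale happens when a new value no longer fits in the 160
+ # default buckets at the current scale: 2 and 4 fit at scale 7 (128
+ # buckets per power of two), while adding 1 widens the range to two
+ # powers of two, forcing scale 6.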
+
+ collection_0 = exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, Mock()
+ )
+
+ self.assertEqual(len(collection_0.positive.bucket_counts), 160)
+
+ self.assertEqual(collection_0.count, 3)
+ self.assertEqual(collection_0.sum, 7)
+ self.assertEqual(collection_0.scale, 6)
+ self.assertEqual(collection_0.zero_count, 0)
+ self.assertEqual(
+ collection_0.positive.bucket_counts,
+ [1, *[0] * 63, 1, *[0] * 31, 1, *[0] * 63],
+ )
+ self.assertEqual(collection_0.flags, 0)
+ self.assertEqual(collection_0.min, 1)
+ self.assertEqual(collection_0.max, 4)
+
+ exponential_histogram_aggregation.aggregate(Measurement(1, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(8, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(0.5, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(0.1, Mock()))
+ exponential_histogram_aggregation.aggregate(Measurement(0.045, Mock()))
+
+ collection_1 = exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, Mock()
+ )
+
+ previous_count = collection_1.positive.bucket_counts[0]
+
+ count_counts = [[previous_count, 0]]
+
+ for count in collection_1.positive.bucket_counts:
+ if count == previous_count:
+ count_counts[-1][1] += 1
+ else:
+ previous_count = count
+ count_counts.append([previous_count, 1])
+
+ self.assertEqual(collection_1.count, 5)
+ self.assertEqual(collection_1.sum, 16.645)
+ self.assertEqual(collection_1.scale, 4)
+ self.assertEqual(collection_1.zero_count, 0)
+
+ self.assertEqual(
+ collection_1.positive.bucket_counts,
+ [
+ 1,
+ *[0] * 15,
+ 1,
+ *[0] * 47,
+ 1,
+ *[0] * 40,
+ 1,
+ *[0] * 17,
+ 1,
+ *[0] * 36,
+ ],
+ )
+ self.assertEqual(collection_1.flags, 0)
+ self.assertEqual(collection_1.min, 0.045)
+ self.assertEqual(collection_1.max, 8)
+
+ def test_merge_collect_cumulative(self):
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=4)
+ )
+
+ for value in [2, 4, 8, 16]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, 0)
+ self.assertEqual(exponential_histogram_aggregation._positive.offset, 0)
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.counts, [1, 1, 1, 1]
+ )
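+
+ # At scale 0 each bucket spans one power of two, so 2, 4, 8 and 16
+ # land in four consecutive buckets starting at offset 0.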
+
+ result = exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE,
+ 0,
+ )
+
+ for value in [1, 2, 4, 8]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(1 / value, Mock())
+ )
+
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, 0)
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.offset, -4
+ )
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.counts, [1, 1, 1, 1]
+ )
+
+ result_1 = exponential_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE,
+ 0,
+ )
+
+ self.assertEqual(result.scale, result_1.scale)
+
+ def test_merge_collect_delta(self):
+ exponential_histogram_aggregation = (
+ _ExponentialBucketHistogramAggregation(Mock(), Mock(), max_size=4)
+ )
+
+ for value in [2, 4, 8, 16]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(value, Mock())
+ )
+
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, 0)
+ self.assertEqual(exponential_histogram_aggregation._positive.offset, 0)
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.counts, [1, 1, 1, 1]
+ )
+
+ result = exponential_histogram_aggregation.collect(
+ AggregationTemporality.DELTA,
+ 0,
+ )
+
+ for value in [1, 2, 4, 8]:
+ exponential_histogram_aggregation.aggregate(
+ Measurement(1 / value, Mock())
+ )
+
+ self.assertEqual(exponential_histogram_aggregation._mapping.scale, 0)
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.offset, -4
+ )
+ self.assertEqual(
+ exponential_histogram_aggregation._positive.counts, [1, 1, 1, 1]
+ )
+
+ result_1 = exponential_histogram_aggregation.collect(
+ AggregationTemporality.DELTA,
+ 0,
+ )
+
+ self.assertEqual(result.scale, result_1.scale)
diff --git a/opentelemetry-sdk/tests/metrics/exponential_histogram/test_logarithm_mapping.py b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_logarithm_mapping.py
new file mode 100644
index 0000000000..1fd18845bb
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/exponential_histogram/test_logarithm_mapping.py
@@ -0,0 +1,241 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import sqrt
+from unittest import TestCase
+from unittest.mock import patch
+
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.errors import (
+ MappingOverflowError,
+ MappingUnderflowError,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.ieee_754 import (
+ MAX_NORMAL_EXPONENT,
+ MAX_NORMAL_VALUE,
+ MIN_NORMAL_EXPONENT,
+ MIN_NORMAL_VALUE,
+)
+from opentelemetry.sdk.metrics._internal.exponential_histogram.mapping.logarithm_mapping import (
+ LogarithmMapping,
+)
+
+
+def left_boundary(scale: int, index: int) -> float:
+
+ # This is implemented in this way to avoid using a third-party bigfloat
+ # package. The Go implementation uses a bigfloat package that is part of
+ # their standard library. The assumption here is that the smallest float
+ # available in Python is 2 ** -1022 (from sys.float_info.min).
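+ # left_boundary(scale, index) computes 2 ** (index / 2 ** scale), the
+ # lower boundary of bucket ``index`` at that scale; for example,
+ # left_boundary(1, 1) == 2 ** 0.5.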
+ while scale > 0:
+ if index < -1022:
+ index /= 2
+ scale -= 1
+ else:
+ break
+
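+ # The loop halves index and decrements scale together, which keeps
+ # index / 2 ** scale constant while bringing the exponent back into
+ # the range where 2 ** index does not underflow.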
+ result = 2**index
+
+ for _ in range(scale, 0, -1):
+ result = sqrt(result)
+
+ return result
+
+
+class TestLogarithmMapping(TestCase):
+ def assertInEpsilon(self, first, second, epsilon):
+ self.assertLessEqual(first, (second * (1 + epsilon)))
+ self.assertGreaterEqual(first, (second * (1 - epsilon)))
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal.exponential_histogram.mapping."
+ "logarithm_mapping.LogarithmMapping._mappings",
+ new={},
+ )
+ @patch(
+ "opentelemetry.sdk.metrics._internal.exponential_histogram.mapping."
+ "logarithm_mapping.LogarithmMapping._init"
+ )
+ def test_init_called_once(self, mock_init):
+
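+ # LogarithmMapping caches one instance per scale in _mappings, so
+ # constructing the same scale twice must run _init only once.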
+ LogarithmMapping(3)
+ LogarithmMapping(3)
+
+ mock_init.assert_called_once()
+
+ def test_invalid_scale(self):
+ with self.assertRaises(Exception):
+ LogarithmMapping(-1)
+
+ def test_logarithm_mapping_scale_one(self):
+
+ # The exponentiation factor for this logarithm exponent histogram
+ # mapping is sqrt(2): scale 1 means one division between every
+ # power of two, so each upper boundary is sqrt(2) times the lower
+ # boundary.
+ logarithm_exponent_histogram_mapping = LogarithmMapping(1)
+
+ self.assertEqual(logarithm_exponent_histogram_mapping.scale, 1)
+
+ # Note: do not test exact boundaries, with the exception of
+ # 1, because we expect errors in those cases (e.g.,
+ # map_to_index(8) returns 5, an off-by-one error). See the
+ # following test.
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(15), 7
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(9), 6
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(7), 5
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(5), 4
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(3), 3
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(2.5), 2
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(1.5), 1
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(1.2), 0
+ )
+ # This one is actually an exact test
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(1), -1
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(0.75), -1
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(0.55), -2
+ )
+ self.assertEqual(
+ logarithm_exponent_histogram_mapping.map_to_index(0.45), -3
+ )
+
+ def test_logarithm_boundary(self):
+
+ for scale in [1, 2, 3, 4, 10, 15]:
+ logarithm_exponent_histogram_mapping = LogarithmMapping(scale)
+
+ for index in [-100, -10, -1, 0, 1, 10, 100]:
+
+ lower_boundary = (
+ logarithm_exponent_histogram_mapping.get_lower_boundary(
+ index
+ )
+ )
+
+ mapped_index = (
+ logarithm_exponent_histogram_mapping.map_to_index(
+ lower_boundary
+ )
+ )
+
+ self.assertLessEqual(index - 1, mapped_index)
+ self.assertGreaterEqual(index, mapped_index)
+
+ self.assertInEpsilon(
+ lower_boundary, left_boundary(scale, index), 1e-9
+ )
+
+ def test_logarithm_index_max(self):
+
+ for scale in range(
+ LogarithmMapping._min_scale, LogarithmMapping._max_scale + 1
+ ):
+ logarithm_mapping = LogarithmMapping(scale)
+
+ index = logarithm_mapping.map_to_index(MAX_NORMAL_VALUE)
+
+ max_index = ((MAX_NORMAL_EXPONENT + 1) << scale) - 1
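+
+ # At scale s there are 2**s buckets per power of two, so
+ # max_index is the last bucket whose lower boundary is below
+ # 2**(MAX_NORMAL_EXPONENT + 1).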
+
+ # We do not check that max_index is below a maximum integer
+ # value because Python integers have arbitrary precision.
+
+ self.assertEqual(index, max_index)
+
+ boundary = logarithm_mapping.get_lower_boundary(index)
+
+ base = logarithm_mapping.get_lower_boundary(1)
+
+ self.assertLess(boundary, MAX_NORMAL_VALUE)
+
+ self.assertInEpsilon(
+ (MAX_NORMAL_VALUE - boundary) / boundary, base - 1, 1e-6
+ )
+
+ with self.assertRaises(MappingOverflowError):
+ logarithm_mapping.get_lower_boundary(index + 1)
+
+ with self.assertRaises(MappingOverflowError):
+ logarithm_mapping.get_lower_boundary(index + 2)
+
+ def test_logarithm_index_min(self):
+ for scale in range(
+ LogarithmMapping._min_scale, LogarithmMapping._max_scale + 1
+ ):
+ logarithm_mapping = LogarithmMapping(scale)
+
+ min_index = logarithm_mapping.map_to_index(MIN_NORMAL_VALUE)
+
+ correct_min_index = (MIN_NORMAL_EXPONENT << scale) - 1
+ self.assertEqual(min_index, correct_min_index)
+
+ correct_mapped = left_boundary(scale, correct_min_index)
+ self.assertLess(correct_mapped, MIN_NORMAL_VALUE)
+
+ correct_mapped_upper = left_boundary(scale, correct_min_index + 1)
+ self.assertEqual(correct_mapped_upper, MIN_NORMAL_VALUE)
+
+ mapped = logarithm_mapping.get_lower_boundary(min_index + 1)
+
+ self.assertInEpsilon(mapped, MIN_NORMAL_VALUE, 1e-6)
+
+ self.assertEqual(
+ logarithm_mapping.map_to_index(MIN_NORMAL_VALUE / 2),
+ correct_min_index,
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(MIN_NORMAL_VALUE / 3),
+ correct_min_index,
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(MIN_NORMAL_VALUE / 100),
+ correct_min_index,
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(2**-1050), correct_min_index
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(2**-1073), correct_min_index
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(1.1 * 2**-1073),
+ correct_min_index,
+ )
+ self.assertEqual(
+ logarithm_mapping.map_to_index(2**-1074), correct_min_index
+ )
+
+ mapped_lower = logarithm_mapping.get_lower_boundary(min_index)
+ self.assertInEpsilon(correct_mapped, mapped_lower, 1e-6)
+
+ with self.assertRaises(MappingUnderflowError):
+ logarithm_mapping.get_lower_boundary(min_index - 1)
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_console_exporter.py b/opentelemetry-sdk/tests/metrics/integration_test/test_console_exporter.py
new file mode 100644
index 0000000000..1b3283717a
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_console_exporter.py
@@ -0,0 +1,90 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from io import StringIO
+from json import loads
+from unittest import TestCase
+
+from opentelemetry.metrics import get_meter, set_meter_provider
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ ConsoleMetricExporter,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.test.globals_test import reset_metrics_globals
+
+
+class TestConsoleExporter(TestCase):
+ def setUp(self):
+ reset_metrics_globals()
+
+ def tearDown(self):
+ reset_metrics_globals()
+
+ def test_console_exporter(self):
+
+ output = StringIO()
+ exporter = ConsoleMetricExporter(out=output)
+ reader = PeriodicExportingMetricReader(
+ exporter, export_interval_millis=100
+ )
+ provider = MeterProvider(metric_readers=[reader])
+ set_meter_provider(provider)
+ meter = get_meter(__name__)
+ counter = meter.create_counter(
+ "name", description="description", unit="unit"
+ )
+ counter.add(1, attributes={"a": "b"})
+ provider.shutdown()
+
+ output.seek(0)
+ result_0 = loads("".join(output.readlines()))
+
+ self.assertGreater(len(result_0), 0)
+
+ metrics = result_0["resource_metrics"][0]["scope_metrics"][0]
+
+ self.assertEqual(metrics["scope"]["name"], "test_console_exporter")
+
+ metrics = metrics["metrics"][0]
+
+ self.assertEqual(metrics["name"], "name")
+ self.assertEqual(metrics["description"], "description")
+ self.assertEqual(metrics["unit"], "unit")
+
+ metrics = metrics["data"]
+
+ self.assertEqual(metrics["aggregation_temporality"], 2)
+ self.assertTrue(metrics["is_monotonic"])
+
+ metrics = metrics["data_points"][0]
+
+ self.assertEqual(metrics["attributes"], {"a": "b"})
+ self.assertEqual(metrics["value"], 1)
+
+ def test_console_exporter_no_export(self):
+
+ output = StringIO()
+ exporter = ConsoleMetricExporter(out=output)
+ reader = PeriodicExportingMetricReader(
+ exporter, export_interval_millis=100
+ )
+ provider = MeterProvider(metric_readers=[reader])
+ provider.shutdown()
+
+ output.seek(0)
+ actual = "".join(output.readlines())
+ expected = ""
+
+ self.assertEqual(actual, expected)
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_cpu_time.py b/opentelemetry-sdk/tests/metrics/integration_test/test_cpu_time.py
new file mode 100644
index 0000000000..7b440c0332
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_cpu_time.py
@@ -0,0 +1,271 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+import io
+from typing import Generator, Iterable, List
+from unittest import TestCase
+
+from opentelemetry.metrics import CallbackOptions, Instrument, Observation
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+
+# FIXME Test that the instrument methods can safely be called concurrently.
+
+
+class TestCpuTimeIntegration(TestCase):
+ """Integration test of scraping CPU time from proc stat with an observable
+ counter"""
+
+ procstat_str = """\
+cpu 8549517 4919096 9165935 1430260740 1641349 0 1646147 623279 0 0
+cpu0 615029 317746 594601 89126459 129629 0 834346 42137 0 0
+cpu1 588232 349185 640492 89156411 124485 0 241004 41862 0 0
+intr 4370168813 38 9 0 0 1639 0 0 0 0 0 2865202 0 152 0 0 0 0 0 0 0 0 0 0 0 0 7236812 5966240 4501046 6467792 7289114 6048205 5299600 5178254 4642580 6826812 6880917 6230308 6307699 4699637 6119330 4905094 5644039 4700633 10539029 5365438 6086908 2227906 5094323 9685701 10137610 7739951 7143508 8123281 4968458 5683103 9890878 4466603 0 0 0 8929628 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ctxt 6877594077
+btime 1631501040
+processes 2557351
+procs_running 2
+procs_blocked 0
+softirq 1644603067 0 166540056 208 309152755 8936439 0 1354908 935642970 13 222975718\n"""
+
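+ # Each cpuN line lists cumulative jiffies (1/100 of a second on most
+ # systems) per CPU state; the callbacks below divide by 100 to convert
+ # them to seconds.
+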
+ @staticmethod
+ def create_measurements_expected(
+ instrument: Instrument,
+ ) -> List[Measurement]:
+ return [
+ Measurement(
+ 6150.29,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "user"},
+ ),
+ Measurement(
+ 3177.46,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "nice"},
+ ),
+ Measurement(
+ 5946.01,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "system"},
+ ),
+ Measurement(
+ 891264.59,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "idle"},
+ ),
+ Measurement(
+ 1296.29,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "iowait"},
+ ),
+ Measurement(
+ 0.0,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "irq"},
+ ),
+ Measurement(
+ 8343.46,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "softirq"},
+ ),
+ Measurement(
+ 421.37,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "guest"},
+ ),
+ Measurement(
+ 0,
+ instrument=instrument,
+ attributes={"cpu": "cpu0", "state": "guest_nice"},
+ ),
+ Measurement(
+ 5882.32,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "user"},
+ ),
+ Measurement(
+ 3491.85,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "nice"},
+ ),
+ Measurement(
+ 6404.92,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "system"},
+ ),
+ Measurement(
+ 891564.11,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "idle"},
+ ),
+ Measurement(
+ 1244.85,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "iowait"},
+ ),
+ Measurement(
+ 0,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "irq"},
+ ),
+ Measurement(
+ 2410.04,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "softirq"},
+ ),
+ Measurement(
+ 418.62,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "guest"},
+ ),
+ Measurement(
+ 0,
+ instrument=instrument,
+ attributes={"cpu": "cpu1", "state": "guest_nice"},
+ ),
+ ]
+
+ def test_cpu_time_callback(self):
+ def cpu_time_callback(
+ options: CallbackOptions,
+ ) -> Iterable[Observation]:
+ procstat = io.StringIO(self.procstat_str)
+ procstat.readline() # skip the first line
+ for line in procstat:
+ if not line.startswith("cpu"):
+ break
+ cpu, *states = line.split()
+ yield Observation(
+ int(states[0]) / 100, {"cpu": cpu, "state": "user"}
+ )
+ yield Observation(
+ int(states[1]) / 100, {"cpu": cpu, "state": "nice"}
+ )
+ yield Observation(
+ int(states[2]) / 100, {"cpu": cpu, "state": "system"}
+ )
+ yield Observation(
+ int(states[3]) / 100, {"cpu": cpu, "state": "idle"}
+ )
+ yield Observation(
+ int(states[4]) / 100, {"cpu": cpu, "state": "iowait"}
+ )
+ yield Observation(
+ int(states[5]) / 100, {"cpu": cpu, "state": "irq"}
+ )
+ yield Observation(
+ int(states[6]) / 100, {"cpu": cpu, "state": "softirq"}
+ )
+ yield Observation(
+ int(states[7]) / 100, {"cpu": cpu, "state": "guest"}
+ )
+ yield Observation(
+ int(states[8]) / 100, {"cpu": cpu, "state": "guest_nice"}
+ )
+
+ meter = MeterProvider().get_meter("name")
+ observable_counter = meter.create_observable_counter(
+ "system.cpu.time",
+ callbacks=[cpu_time_callback],
+ unit="s",
+ description="CPU time",
+ )
+ measurements = list(observable_counter.callback(CallbackOptions()))
+ self.assertEqual(
+ measurements, self.create_measurements_expected(observable_counter)
+ )
+
+ def test_cpu_time_generator(self):
+ def cpu_time_generator() -> Generator[
+ Iterable[Observation], CallbackOptions, None
+ ]:
+ options = yield
+ while True:
+ self.assertIsInstance(options, CallbackOptions)
+ measurements = []
+ procstat = io.StringIO(self.procstat_str)
+ procstat.readline() # skip the first line
+ for line in procstat:
+ if not line.startswith("cpu"):
+ break
+ cpu, *states = line.split()
+ measurements.append(
+ Observation(
+ int(states[0]) / 100,
+ {"cpu": cpu, "state": "user"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[1]) / 100,
+ {"cpu": cpu, "state": "nice"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[2]) / 100,
+ {"cpu": cpu, "state": "system"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[3]) / 100,
+ {"cpu": cpu, "state": "idle"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[4]) / 100,
+ {"cpu": cpu, "state": "iowait"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[5]) / 100, {"cpu": cpu, "state": "irq"}
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[6]) / 100,
+ {"cpu": cpu, "state": "softirq"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[7]) / 100,
+ {"cpu": cpu, "state": "guest"},
+ )
+ )
+ measurements.append(
+ Observation(
+ int(states[8]) / 100,
+ {"cpu": cpu, "state": "guest_nice"},
+ )
+ )
+ options = yield measurements
+
+ meter = MeterProvider().get_meter("name")
+ observable_counter = meter.create_observable_counter(
+ "system.cpu.time",
+ callbacks=[cpu_time_generator()],
+ unit="s",
+ description="CPU time",
+ )
+ measurements = list(observable_counter.callback(CallbackOptions()))
+ self.assertEqual(
+ measurements, self.create_measurements_expected(observable_counter)
+ )
+
+ # Show full assertion diffs for the long measurement lists.
+ maxDiff = None
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_disable_default_views.py b/opentelemetry-sdk/tests/metrics/integration_test/test_disable_default_views.py
new file mode 100644
index 0000000000..d022456415
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_disable_default_views.py
@@ -0,0 +1,62 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import InMemoryMetricReader
+from opentelemetry.sdk.metrics.view import DropAggregation, View
+
+
+class TestDisableDefaultViews(TestCase):
+ def test_disable_default_views(self):
+ reader = InMemoryMetricReader()
+ meter_provider = MeterProvider(
+ metric_readers=[reader],
+ views=[View(instrument_name="*", aggregation=DropAggregation())],
+ )
+ meter = meter_provider.get_meter("testmeter")
+ counter = meter.create_counter("testcounter")
+ counter.add(10, {"label": "value1"})
+ counter.add(10, {"label": "value2"})
+ counter.add(10, {"label": "value3"})
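+ # With every instrument matched by the DropAggregation view, no
+ # metrics are produced and the reader returns None.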
+ self.assertIsNone(reader.get_metrics_data())
+
+ def test_disable_default_views_add_custom(self):
+ reader = InMemoryMetricReader()
+ meter_provider = MeterProvider(
+ metric_readers=[reader],
+ views=[
+ View(instrument_name="*", aggregation=DropAggregation()),
+ View(instrument_name="testhist"),
+ ],
+ )
+ meter = meter_provider.get_meter("testmeter")
+ counter = meter.create_counter("testcounter")
+ histogram = meter.create_histogram("testhist")
+ counter.add(10, {"label": "value1"})
+ counter.add(10, {"label": "value2"})
+ counter.add(10, {"label": "value3"})
+ histogram.record(12, {"label": "value"})
+
+ metrics = reader.get_metrics_data()
+ self.assertEqual(len(metrics.resource_metrics), 1)
+ self.assertEqual(len(metrics.resource_metrics[0].scope_metrics), 1)
+ self.assertEqual(
+ len(metrics.resource_metrics[0].scope_metrics[0].metrics), 1
+ )
+ self.assertEqual(
+ metrics.resource_metrics[0].scope_metrics[0].metrics[0].name,
+ "testhist",
+ )
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_exporter_concurrency.py b/opentelemetry-sdk/tests/metrics/integration_test/test_exporter_concurrency.py
new file mode 100644
index 0000000000..bbc67eac30
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_exporter_concurrency.py
@@ -0,0 +1,119 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+from threading import Lock
+
+from opentelemetry.metrics import CallbackOptions, Observation
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ MetricExporter,
+ MetricExportResult,
+ MetricsData,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase
+
+
+class MaxCountExporter(MetricExporter):
+ def __init__(self) -> None:
+ super().__init__(None, None)
+ self._lock = Lock()
+
+ # the number of threads inside of export()
+ self.count_in_export = 0
+
+ # the total count of calls to export()
+ self.export_count = 0
+
+ # the maximum number of threads in export() ever
+ self.max_count_in_export = 0
+
+ def export(
+ self,
+ metrics_data: MetricsData,
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ with self._lock:
+ self.export_count += 1
+ self.count_in_export += 1
+
+ # yield to other threads
+ time.sleep(0)
+
+ with self._lock:
+ self.max_count_in_export = max(
+ self.max_count_in_export, self.count_in_export
+ )
+ self.count_in_export -= 1
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ return True
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
+
+class TestExporterConcurrency(ConcurrencyTestBase):
+ """
+ Tests the requirement that:
+
+ > `Export` will never be called concurrently for the same exporter instance. `Export` can
+ > be called again only after the current call returns.
+
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#exportbatch
+
+ This test also verifies that a thread that calls the
+ ``MetricReader.collect`` method using an asynchronous instrument is able
+ to perform two actions in the same thread lock space (without being
+ interrupted by another thread):
+
+ 1. Consume the measurement produced by the callback associated to the
+ asynchronous instrument.
+ 2. Export the measurement mentioned in the step above.
+ """
+
+ def test_exporter_not_called_concurrently(self):
+ exporter = MaxCountExporter()
+ reader = PeriodicExportingMetricReader(
+ exporter=exporter,
+ export_interval_millis=100_000,
+ )
+ meter_provider = MeterProvider(metric_readers=[reader])
+
+ counter_cb_counter = 0
+
+ def counter_cb(options: CallbackOptions):
+ nonlocal counter_cb_counter
+ counter_cb_counter += 1
+ yield Observation(2)
+
+ meter_provider.get_meter(__name__).create_observable_counter(
+ "testcounter", callbacks=[counter_cb]
+ )
+
+ # Call collect() from many threads to try to enter export() concurrently.
+ def test_many_threads():
+ reader.collect()
+
+ self.run_with_many_threads(test_many_threads, num_threads=100)
+
+ self.assertEqual(counter_cb_counter, 100)
+ # no thread should be in export() now
+ self.assertEqual(exporter.count_in_export, 0)
+ # should be one call for each thread
+ self.assertEqual(exporter.export_count, 100)
+ # should never have been more than one concurrent call
+ self.assertEqual(exporter.max_count_in_export, 1)
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_histogram_export.py b/opentelemetry-sdk/tests/metrics/integration_test/test_histogram_export.py
new file mode 100644
index 0000000000..81d419819a
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_histogram_export.py
@@ -0,0 +1,82 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import InMemoryMetricReader
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+
+
+class TestHistogramExport(TestCase):
+ def test_histogram_counter_collection(self):
+
+ in_memory_metric_reader = InMemoryMetricReader()
+
+ provider = MeterProvider(
+ resource=Resource.create({SERVICE_NAME: "otel-test"}),
+ metric_readers=[in_memory_metric_reader],
+ )
+
+ meter = provider.get_meter("my-meter")
+
+ histogram = meter.create_histogram("my_histogram")
+ counter = meter.create_counter("my_counter")
+ histogram.record(5, {"attribute": "value"})
+ counter.add(1, {"attribute": "value_counter"})
+
+ metric_data = in_memory_metric_reader.get_metrics_data()
+
+ self.assertEqual(
+ len(metric_data.resource_metrics[0].scope_metrics[0].metrics), 2
+ )
+
+ self.assertEqual(
+ (
+ metric_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .bucket_counts
+ ),
+ (0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
+ )
+ self.assertEqual(
+ (
+ metric_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points[0]
+ .value
+ ),
+ 1,
+ )
+
+ metric_data = in_memory_metric_reader.get_metrics_data()
+
+ # FIXME ExplicitBucketHistogramAggregation is resetting counts to zero
+ # even if aggregation temporality is cumulative.
+ self.assertEqual(
+ len(metric_data.resource_metrics[0].scope_metrics[0].metrics), 1
+ )
+ self.assertEqual(
+ (
+ metric_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .value
+ ),
+ 1,
+ )
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_sum_aggregation.py b/opentelemetry-sdk/tests/metrics/integration_test/test_sum_aggregation.py
new file mode 100644
index 0000000000..708b44f5fe
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_sum_aggregation.py
@@ -0,0 +1,443 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from itertools import count
+from logging import ERROR
+from platform import system
+from unittest import TestCase
+
+from pytest import mark
+
+from opentelemetry.metrics import Observation
+from opentelemetry.sdk.metrics import Counter, MeterProvider, ObservableCounter
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ InMemoryMetricReader,
+)
+from opentelemetry.sdk.metrics.view import SumAggregation
+
+
+class TestSumAggregation(TestCase):
+ @mark.skipif(
+ system() != "Linux",
+ reason=(
+ "Tests fail because Windows time_ns resolution is too low so "
+ "two different time measurements may end up having the exact same"
+ "value."
+ ),
+ )
+ def test_asynchronous_delta_temporality(self):
+
+ eight_multiple_generator = count(start=8, step=8)
+
+ counter = 0
+
+ def observable_counter_callback(callback_options):
+ nonlocal counter
+ counter += 1
+
+ if counter < 11:
+ yield
+
+ elif counter < 21:
+ yield Observation(next(eight_multiple_generator))
+
+ else:
+ yield
+
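+ # Yielding None instead of an Observation makes the SDK log an
+ # error and produce no data point; the assertLogs blocks below rely
+ # on this for the first and last ten collections.
+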
+ aggregation = SumAggregation()
+
+ reader = InMemoryMetricReader(
+ preferred_aggregation={ObservableCounter: aggregation},
+ preferred_temporality={
+ ObservableCounter: AggregationTemporality.DELTA
+ },
+ )
+
+ provider = MeterProvider(metric_readers=[reader])
+ meter = provider.get_meter("name", "version")
+
+ meter.create_observable_counter(
+ "observable_counter", [observable_counter_callback]
+ )
+
+ results = []
+
+ for _ in range(10):
+ with self.assertLogs(level=ERROR):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 10)
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ results = []
+
+ for _ in range(10):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 20)
+
+ previous_time_unix_nano = (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .time_unix_nano
+ )
+
+ self.assertEqual(
+ (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .value
+ ),
+ 8,
+ )
+
+ self.assertLess(
+ (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .start_time_unix_nano
+ ),
+ previous_time_unix_nano,
+ )
+
+ for metrics_data in results[1:]:
+
+ metric_data = (
+ metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ )
+
+ self.assertEqual(
+ previous_time_unix_nano, metric_data.start_time_unix_nano
+ )
+ previous_time_unix_nano = metric_data.time_unix_nano
+ self.assertEqual(metric_data.value, 8)
+ self.assertLess(
+ metric_data.start_time_unix_nano, metric_data.time_unix_nano
+ )
+
+ results = []
+
+ for _ in range(10):
+ with self.assertLogs(level=ERROR):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 30)
+
+ provider.shutdown()
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ @mark.skipif(
+ system() != "Linux",
+ reason=(
+ "Tests fail because Windows time_ns resolution is too low so "
+ "two different time measurements may end up having the exact same"
+ "value."
+ ),
+ )
+ def test_asynchronous_cumulative_temporality(self):
+
+ eight_multiple_generator = count(start=8, step=8)
+
+ counter = 0
+
+ def observable_counter_callback(callback_options):
+ nonlocal counter
+ counter += 1
+
+ if counter < 11:
+ yield
+
+ elif counter < 21:
+ yield Observation(next(eight_multiple_generator))
+
+ else:
+ yield
+
+ aggregation = SumAggregation()
+
+ reader = InMemoryMetricReader(
+ preferred_aggregation={ObservableCounter: aggregation},
+ preferred_temporality={
+ ObservableCounter: AggregationTemporality.CUMULATIVE
+ },
+ )
+
+ provider = MeterProvider(metric_readers=[reader])
+ meter = provider.get_meter("name", "version")
+
+ meter.create_observable_counter(
+ "observable_counter", [observable_counter_callback]
+ )
+
+ results = []
+
+ for _ in range(10):
+ with self.assertLogs(level=ERROR):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 10)
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ results = []
+
+ for _ in range(10):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 20)
+
+ start_time_unix_nano = (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .start_time_unix_nano
+ )
+
+ for index, metrics_data in enumerate(results):
+
+ metric_data = (
+ metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ )
+
+ self.assertEqual(
+ start_time_unix_nano, metric_data.start_time_unix_nano
+ )
+ self.assertEqual(metric_data.value, 8 * (index + 1))
+
+ results = []
+
+ for _ in range(10):
+ with self.assertLogs(level=ERROR):
+ results.append(reader.get_metrics_data())
+
+ self.assertEqual(counter, 30)
+
+ provider.shutdown()
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ @mark.skipif(
+ system() != "Linux",
+ reason=(
+ "Tests fail because Windows time_ns resolution is too low so "
+ "two different time measurements may end up having the exact same"
+ "value."
+ ),
+ )
+ def test_synchronous_delta_temporality(self):
+
+ aggregation = SumAggregation()
+
+ reader = InMemoryMetricReader(
+ preferred_aggregation={Counter: aggregation},
+ preferred_temporality={Counter: AggregationTemporality.DELTA},
+ )
+
+ provider = MeterProvider(metric_readers=[reader])
+ meter = provider.get_meter("name", "version")
+
+ counter = meter.create_counter("counter")
+
+ results = []
+
+ for _ in range(10):
+
+ results.append(reader.get_metrics_data())
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ results = []
+
+ for _ in range(10):
+ counter.add(8)
+ results.append(reader.get_metrics_data())
+
+ previous_time_unix_nano = (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .time_unix_nano
+ )
+
+ self.assertEqual(
+ (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .value
+ ),
+ 8,
+ )
+
+ self.assertLess(
+ (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .start_time_unix_nano
+ ),
+ previous_time_unix_nano,
+ )
+
+ for metrics_data in results[1:]:
+
+ metric_data = (
+ metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ )
+
+ self.assertEqual(
+ previous_time_unix_nano, metric_data.start_time_unix_nano
+ )
+ previous_time_unix_nano = metric_data.time_unix_nano
+ self.assertEqual(metric_data.value, 8)
+ self.assertLess(
+ metric_data.start_time_unix_nano, metric_data.time_unix_nano
+ )
+
+ results = []
+
+ for _ in range(10):
+
+ results.append(reader.get_metrics_data())
+
+ provider.shutdown()
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ @mark.skipif(
+ system() != "Linux",
+ reason=(
+ "Tests fail because Windows time_ns resolution is too low so "
+ "two different time measurements may end up having the exact same"
+ "value."
+ ),
+ )
+ def test_synchronous_cumulative_temporality(self):
+
+ aggregation = SumAggregation()
+
+ reader = InMemoryMetricReader(
+ preferred_aggregation={Counter: aggregation},
+ preferred_temporality={Counter: AggregationTemporality.CUMULATIVE},
+ )
+
+ provider = MeterProvider(metric_readers=[reader])
+ meter = provider.get_meter("name", "version")
+
+ counter = meter.create_counter("counter")
+
+ results = []
+
+ for _ in range(10):
+
+ results.append(reader.get_metrics_data())
+
+ for metrics_data in results:
+ self.assertIsNone(metrics_data)
+
+ results = []
+
+ for _ in range(10):
+
+ counter.add(8)
+ results.append(reader.get_metrics_data())
+
+ start_time_unix_nano = (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .start_time_unix_nano
+ )
+
+ for index, metrics_data in enumerate(results):
+
+ metric_data = (
+ metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ )
+
+ self.assertEqual(
+ start_time_unix_nano, metric_data.start_time_unix_nano
+ )
+ self.assertEqual(metric_data.value, 8 * (index + 1))
+
+ results = []
+
+ for _ in range(10):
+
+ results.append(reader.get_metrics_data())
+
+ provider.shutdown()
+
+ start_time_unix_nano = (
+ results[0]
+ .resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ .start_time_unix_nano
+ )
+
+ for metrics_data in results:
+
+ metric_data = (
+ metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ )
+
+ self.assertEqual(
+ start_time_unix_nano, metric_data.start_time_unix_nano
+ )
+ self.assertEqual(metric_data.value, 80)
diff --git a/opentelemetry-sdk/tests/metrics/integration_test/test_time_align.py b/opentelemetry-sdk/tests/metrics/integration_test/test_time_align.py
new file mode 100644
index 0000000000..ad34f5622f
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/integration_test/test_time_align.py
@@ -0,0 +1,289 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from platform import system
+from time import sleep
+from unittest import TestCase
+
+from pytest import mark
+
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ InMemoryMetricReader,
+)
+
+
+class TestTimeAlign(TestCase):
+ def test_time_align_cumulative(self):
+ reader = InMemoryMetricReader()
+ meter_provider = MeterProvider(metric_readers=[reader])
+
+ meter = meter_provider.get_meter("testmeter")
+
+ counter_0 = meter.create_counter("counter_0")
+ counter_1 = meter.create_counter("counter_1")
+
+ counter_0.add(10, {"label": "value1"})
+ counter_0.add(10, {"label": "value2"})
+ sleep(0.5)
+ counter_1.add(10, {"label": "value1"})
+ counter_1.add(10, {"label": "value2"})
+
+ metrics = reader.get_metrics_data()
+
+ data_points_0_0 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )
+ data_points_0_1 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].start_time_unix_nano,
+ data_points_0_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].start_time_unix_nano,
+ data_points_0_1[1].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_0_0[1].start_time_unix_nano,
+ data_points_0_1[0].start_time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].time_unix_nano,
+ data_points_0_0[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].time_unix_nano,
+ data_points_0_1[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_0[1].time_unix_nano,
+ data_points_0_1[0].time_unix_nano,
+ )
+
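+        # All data points produced by one collection share the same
+        # time_unix_nano (the collection timestamp), while start times are
+        # set per instrument at first recording: counter_0 and counter_1
+        # were first recorded about 0.5 s apart, so their start times differ.
+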
+ counter_0.add(10, {"label": "value1"})
+ counter_0.add(10, {"label": "value2"})
+ sleep(0.5)
+ counter_1.add(10, {"label": "value1"})
+ counter_1.add(10, {"label": "value2"})
+
+ metrics = reader.get_metrics_data()
+
+ data_points_1_0 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )
+ data_points_1_1 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points
+ )
+
+ self.assertEqual(
+ data_points_1_0[0].start_time_unix_nano,
+ data_points_1_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_1[0].start_time_unix_nano,
+ data_points_1_1[1].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_1_0[1].start_time_unix_nano,
+ data_points_1_1[0].start_time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_1_0[0].time_unix_nano,
+ data_points_1_0[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_1[0].time_unix_nano,
+ data_points_1_1[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_0[1].time_unix_nano,
+ data_points_1_1[0].time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].start_time_unix_nano,
+ data_points_1_0[0].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_0[1].start_time_unix_nano,
+ data_points_1_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].start_time_unix_nano,
+ data_points_1_1[0].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[1].start_time_unix_nano,
+ data_points_1_1[1].start_time_unix_nano,
+ )
+
+ @mark.skipif(
+ system() != "Linux", reason="test failing in CI when run in Windows"
+ )
+ def test_time_align_delta(self):
+ reader = InMemoryMetricReader(
+ preferred_temporality={Counter: AggregationTemporality.DELTA}
+ )
+ meter_provider = MeterProvider(metric_readers=[reader])
+
+ meter = meter_provider.get_meter("testmeter")
+
+ counter_0 = meter.create_counter("counter_0")
+ counter_1 = meter.create_counter("counter_1")
+
+ counter_0.add(10, {"label": "value1"})
+ counter_0.add(10, {"label": "value2"})
+ sleep(0.5)
+ counter_1.add(10, {"label": "value1"})
+ counter_1.add(10, {"label": "value2"})
+
+ metrics = reader.get_metrics_data()
+
+ data_points_0_0 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )
+ data_points_0_1 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].start_time_unix_nano,
+ data_points_0_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].start_time_unix_nano,
+ data_points_0_1[1].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_0_0[1].start_time_unix_nano,
+ data_points_0_1[0].start_time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].time_unix_nano,
+ data_points_0_0[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].time_unix_nano,
+ data_points_0_1[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_0[1].time_unix_nano,
+ data_points_0_1[0].time_unix_nano,
+ )
+
+ counter_0.add(10, {"label": "value1"})
+ counter_0.add(10, {"label": "value2"})
+ sleep(0.5)
+ counter_1.add(10, {"label": "value1"})
+ counter_1.add(10, {"label": "value2"})
+
+ metrics = reader.get_metrics_data()
+
+ data_points_1_0 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )
+ data_points_1_1 = list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points
+ )
+
+ self.assertEqual(
+ data_points_1_0[0].start_time_unix_nano,
+ data_points_1_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_1[0].start_time_unix_nano,
+ data_points_1_1[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_0[1].start_time_unix_nano,
+ data_points_1_1[0].start_time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_1_0[0].time_unix_nano,
+ data_points_1_0[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_1[0].time_unix_nano,
+ data_points_1_1[1].time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_1_0[1].time_unix_nano,
+ data_points_1_1[0].time_unix_nano,
+ )
+
+ self.assertNotEqual(
+ data_points_0_0[0].start_time_unix_nano,
+ data_points_1_0[0].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_0_0[1].start_time_unix_nano,
+ data_points_1_0[1].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_0_1[0].start_time_unix_nano,
+ data_points_1_1[0].start_time_unix_nano,
+ )
+ self.assertNotEqual(
+ data_points_0_1[1].start_time_unix_nano,
+ data_points_1_1[1].start_time_unix_nano,
+ )
+
+ self.assertEqual(
+ data_points_0_0[0].time_unix_nano,
+ data_points_1_0[0].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_0[1].time_unix_nano,
+ data_points_1_0[1].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[0].time_unix_nano,
+ data_points_1_1[0].start_time_unix_nano,
+ )
+ self.assertEqual(
+ data_points_0_1[1].time_unix_nano,
+ data_points_1_1[1].start_time_unix_nano,
+ )
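+
+        # In delta mode the stream chains: each point from the second
+        # collection starts exactly where the matching point from the first
+        # collection ended (the time_unix_nano of one equals the
+        # start_time_unix_nano of the next), as the final assertions verify.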
diff --git a/opentelemetry-sdk/tests/metrics/test_aggregation.py b/opentelemetry-sdk/tests/metrics/test_aggregation.py
new file mode 100644
index 0000000000..b7cfc63cd4
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_aggregation.py
@@ -0,0 +1,547 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from math import inf
+from time import sleep
+from typing import Union
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ _ExplicitBucketHistogramAggregation,
+ _LastValueAggregation,
+ _SumAggregation,
+)
+from opentelemetry.sdk.metrics._internal.instrument import (
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableGauge,
+ _ObservableUpDownCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ NumberDataPoint,
+)
+from opentelemetry.sdk.metrics.view import (
+ DefaultAggregation,
+ ExplicitBucketHistogramAggregation,
+ LastValueAggregation,
+ SumAggregation,
+)
+from opentelemetry.util.types import Attributes
+
+
+def measurement(
+ value: Union[int, float], attributes: Attributes = None
+) -> Measurement:
+ return Measurement(value, instrument=Mock(), attributes=attributes)
+
+
+class TestSynchronousSumAggregation(TestCase):
+ def test_aggregate_delta(self):
+ """
+        `_SumAggregation` aggregates data for sum metric points
+ """
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.DELTA, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ synchronous_sum_aggregation.aggregate(measurement(2))
+ synchronous_sum_aggregation.aggregate(measurement(3))
+
+ self.assertEqual(synchronous_sum_aggregation._current_value, 6)
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.DELTA, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ synchronous_sum_aggregation.aggregate(measurement(-2))
+ synchronous_sum_aggregation.aggregate(measurement(3))
+
+ self.assertEqual(synchronous_sum_aggregation._current_value, 2)
+
+ def test_aggregate_cumulative(self):
+ """
+        `_SumAggregation` aggregates data for sum metric points
+ """
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.CUMULATIVE, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ synchronous_sum_aggregation.aggregate(measurement(2))
+ synchronous_sum_aggregation.aggregate(measurement(3))
+
+ self.assertEqual(synchronous_sum_aggregation._current_value, 6)
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.CUMULATIVE, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ synchronous_sum_aggregation.aggregate(measurement(-2))
+ synchronous_sum_aggregation.aggregate(measurement(3))
+
+ self.assertEqual(synchronous_sum_aggregation._current_value, 2)
+
+ def test_collect_delta(self):
+ """
+        `_SumAggregation` collects sum metric points
+ """
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.DELTA, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ # 1 is used here directly to simulate the instant the first
+ # collection process starts.
+ first_sum = synchronous_sum_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+
+ self.assertEqual(first_sum.value, 1)
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+        # 2 is used here directly to simulate the instant the second
+        # collection process starts.
+ second_sum = synchronous_sum_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 2
+ )
+
+ self.assertEqual(second_sum.value, 2)
+
+ self.assertEqual(
+ second_sum.start_time_unix_nano, first_sum.start_time_unix_nano
+ )
+
+ synchronous_sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.DELTA, 0
+ )
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+ # 1 is used here directly to simulate the instant the first
+ # collection process starts.
+ first_sum = synchronous_sum_aggregation.collect(
+ AggregationTemporality.DELTA, 1
+ )
+
+ self.assertEqual(first_sum.value, 1)
+
+ synchronous_sum_aggregation.aggregate(measurement(1))
+        # 2 is used here directly to simulate the instant the second
+        # collection process starts.
+ second_sum = synchronous_sum_aggregation.collect(
+ AggregationTemporality.DELTA, 2
+ )
+
+ self.assertEqual(second_sum.value, 1)
+
+ self.assertGreater(
+ second_sum.start_time_unix_nano, first_sum.start_time_unix_nano
+ )
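+
+        # Note the asymmetry: collecting a DELTA-instrument stream as
+        # CUMULATIVE keeps one fixed start time, while collecting it as
+        # DELTA advances the start time on every collection.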
+
+ def test_collect_cumulative(self):
+ """
+        `_SumAggregation` collects number data points
+ """
+
+ sum_aggregation = _SumAggregation(
+ Mock(), True, AggregationTemporality.CUMULATIVE, 0
+ )
+
+ sum_aggregation.aggregate(measurement(1))
+ first_sum = sum_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+
+ self.assertEqual(first_sum.value, 1)
+
+ # should have been reset after first collect
+ sum_aggregation.aggregate(measurement(1))
+ second_sum = sum_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+
+ self.assertEqual(second_sum.value, 1)
+
+ self.assertEqual(
+ second_sum.start_time_unix_nano, first_sum.start_time_unix_nano
+ )
+
+ # if no point seen for a whole interval, should return None
+ third_sum = sum_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+ self.assertIsNone(third_sum)
+
+
+class TestLastValueAggregation(TestCase):
+ def test_aggregate(self):
+ """
+        `_LastValueAggregation` aggregates data for gauge metric points
+ """
+
+ last_value_aggregation = _LastValueAggregation(Mock())
+
+ last_value_aggregation.aggregate(measurement(1))
+ self.assertEqual(last_value_aggregation._value, 1)
+
+ last_value_aggregation.aggregate(measurement(2))
+ self.assertEqual(last_value_aggregation._value, 2)
+
+ last_value_aggregation.aggregate(measurement(3))
+ self.assertEqual(last_value_aggregation._value, 3)
+
+ def test_collect(self):
+ """
+        `_LastValueAggregation` collects number data points
+ """
+
+ last_value_aggregation = _LastValueAggregation(Mock())
+
+ self.assertIsNone(
+ last_value_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+ )
+
+ last_value_aggregation.aggregate(measurement(1))
+ # 1 is used here directly to simulate the instant the first
+ # collection process starts.
+ first_number_data_point = last_value_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+ self.assertIsInstance(first_number_data_point, NumberDataPoint)
+
+ self.assertEqual(first_number_data_point.value, 1)
+
+ last_value_aggregation.aggregate(measurement(1))
+
+ # CI fails the last assertion without this
+ sleep(0.1)
+
+ # 2 is used here directly to simulate the instant the second
+ # collection process starts.
+ second_number_data_point = last_value_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 2
+ )
+
+ self.assertEqual(second_number_data_point.value, 1)
+
+ self.assertGreater(
+ second_number_data_point.time_unix_nano,
+ first_number_data_point.time_unix_nano,
+ )
+
+        # 3 is used here directly to simulate the instant the third
+        # collection process starts.
+ third_number_data_point = last_value_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 3
+ )
+ self.assertIsNone(third_number_data_point)
+
+
+class TestExplicitBucketHistogramAggregation(TestCase):
+ def test_aggregate(self):
+ """
+        Test `ExplicitBucketHistogramAggregation` with custom boundaries
+ """
+
+ explicit_bucket_histogram_aggregation = (
+ _ExplicitBucketHistogramAggregation(
+ Mock(), 0, boundaries=[0, 2, 4]
+ )
+ )
+
+ explicit_bucket_histogram_aggregation.aggregate(measurement(-1))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(0))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(1))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(2))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(3))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(4))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(5))
+
+ # The first bucket keeps count of values between (-inf, 0] (-1 and 0)
+ self.assertEqual(
+ explicit_bucket_histogram_aggregation._bucket_counts[0], 2
+ )
+
+ # The second bucket keeps count of values between (0, 2] (1 and 2)
+ self.assertEqual(
+ explicit_bucket_histogram_aggregation._bucket_counts[1], 2
+ )
+
+ # The third bucket keeps count of values between (2, 4] (3 and 4)
+ self.assertEqual(
+ explicit_bucket_histogram_aggregation._bucket_counts[2], 2
+ )
+
+        # The fourth bucket keeps count of values between (4, inf) (just 5)
+ self.assertEqual(
+ explicit_bucket_histogram_aggregation._bucket_counts[3], 1
+ )
+
+ histo = explicit_bucket_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+ self.assertEqual(histo.sum, 14)
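+
+        # Reading aid: with boundaries [0, 2, 4] a value v is counted in the
+        # first bucket whose upper boundary satisfies v <= boundary, falling
+        # through to the overflow bucket otherwise, so the buckets are
+        # (-inf, 0], (0, 2], (2, 4] and (4, +inf).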
+
+ def test_min_max(self):
+ """
+        `record_min_max` instructs the aggregator to record the minimum and
+ maximum value in the population
+ """
+
+ explicit_bucket_histogram_aggregation = (
+ _ExplicitBucketHistogramAggregation(Mock(), 0)
+ )
+
+ explicit_bucket_histogram_aggregation.aggregate(measurement(-1))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(2))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(7))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(8))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(9999))
+
+ self.assertEqual(explicit_bucket_histogram_aggregation._min, -1)
+ self.assertEqual(explicit_bucket_histogram_aggregation._max, 9999)
+
+ explicit_bucket_histogram_aggregation = (
+ _ExplicitBucketHistogramAggregation(
+ Mock(), 0, record_min_max=False
+ )
+ )
+
+ explicit_bucket_histogram_aggregation.aggregate(measurement(-1))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(2))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(7))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(8))
+ explicit_bucket_histogram_aggregation.aggregate(measurement(9999))
+
+ self.assertEqual(explicit_bucket_histogram_aggregation._min, inf)
+ self.assertEqual(explicit_bucket_histogram_aggregation._max, -inf)
+
+ def test_collect(self):
+ """
+        `_ExplicitBucketHistogramAggregation` collects histogram data points
+ """
+
+ explicit_bucket_histogram_aggregation = (
+ _ExplicitBucketHistogramAggregation(
+ Mock(), 0, boundaries=[0, 1, 2]
+ )
+ )
+
+ explicit_bucket_histogram_aggregation.aggregate(measurement(1))
+ # 1 is used here directly to simulate the instant the first
+ # collection process starts.
+ first_histogram = explicit_bucket_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 1
+ )
+
+ self.assertEqual(first_histogram.bucket_counts, (0, 1, 0, 0))
+ self.assertEqual(first_histogram.sum, 1)
+
+ # CI fails the last assertion without this
+ sleep(0.1)
+
+ explicit_bucket_histogram_aggregation.aggregate(measurement(1))
+ # 2 is used here directly to simulate the instant the second
+ # collection process starts.
+ second_histogram = explicit_bucket_histogram_aggregation.collect(
+ AggregationTemporality.CUMULATIVE, 2
+ )
+
+ self.assertEqual(second_histogram.bucket_counts, (0, 2, 0, 0))
+ self.assertEqual(second_histogram.sum, 2)
+
+ self.assertGreater(
+ second_histogram.time_unix_nano, first_histogram.time_unix_nano
+ )
+
+ def test_boundaries(self):
+ self.assertEqual(
+ _ExplicitBucketHistogramAggregation(Mock(), 0)._boundaries,
+ (
+ 0.0,
+ 5.0,
+ 10.0,
+ 25.0,
+ 50.0,
+ 75.0,
+ 100.0,
+ 250.0,
+ 500.0,
+ 750.0,
+ 1000.0,
+ 2500.0,
+ 5000.0,
+ 7500.0,
+ 10000.0,
+ ),
+ )
+
+
+class TestAggregationFactory(TestCase):
+ def test_sum_factory(self):
+ counter = _Counter("name", Mock(), Mock())
+ factory = SumAggregation()
+ aggregation = factory._create_aggregation(counter, Mock(), 0)
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertTrue(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.DELTA,
+ )
+ aggregation2 = factory._create_aggregation(counter, Mock(), 0)
+ self.assertNotEqual(aggregation, aggregation2)
+
+ counter = _UpDownCounter("name", Mock(), Mock())
+ factory = SumAggregation()
+ aggregation = factory._create_aggregation(counter, Mock(), 0)
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertFalse(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.DELTA,
+ )
+
+ counter = _ObservableCounter("name", Mock(), Mock(), None)
+ factory = SumAggregation()
+ aggregation = factory._create_aggregation(counter, Mock(), 0)
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertTrue(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.CUMULATIVE,
+ )
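+
+        # Synchronous counters report increments, hence DELTA instrument
+        # temporality; observable counters report running totals from their
+        # callbacks, hence CUMULATIVE.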
+
+ def test_explicit_bucket_histogram_factory(self):
+ histo = _Histogram("name", Mock(), Mock())
+ factory = ExplicitBucketHistogramAggregation(
+ boundaries=(
+ 0.0,
+ 5.0,
+ ),
+ record_min_max=False,
+ )
+ aggregation = factory._create_aggregation(histo, Mock(), 0)
+ self.assertIsInstance(aggregation, _ExplicitBucketHistogramAggregation)
+ self.assertFalse(aggregation._record_min_max)
+ self.assertEqual(aggregation._boundaries, (0.0, 5.0))
+ aggregation2 = factory._create_aggregation(histo, Mock(), 0)
+ self.assertNotEqual(aggregation, aggregation2)
+
+ def test_last_value_factory(self):
+ counter = _Counter("name", Mock(), Mock())
+ factory = LastValueAggregation()
+ aggregation = factory._create_aggregation(counter, Mock(), 0)
+ self.assertIsInstance(aggregation, _LastValueAggregation)
+ aggregation2 = factory._create_aggregation(counter, Mock(), 0)
+ self.assertNotEqual(aggregation, aggregation2)
+
+
+class TestDefaultAggregation(TestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.default_aggregation = DefaultAggregation()
+
+ def test_counter(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _Counter("name", Mock(), Mock()), Mock(), 0
+ )
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertTrue(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.DELTA,
+ )
+
+ def test_up_down_counter(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _UpDownCounter("name", Mock(), Mock()), Mock(), 0
+ )
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertFalse(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.DELTA,
+ )
+
+ def test_observable_counter(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _ObservableCounter("name", Mock(), Mock(), callbacks=[Mock()]),
+ Mock(),
+ 0,
+ )
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertTrue(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ def test_observable_up_down_counter(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _ObservableUpDownCounter(
+ "name", Mock(), Mock(), callbacks=[Mock()]
+ ),
+ Mock(),
+ 0,
+ )
+ self.assertIsInstance(aggregation, _SumAggregation)
+ self.assertFalse(aggregation._instrument_is_monotonic)
+ self.assertEqual(
+ aggregation._instrument_aggregation_temporality,
+ AggregationTemporality.CUMULATIVE,
+ )
+
+ def test_histogram(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _Histogram(
+ "name",
+ Mock(),
+ Mock(),
+ ),
+ Mock(),
+ 0,
+ )
+ self.assertIsInstance(aggregation, _ExplicitBucketHistogramAggregation)
+
+ def test_observable_gauge(self):
+
+ aggregation = self.default_aggregation._create_aggregation(
+ _ObservableGauge(
+ "name",
+ Mock(),
+ Mock(),
+ callbacks=[Mock()],
+ ),
+ Mock(),
+ 0,
+ )
+ self.assertIsInstance(aggregation, _LastValueAggregation)
diff --git a/opentelemetry-sdk/tests/metrics/test_backward_compat.py b/opentelemetry-sdk/tests/metrics/test_backward_compat.py
new file mode 100644
index 0000000000..46008554fe
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_backward_compat.py
@@ -0,0 +1,110 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The purpose of this module is to check backward compatibility with any user-implementable
+interfaces as they were originally defined. For example, changes to the MetricExporter ABC must
+be made in such a way that existing implementations (outside of this repo) continue to work
+when *called* by the SDK.
+
+This does not apply to classes that are not intended to be overridden by the user, e.g. Meter
+and the PeriodicExportingMetricReader concrete class. Those may freely be modified in a
+backward-compatible way for *callers*.
+
+Ideally, we could use mypy for this as well, but the SDK is not type checked at the moment.
+"""
+
+from typing import Iterable, Sequence
+
+from opentelemetry.metrics import CallbackOptions, Observation
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics._internal.export import InMemoryMetricReader
+from opentelemetry.sdk.metrics.export import (
+ Metric,
+ MetricExporter,
+ MetricExportResult,
+ MetricReader,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.test import TestCase
+
+
+# Do not change these classes until after major version 1
+class OrigMetricExporter(MetricExporter):
+ def export(
+ self,
+ metrics: Sequence[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ pass
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ return True
+
+
+class OrigMetricReader(MetricReader):
+ def _receive_metrics(
+ self,
+ metrics: Iterable[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ pass
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ self.collect()
+
+
+def orig_callback(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(2)
+
+
+class TestBackwardCompat(TestCase):
+ def test_metric_exporter(self):
+ exporter = OrigMetricExporter()
+ meter_provider = MeterProvider(
+ metric_readers=[PeriodicExportingMetricReader(exporter)]
+ )
+ # produce some data
+ meter_provider.get_meter("foo").create_counter("mycounter").add(12)
+ with self.assertNotRaises(Exception):
+ meter_provider.shutdown()
+
+ def test_metric_reader(self):
+ reader = OrigMetricReader()
+ meter_provider = MeterProvider(metric_readers=[reader])
+ # produce some data
+ meter_provider.get_meter("foo").create_counter("mycounter").add(12)
+ with self.assertNotRaises(Exception):
+ meter_provider.shutdown()
+
+ def test_observable_callback(self):
+ reader = InMemoryMetricReader()
+ meter_provider = MeterProvider(metric_readers=[reader])
+ # produce some data
+ meter_provider.get_meter("foo").create_counter("mycounter").add(12)
+ with self.assertNotRaises(Exception):
+ metrics_data = reader.get_metrics_data()
+
+ self.assertEqual(len(metrics_data.resource_metrics), 1)
+ self.assertEqual(
+ len(metrics_data.resource_metrics[0].scope_metrics), 1
+ )
+ self.assertEqual(
+ len(metrics_data.resource_metrics[0].scope_metrics[0].metrics), 1
+ )
diff --git a/opentelemetry-sdk/tests/metrics/test_import.py b/opentelemetry-sdk/tests/metrics/test_import.py
new file mode 100644
index 0000000000..f0302e00de
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_import.py
@@ -0,0 +1,79 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=unused-import
+
+from opentelemetry.test import TestCase
+
+
+class TestImport(TestCase):
+ def test_import_init(self):
+ """
+ Test that the metrics root module has the right symbols
+ """
+
+ with self.assertNotRaises(Exception):
+ from opentelemetry.sdk.metrics import ( # noqa: F401
+ Counter,
+ Histogram,
+ Meter,
+ MeterProvider,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+ )
+
+ def test_import_export(self):
+ """
+ Test that the metrics export module has the right symbols
+ """
+
+ with self.assertNotRaises(Exception):
+ from opentelemetry.sdk.metrics.export import ( # noqa: F401
+ AggregationTemporality,
+ ConsoleMetricExporter,
+ DataPointT,
+ DataT,
+ Gauge,
+ Histogram,
+ HistogramDataPoint,
+ InMemoryMetricReader,
+ Metric,
+ MetricExporter,
+ MetricExportResult,
+ MetricReader,
+ MetricsData,
+ NumberDataPoint,
+ PeriodicExportingMetricReader,
+ ResourceMetrics,
+ ScopeMetrics,
+ Sum,
+ )
+
+ def test_import_view(self):
+ """
+ Test that the metrics view module has the right symbols
+ """
+
+ with self.assertNotRaises(Exception):
+ from opentelemetry.sdk.metrics.view import ( # noqa: F401
+ Aggregation,
+ DefaultAggregation,
+ DropAggregation,
+ ExplicitBucketHistogramAggregation,
+ LastValueAggregation,
+ SumAggregation,
+ View,
+ )
diff --git a/opentelemetry-sdk/tests/metrics/test_in_memory_metric_reader.py b/opentelemetry-sdk/tests/metrics/test_in_memory_metric_reader.py
new file mode 100644
index 0000000000..68c81e8b7e
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_in_memory_metric_reader.py
@@ -0,0 +1,156 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from time import sleep
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.metrics import Observation
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ InMemoryMetricReader,
+ Metric,
+ NumberDataPoint,
+ Sum,
+)
+
+
+class TestInMemoryMetricReader(TestCase):
+ def test_no_metrics(self):
+ mock_collect_callback = Mock(return_value=[])
+ reader = InMemoryMetricReader()
+ reader._set_collect_callback(mock_collect_callback)
+ self.assertEqual(reader.get_metrics_data(), [])
+ mock_collect_callback.assert_called_once()
+
+ def test_converts_metrics_to_list(self):
+ metric = Metric(
+ name="foo",
+ description="",
+ unit="",
+ data=Sum(
+ data_points=[
+ NumberDataPoint(
+ attributes={"myattr": "baz"},
+ start_time_unix_nano=1647626444152947792,
+ time_unix_nano=1647626444153163239,
+ value=72.3309814450449,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.CUMULATIVE,
+ is_monotonic=True,
+ ),
+ )
+ mock_collect_callback = Mock(return_value=(metric,))
+ reader = InMemoryMetricReader()
+ reader._set_collect_callback(mock_collect_callback)
+
+ returned_metrics = reader.get_metrics_data()
+ mock_collect_callback.assert_called_once()
+ self.assertIsInstance(returned_metrics, tuple)
+ self.assertEqual(len(returned_metrics), 1)
+ self.assertIs(returned_metrics[0], metric)
+
+ def test_shutdown(self):
+ # shutdown should always be successful
+ self.assertIsNone(InMemoryMetricReader().shutdown())
+
+ def test_integration(self):
+ reader = InMemoryMetricReader()
+ meter = MeterProvider(metric_readers=[reader]).get_meter("test_meter")
+ counter1 = meter.create_counter("counter1")
+ meter.create_observable_gauge(
+ "observable_gauge1",
+ callbacks=[lambda options: [Observation(value=12)]],
+ )
+ counter1.add(1, {"foo": "1"})
+ counter1.add(1, {"foo": "2"})
+
+ metrics = reader.get_metrics_data()
+        # should be 3 number data points: one from the observable gauge and
+        # one for each label set from the counter
+ self.assertEqual(len(metrics.resource_metrics[0].scope_metrics), 1)
+ self.assertEqual(
+ len(metrics.resource_metrics[0].scope_metrics[0].metrics), 2
+ )
+ self.assertEqual(
+ len(
+ list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )
+ ),
+ 2,
+ )
+ self.assertEqual(
+ len(
+ list(
+ metrics.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points
+ )
+ ),
+ 1,
+ )
+
+ def test_cumulative_multiple_collect(self):
+
+ reader = InMemoryMetricReader(
+ preferred_temporality={Counter: AggregationTemporality.CUMULATIVE}
+ )
+ meter = MeterProvider(metric_readers=[reader]).get_meter("test_meter")
+ counter = meter.create_counter("counter1")
+ counter.add(1, attributes={"key": "value"})
+
+ reader.collect()
+
+ number_data_point_0 = list(
+ reader._metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )[0]
+
+ # Windows tests fail without this sleep because both time_unix_nano
+ # values are the same.
+ sleep(0.1)
+ reader.collect()
+
+ number_data_point_1 = list(
+ reader._metrics_data.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )[0]
+
+ self.assertEqual(
+ number_data_point_0.attributes, number_data_point_1.attributes
+ )
+ self.assertEqual(
+ number_data_point_0.start_time_unix_nano,
+ number_data_point_1.start_time_unix_nano,
+ )
+ self.assertEqual(number_data_point_0.value, number_data_point_1.value)
+ self.assertGreater(
+ number_data_point_1.time_unix_nano,
+ number_data_point_0.time_unix_nano,
+ )
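+
+        # With no new measurements between the two collections, a
+        # cumulative point is stable: attributes, start time and value stay
+        # identical and only the collection timestamp advances.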
diff --git a/opentelemetry-sdk/tests/metrics/test_instrument.py b/opentelemetry-sdk/tests/metrics/test_instrument.py
new file mode 100644
index 0000000000..5eb1a90885
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_instrument.py
@@ -0,0 +1,372 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import WARNING
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.metrics import Observation
+from opentelemetry.metrics._internal.instrument import CallbackOptions
+from opentelemetry.sdk.metrics import (
+ Counter,
+ Histogram,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.instrument import (
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableGauge,
+ _ObservableUpDownCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+
+
+class TestCounter(TestCase):
+ def testname(self):
+ self.assertEqual(_Counter("name", Mock(), Mock()).name, "name")
+ self.assertEqual(_Counter("Name", Mock(), Mock()).name, "name")
+
+ def test_add(self):
+ mc = Mock()
+ counter = _Counter("name", Mock(), mc)
+ counter.add(1.0)
+ mc.consume_measurement.assert_called_once()
+
+ def test_add_non_monotonic(self):
+ mc = Mock()
+ counter = _Counter("name", Mock(), mc)
+ with self.assertLogs(level=WARNING):
+ counter.add(-1.0)
+ mc.consume_measurement.assert_not_called()
+
+ def test_disallow_direct_counter_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ Counter("name", Mock(), Mock())
+
+
+class TestUpDownCounter(TestCase):
+ def test_add(self):
+ mc = Mock()
+ counter = _UpDownCounter("name", Mock(), mc)
+ counter.add(1.0)
+ mc.consume_measurement.assert_called_once()
+
+ def test_add_non_monotonic(self):
+ mc = Mock()
+ counter = _UpDownCounter("name", Mock(), mc)
+ counter.add(-1.0)
+ mc.consume_measurement.assert_called_once()
+
+ def test_disallow_direct_up_down_counter_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ UpDownCounter("name", Mock(), Mock())
+
+
+TEST_ATTRIBUTES = {"foo": "bar"}
+
+
+def callable_callback_0(options: CallbackOptions):
+ return [
+ Observation(1, attributes=TEST_ATTRIBUTES),
+ Observation(2, attributes=TEST_ATTRIBUTES),
+ Observation(3, attributes=TEST_ATTRIBUTES),
+ ]
+
+
+def callable_callback_1(options: CallbackOptions):
+ return [
+ Observation(4, attributes=TEST_ATTRIBUTES),
+ Observation(5, attributes=TEST_ATTRIBUTES),
+ Observation(6, attributes=TEST_ATTRIBUTES),
+ ]
+
+
+def generator_callback_0():
+ options = yield
+ assert isinstance(options, CallbackOptions)
+ options = yield [
+ Observation(1, attributes=TEST_ATTRIBUTES),
+ Observation(2, attributes=TEST_ATTRIBUTES),
+ Observation(3, attributes=TEST_ATTRIBUTES),
+ ]
+ assert isinstance(options, CallbackOptions)
+
+
+def generator_callback_1():
+ options = yield
+ assert isinstance(options, CallbackOptions)
+ options = yield [
+ Observation(4, attributes=TEST_ATTRIBUTES),
+ Observation(5, attributes=TEST_ATTRIBUTES),
+ Observation(6, attributes=TEST_ATTRIBUTES),
+ ]
+ assert isinstance(options, CallbackOptions)
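+
+
+# A minimal, hypothetical sketch (not used by the tests below) of the
+# send/yield protocol the generator callbacks above follow: the SDK primes
+# the generator once, then on each collection sends a CallbackOptions in
+# and receives a list of Observations back.
+def _drive_generator_callback_sketch():
+    generator = generator_callback_0()
+    next(generator)  # advance to the first bare `yield`
+    observations = generator.send(CallbackOptions())
+    assert len(observations) == 3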
+
+
+class TestObservableGauge(TestCase):
+ def testname(self):
+ self.assertEqual(_ObservableGauge("name", Mock(), Mock()).name, "name")
+ self.assertEqual(_ObservableGauge("Name", Mock(), Mock()).name, "name")
+
+ def test_callable_callback_0(self):
+ observable_gauge = _ObservableGauge(
+ "name", Mock(), Mock(), [callable_callback_0]
+ )
+
+ self.assertEqual(
+ list(observable_gauge.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 2, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 3, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ ],
+ )
+
+ def test_callable_multiple_callable_callback(self):
+ observable_gauge = _ObservableGauge(
+ "name", Mock(), Mock(), [callable_callback_0, callable_callback_1]
+ )
+
+ self.assertEqual(
+ list(observable_gauge.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 2, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 3, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 4, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 5, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 6, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ ],
+ )
+
+ def test_generator_callback_0(self):
+ observable_gauge = _ObservableGauge(
+ "name", Mock(), Mock(), [generator_callback_0()]
+ )
+
+ self.assertEqual(
+ list(observable_gauge.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 2, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 3, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ ],
+ )
+
+ def test_generator_multiple_generator_callback(self):
+ self.maxDiff = None
+ observable_gauge = _ObservableGauge(
+ "name",
+ Mock(),
+ Mock(),
+ callbacks=[generator_callback_0(), generator_callback_1()],
+ )
+
+ self.assertEqual(
+ list(observable_gauge.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 2, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 3, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 4, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 5, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ Measurement(
+ 6, instrument=observable_gauge, attributes=TEST_ATTRIBUTES
+ ),
+ ],
+ )
+
+ def test_disallow_direct_observable_gauge_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ ObservableGauge("name", Mock(), Mock())
+
+
+class TestObservableCounter(TestCase):
+ def test_callable_callback_0(self):
+ observable_counter = _ObservableCounter(
+ "name", Mock(), Mock(), [callable_callback_0]
+ )
+
+ self.assertEqual(
+ list(observable_counter.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 2,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 3,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ ],
+ )
+
+ def test_generator_callback_0(self):
+ observable_counter = _ObservableCounter(
+ "name", Mock(), Mock(), [generator_callback_0()]
+ )
+
+ self.assertEqual(
+ list(observable_counter.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 2,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 3,
+ instrument=observable_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ ],
+ )
+
+ def test_disallow_direct_observable_counter_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ ObservableCounter("name", Mock(), Mock())
+
+
+class TestObservableUpDownCounter(TestCase):
+ def test_callable_callback_0(self):
+ observable_up_down_counter = _ObservableUpDownCounter(
+ "name", Mock(), Mock(), [callable_callback_0]
+ )
+
+ self.assertEqual(
+ list(observable_up_down_counter.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 2,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 3,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ ],
+ )
+
+ def test_generator_callback_0(self):
+ observable_up_down_counter = _ObservableUpDownCounter(
+ "name", Mock(), Mock(), [generator_callback_0()]
+ )
+
+ self.assertEqual(
+ list(observable_up_down_counter.callback(CallbackOptions())),
+ [
+ Measurement(
+ 1,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 2,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ Measurement(
+ 3,
+ instrument=observable_up_down_counter,
+ attributes=TEST_ATTRIBUTES,
+ ),
+ ],
+ )
+
+ def test_disallow_direct_observable_up_down_counter_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ ObservableUpDownCounter("name", Mock(), Mock())
+
+
+class TestHistogram(TestCase):
+ def test_record(self):
+ mc = Mock()
+ hist = _Histogram("name", Mock(), mc)
+ hist.record(1.0)
+ mc.consume_measurement.assert_called_once()
+
+ def test_record_non_monotonic(self):
+ mc = Mock()
+ hist = _Histogram("name", Mock(), mc)
+ with self.assertLogs(level=WARNING):
+ hist.record(-1.0)
+ mc.consume_measurement.assert_not_called()
+
+ def test_disallow_direct_histogram_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ Histogram("name", Mock(), Mock())
diff --git a/opentelemetry-sdk/tests/metrics/test_measurement_consumer.py b/opentelemetry-sdk/tests/metrics/test_measurement_consumer.py
new file mode 100644
index 0000000000..b1f3dc2a38
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_measurement_consumer.py
@@ -0,0 +1,190 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from sys import version_info
+from time import sleep
+from unittest import TestCase
+from unittest.mock import MagicMock, Mock, patch
+
+from opentelemetry.sdk.metrics._internal.measurement_consumer import (
+ MeasurementConsumer,
+ SynchronousMeasurementConsumer,
+)
+from opentelemetry.sdk.metrics._internal.sdk_configuration import (
+ SdkConfiguration,
+)
+
+
+@patch(
+ "opentelemetry.sdk.metrics._internal."
+ "measurement_consumer.MetricReaderStorage"
+)
+class TestSynchronousMeasurementConsumer(TestCase):
+ def test_parent(self, _):
+
+ self.assertIsInstance(
+ SynchronousMeasurementConsumer(MagicMock()), MeasurementConsumer
+ )
+
+ def test_creates_metric_reader_storages(self, MockMetricReaderStorage):
+ """It should create one MetricReaderStorage per metric reader passed in the SdkConfiguration"""
+ reader_mocks = [Mock() for _ in range(5)]
+ SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=reader_mocks,
+ views=Mock(),
+ )
+ )
+ self.assertEqual(len(MockMetricReaderStorage.mock_calls), 5)
+
+ def test_measurements_passed_to_each_reader_storage(
+ self, MockMetricReaderStorage
+ ):
+ reader_mocks = [Mock() for _ in range(5)]
+ reader_storage_mocks = [Mock() for _ in range(5)]
+ MockMetricReaderStorage.side_effect = reader_storage_mocks
+
+ consumer = SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=reader_mocks,
+ views=Mock(),
+ )
+ )
+ measurement_mock = Mock()
+ consumer.consume_measurement(measurement_mock)
+
+ for rs_mock in reader_storage_mocks:
+ rs_mock.consume_measurement.assert_called_once_with(
+ measurement_mock
+ )
+
+    def test_collect_passed_to_reader_storage(self, MockMetricReaderStorage):
+ """Its collect() method should defer to the underlying MetricReaderStorage"""
+ reader_mocks = [Mock() for _ in range(5)]
+ reader_storage_mocks = [Mock() for _ in range(5)]
+ MockMetricReaderStorage.side_effect = reader_storage_mocks
+
+ consumer = SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=reader_mocks,
+ views=Mock(),
+ )
+ )
+ for r_mock, rs_mock in zip(reader_mocks, reader_storage_mocks):
+ rs_mock.collect.assert_not_called()
+ consumer.collect(r_mock)
+ rs_mock.collect.assert_called_once_with()
+
+ def test_collect_calls_async_instruments(self, MockMetricReaderStorage):
+ """Its collect() method should invoke async instruments and pass measurements to the
+ corresponding metric reader storage"""
+ reader_mock = Mock()
+ reader_storage_mock = Mock()
+ MockMetricReaderStorage.return_value = reader_storage_mock
+ consumer = SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=[reader_mock],
+ views=Mock(),
+ )
+ )
+ async_instrument_mocks = [MagicMock() for _ in range(5)]
+ for i_mock in async_instrument_mocks:
+ i_mock.callback.return_value = [Mock()]
+ consumer.register_asynchronous_instrument(i_mock)
+
+ consumer.collect(reader_mock)
+
+ # it should call async instruments
+ for i_mock in async_instrument_mocks:
+ i_mock.callback.assert_called_once()
+
+ # it should pass measurements to reader storage
+ self.assertEqual(
+ len(reader_storage_mock.consume_measurement.mock_calls), 5
+ )
+
+ def test_collect_timeout(self, MockMetricReaderStorage):
+ reader_mock = Mock()
+ reader_storage_mock = Mock()
+ MockMetricReaderStorage.return_value = reader_storage_mock
+ consumer = SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=[reader_mock],
+ views=Mock(),
+ )
+ )
+
+ def sleep_1(*args, **kwargs):
+ sleep(1)
+
+ consumer.register_asynchronous_instrument(
+ Mock(**{"callback.side_effect": sleep_1})
+ )
+
+ with self.assertRaises(Exception) as error:
+ consumer.collect(reader_mock, timeout_millis=10)
+
+ self.assertIn(
+ "Timed out while executing callback", error.exception.args[0]
+ )
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal."
+ "measurement_consumer.CallbackOptions"
+ )
+ def test_collect_deadline(
+ self, mock_callback_options, MockMetricReaderStorage
+ ):
+ reader_mock = Mock()
+ reader_storage_mock = Mock()
+ MockMetricReaderStorage.return_value = reader_storage_mock
+ consumer = SynchronousMeasurementConsumer(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=[reader_mock],
+ views=Mock(),
+ )
+ )
+
+ def sleep_1(*args, **kwargs):
+ sleep(1)
+ return []
+
+ consumer.register_asynchronous_instrument(
+ Mock(**{"callback.side_effect": sleep_1})
+ )
+ consumer.register_asynchronous_instrument(
+ Mock(**{"callback.side_effect": sleep_1})
+ )
+
+ consumer.collect(reader_mock)
+
+ if version_info < (3, 8):
+ callback_options_time_call = mock_callback_options.mock_calls[-1][
+ 2
+ ]["timeout_millis"]
+ else:
+ callback_options_time_call = mock_callback_options.mock_calls[
+ -1
+ ].kwargs["timeout_millis"]
+
+ self.assertLess(
+ callback_options_time_call,
+ 10000 * 10**6,
+ )
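+
+        # Both callbacks sleep for ~1 s, eating into the collection
+        # deadline, so the timeout_millis passed in the last
+        # CallbackOptions must be smaller than the initial budget; the
+        # assertion above puts an upper bound on it.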
diff --git a/opentelemetry-sdk/tests/metrics/test_metric_reader.py b/opentelemetry-sdk/tests/metrics/test_metric_reader.py
new file mode 100644
index 0000000000..86404328d6
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_metric_reader.py
@@ -0,0 +1,144 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import Dict, Iterable
+from unittest import TestCase
+from unittest.mock import patch
+
+from opentelemetry.sdk.metrics import Counter, Histogram, ObservableGauge
+from opentelemetry.sdk.metrics._internal.instrument import (
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableGauge,
+ _ObservableUpDownCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Metric,
+ MetricReader,
+)
+from opentelemetry.sdk.metrics.view import (
+ Aggregation,
+ DefaultAggregation,
+ LastValueAggregation,
+)
+
+_expected_keys = [
+ _Counter,
+ _UpDownCounter,
+ _Histogram,
+ _ObservableCounter,
+ _ObservableUpDownCounter,
+ _ObservableGauge,
+]
+
+
+class DummyMetricReader(MetricReader):
+ def __init__(
+ self,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[type, Aggregation] = None,
+ ) -> None:
+ super().__init__(
+ preferred_temporality=preferred_temporality,
+ preferred_aggregation=preferred_aggregation,
+ )
+
+ def _receive_metrics(
+ self,
+ metrics: Iterable[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ pass
+
+    def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+        pass
+
+
+class TestMetricReader(TestCase):
+ def test_configure_temporality(self):
+
+ dummy_metric_reader = DummyMetricReader(
+ preferred_temporality={
+ Histogram: AggregationTemporality.DELTA,
+ ObservableGauge: AggregationTemporality.DELTA,
+ }
+ )
+
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality.keys(),
+ set(_expected_keys),
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[_Counter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[_UpDownCounter],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[_Histogram],
+ AggregationTemporality.DELTA,
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[
+ _ObservableCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[
+ _ObservableUpDownCounter
+ ],
+ AggregationTemporality.CUMULATIVE,
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_temporality[
+ _ObservableGauge
+ ],
+ AggregationTemporality.DELTA,
+ )
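+
+        # Preferences are keyed by the public instrument classes but stored
+        # per private implementation class; any instrument class not listed
+        # in preferred_temporality falls back to CUMULATIVE.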
+
+ def test_configure_aggregation(self):
+ dummy_metric_reader = DummyMetricReader()
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_aggregation.keys(),
+ set(_expected_keys),
+ )
+ for (
+ value
+ ) in dummy_metric_reader._instrument_class_aggregation.values():
+ self.assertIsInstance(value, DefaultAggregation)
+
+ dummy_metric_reader = DummyMetricReader(
+ preferred_aggregation={Counter: LastValueAggregation()}
+ )
+ self.assertEqual(
+ dummy_metric_reader._instrument_class_aggregation.keys(),
+ set(_expected_keys),
+ )
+ self.assertIsInstance(
+ dummy_metric_reader._instrument_class_aggregation[_Counter],
+ LastValueAggregation,
+ )
+
+ def test_force_flush(self):
+
+ with patch.object(DummyMetricReader, "collect") as mock_collect:
+ DummyMetricReader().force_flush(timeout_millis=10)
+ mock_collect.assert_called_with(timeout_millis=10)
diff --git a/opentelemetry-sdk/tests/metrics/test_metric_reader_storage.py b/opentelemetry-sdk/tests/metrics/test_metric_reader_storage.py
new file mode 100644
index 0000000000..1da6d5bcf6
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_metric_reader_storage.py
@@ -0,0 +1,867 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from logging import WARNING
+from unittest.mock import MagicMock, Mock, patch
+
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ _LastValueAggregation,
+)
+from opentelemetry.sdk.metrics._internal.instrument import (
+ _Counter,
+ _Histogram,
+ _ObservableCounter,
+ _UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.metric_reader_storage import (
+ _DEFAULT_VIEW,
+ MetricReaderStorage,
+)
+from opentelemetry.sdk.metrics._internal.sdk_configuration import (
+ SdkConfiguration,
+)
+from opentelemetry.sdk.metrics.export import AggregationTemporality
+from opentelemetry.sdk.metrics.view import (
+ DefaultAggregation,
+ DropAggregation,
+ ExplicitBucketHistogramAggregation,
+ SumAggregation,
+ View,
+)
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase, MockFunc
+
+
+def mock_view_matching(name, *instruments) -> Mock:
+ mock = Mock(name=name)
+ mock._match.side_effect = lambda instrument: instrument in instruments
+ return mock
+
+
+def mock_instrument() -> Mock:
+ instr = Mock()
+ instr.attributes = {}
+ return instr
+
+
+class TestMetricReaderStorage(ConcurrencyTestBase):
+ @patch(
+ "opentelemetry.sdk.metrics._internal"
+ ".metric_reader_storage._ViewInstrumentMatch"
+ )
+ def test_creates_view_instrument_matches(
+ self, MockViewInstrumentMatch: Mock
+ ):
+ """It should create a MockViewInstrumentMatch when an instrument
+ matches a view"""
+ instrument1 = Mock(name="instrument1")
+ instrument2 = Mock(name="instrument2")
+
+ view1 = mock_view_matching("view_1", instrument1)
+ view2 = mock_view_matching("view_2", instrument1, instrument2)
+ storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(view1, view2),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ # instrument1 matches view1 and view2, so should create two
+ # ViewInstrumentMatch objects
+ storage.consume_measurement(Measurement(1, instrument1))
+ self.assertEqual(
+ len(MockViewInstrumentMatch.call_args_list),
+ 2,
+ MockViewInstrumentMatch.mock_calls,
+ )
+ # they should only be created the first time the instrument is seen
+ storage.consume_measurement(Measurement(1, instrument1))
+ self.assertEqual(len(MockViewInstrumentMatch.call_args_list), 2)
+
+ # instrument2 matches view2, so should create a single
+ # ViewInstrumentMatch
+ MockViewInstrumentMatch.call_args_list.clear()
+ with self.assertLogs(level=WARNING):
+ storage.consume_measurement(Measurement(1, instrument2))
+ self.assertEqual(len(MockViewInstrumentMatch.call_args_list), 1)
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal."
+ "metric_reader_storage._ViewInstrumentMatch"
+ )
+ def test_forwards_calls_to_view_instrument_match(
+ self, MockViewInstrumentMatch: Mock
+ ):
+ view_instrument_match1 = Mock(_aggregation=_LastValueAggregation({}))
+ view_instrument_match2 = Mock(_aggregation=_LastValueAggregation({}))
+ view_instrument_match3 = Mock(_aggregation=_LastValueAggregation({}))
+ MockViewInstrumentMatch.side_effect = [
+ view_instrument_match1,
+ view_instrument_match2,
+ view_instrument_match3,
+ ]
+
+ instrument1 = Mock(name="instrument1")
+ instrument2 = Mock(name="instrument2")
+ view1 = mock_view_matching("view1", instrument1)
+ view2 = mock_view_matching("view2", instrument1, instrument2)
+
+ storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(view1, view2),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+        # Measurements from an instrument should be passed on to each of the
+        # ViewInstrumentMatch objects created for that instrument
+ measurement = Measurement(1, instrument1)
+ storage.consume_measurement(measurement)
+ view_instrument_match1.consume_measurement.assert_called_once_with(
+ measurement
+ )
+ view_instrument_match2.consume_measurement.assert_called_once_with(
+ measurement
+ )
+ view_instrument_match3.consume_measurement.assert_not_called()
+
+ measurement = Measurement(1, instrument2)
+ with self.assertLogs(level=WARNING):
+ storage.consume_measurement(measurement)
+ view_instrument_match3.consume_measurement.assert_called_once_with(
+ measurement
+ )
+
+        # collect() should call collect on each of its _ViewInstrumentMatch
+        # objects and combine the results
+ all_metrics = [Mock() for _ in range(6)]
+ view_instrument_match1.collect.return_value = all_metrics[:2]
+ view_instrument_match2.collect.return_value = all_metrics[2:4]
+ view_instrument_match3.collect.return_value = all_metrics[4:]
+
+ result = storage.collect()
+ view_instrument_match1.collect.assert_called_once()
+ view_instrument_match2.collect.assert_called_once()
+ view_instrument_match3.collect.assert_called_once()
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[0]
+ ),
+ all_metrics[0],
+ )
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points[1]
+ ),
+ all_metrics[1],
+ )
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points[0]
+ ),
+ all_metrics[2],
+ )
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[1]
+ .data.data_points[1]
+ ),
+ all_metrics[3],
+ )
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[1]
+ .metrics[0]
+ .data.data_points[0]
+ ),
+ all_metrics[4],
+ )
+ self.assertEqual(
+ (
+ result.resource_metrics[0]
+ .scope_metrics[1]
+ .metrics[0]
+ .data.data_points[1]
+ ),
+ all_metrics[5],
+ )
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal."
+ "metric_reader_storage._ViewInstrumentMatch"
+ )
+ def test_race_concurrent_measurements(self, MockViewInstrumentMatch: Mock):
+ mock_view_instrument_match_ctor = MockFunc()
+ MockViewInstrumentMatch.side_effect = mock_view_instrument_match_ctor
+
+ instrument1 = Mock(name="instrument1")
+        view1 = mock_view_matching("view1", instrument1)
+ storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(view1,),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ def send_measurement():
+ storage.consume_measurement(Measurement(1, instrument1))
+
+ # race sending many measurements concurrently
+ self.run_with_many_threads(send_measurement)
+
+ # _ViewInstrumentMatch constructor should have only been called once
+ self.assertEqual(mock_view_instrument_match_ctor.call_count, 1)
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal."
+ "metric_reader_storage._ViewInstrumentMatch"
+ )
+ def test_default_view_enabled(self, MockViewInstrumentMatch: Mock):
+ """Instruments should be matched with default views when enabled"""
+ instrument1 = Mock(name="instrument1")
+ instrument2 = Mock(name="instrument2")
+
+ storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ storage.consume_measurement(Measurement(1, instrument1))
+ self.assertEqual(
+ len(MockViewInstrumentMatch.call_args_list),
+ 1,
+ MockViewInstrumentMatch.mock_calls,
+ )
+ storage.consume_measurement(Measurement(1, instrument1))
+ self.assertEqual(len(MockViewInstrumentMatch.call_args_list), 1)
+
+ MockViewInstrumentMatch.call_args_list.clear()
+ storage.consume_measurement(Measurement(1, instrument2))
+ self.assertEqual(len(MockViewInstrumentMatch.call_args_list), 1)
+
+ def test_drop_aggregation(self):
+
+ counter = _Counter("name", Mock(), Mock())
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(
+ instrument_name="name", aggregation=DropAggregation()
+ ),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+ metric_reader_storage.consume_measurement(Measurement(1, counter))
+
+ self.assertIsNone(metric_reader_storage.collect())
+
+ def test_same_collection_start(self):
+
+ counter = _Counter("name", Mock(), Mock())
+ up_down_counter = _UpDownCounter("name", Mock(), Mock())
+
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(View(instrument_name="name"),),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ metric_reader_storage.consume_measurement(Measurement(1, counter))
+ metric_reader_storage.consume_measurement(
+ Measurement(1, up_down_counter)
+ )
+
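+        # Both streams are collected in the same pass, so their data points
+        # must share the same collection timestamp.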
+ actual = metric_reader_storage.collect()
+
+ self.assertEqual(
+ list(
+ actual.resource_metrics[0]
+ .scope_metrics[0]
+ .metrics[0]
+ .data.data_points
+ )[0].time_unix_nano,
+ list(
+ actual.resource_metrics[0]
+ .scope_metrics[1]
+ .metrics[0]
+ .data.data_points
+ )[0].time_unix_nano,
+ )
+
+ def test_conflicting_view_configuration(self):
+
+ observable_counter = _ObservableCounter(
+ "observable_counter",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(
+ instrument_name="observable_counter",
+ aggregation=ExplicitBucketHistogramAggregation(),
+ ),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter)
+ )
+
+ self.assertIs(
+ metric_reader_storage._instrument_view_instrument_matches[
+ observable_counter
+ ][0]._view,
+ _DEFAULT_VIEW,
+ )
+
+ def test_view_instrument_match_conflict_0(self):
+ # There is a conflict between views and instruments.
+
+ observable_counter_0 = _ObservableCounter(
+ "observable_counter_0",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ observable_counter_1 = _ObservableCounter(
+ "observable_counter_1",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="observable_counter_0", name="foo"),
+ View(instrument_name="observable_counter_1", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
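+        # assertLogs raises AssertionError when nothing is logged, so
+        # wrapping it in assertRaises asserts that no warning is emitted.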
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_0)
+ )
+
+ with self.assertLogs(level=WARNING) as log:
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_1)
+ )
+
+ self.assertIn(
+ "will cause conflicting metrics",
+ log.records[0].message,
+ )
+
+ def test_view_instrument_match_conflict_1(self):
+ # There is a conflict between views and instruments.
+
+ observable_counter_foo = _ObservableCounter(
+ "foo",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ observable_counter_bar = _ObservableCounter(
+ "bar",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ observable_counter_baz = _ObservableCounter(
+ "baz",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="bar", name="foo"),
+ View(instrument_name="baz", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_foo)
+ )
+
+ with self.assertLogs(level=WARNING) as log:
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_bar)
+ )
+
+ self.assertIn(
+ "will cause conflicting metrics",
+ log.records[0].message,
+ )
+
+ with self.assertLogs(level=WARNING) as log:
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_baz)
+ )
+
+ self.assertIn(
+ "will cause conflicting metrics",
+ log.records[0].message,
+ )
+
+ for (
+ view_instrument_matches
+ ) in (
+ metric_reader_storage._instrument_view_instrument_matches.values()
+ ):
+ for view_instrument_match in view_instrument_matches:
+ self.assertEqual(view_instrument_match._name, "foo")
+
+ def test_view_instrument_match_conflict_2(self):
+        # There is no conflict because the metric stream names are different.
+ observable_counter_foo = _ObservableCounter(
+ "foo",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ observable_counter_bar = _ObservableCounter(
+ "bar",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="foo"),
+ View(instrument_name="bar"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_foo)
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_bar)
+ )
+
+ def test_view_instrument_match_conflict_3(self):
+ # There is no conflict because the aggregation temporality of the
+ # instruments is different.
+
+ counter_bar = _Counter(
+ "bar",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ observable_counter_baz = _ObservableCounter(
+ "baz",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="bar", name="foo"),
+ View(instrument_name="baz", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, counter_bar)
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_baz)
+ )
+
+ def test_view_instrument_match_conflict_4(self):
+ # There is no conflict because the monotonicity of the instruments is
+ # different.
+
+ counter_bar = _Counter(
+ "bar",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ up_down_counter_baz = _UpDownCounter(
+ "baz",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="bar", name="foo"),
+ View(instrument_name="baz", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, counter_bar)
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, up_down_counter_baz)
+ )
+
+ def test_view_instrument_match_conflict_5(self):
+ # There is no conflict because the instrument units are different.
+
+ observable_counter_0 = _ObservableCounter(
+ "observable_counter_0",
+ Mock(),
+ [Mock()],
+ unit="unit_0",
+ description="description",
+ )
+ observable_counter_1 = _ObservableCounter(
+ "observable_counter_1",
+ Mock(),
+ [Mock()],
+ unit="unit_1",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="observable_counter_0", name="foo"),
+ View(instrument_name="observable_counter_1", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_0)
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_1)
+ )
+
+ def test_view_instrument_match_conflict_6(self):
+        # There is no conflict because the data point types of the
+        # instruments are different.
+
+ observable_counter = _ObservableCounter(
+ "observable_counter",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ histogram = _Histogram(
+ "histogram",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="observable_counter", name="foo"),
+ View(instrument_name="histogram", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter)
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, histogram)
+ )
+
+ def test_view_instrument_match_conflict_7(self):
+        # There is a conflict between views and instruments: a different
+        # description alone does not prevent a conflict.
+
+ observable_counter_0 = _ObservableCounter(
+ "observable_counter_0",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description_0",
+ )
+ observable_counter_1 = _ObservableCounter(
+ "observable_counter_1",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description_1",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="observable_counter_0", name="foo"),
+ View(instrument_name="observable_counter_1", name="foo"),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_0)
+ )
+
+ with self.assertLogs(level=WARNING) as log:
+ metric_reader_storage.consume_measurement(
+ Measurement(1, observable_counter_1)
+ )
+
+ self.assertIn(
+ "will cause conflicting metrics",
+ log.records[0].message,
+ )
+
+ def test_view_instrument_match_conflict_8(self):
+        # There is a conflict because the histogram-matching view changes the
+        # histogram's default aggregation to a sum aggregation, which is also
+        # the default aggregation of the up down counter, and the temporality
+        # and monotonicity of the two resulting streams are the same.
+
+ up_down_counter = _UpDownCounter(
+ "up_down_counter",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ histogram = _Histogram(
+ "histogram",
+ Mock(),
+ [Mock()],
+ unit="unit",
+ description="description",
+ )
+ metric_reader_storage = MetricReaderStorage(
+ SdkConfiguration(
+ resource=Mock(),
+ metric_readers=(),
+ views=(
+ View(instrument_name="up_down_counter", name="foo"),
+ View(
+ instrument_name="histogram",
+ name="foo",
+ aggregation=SumAggregation(),
+ ),
+ ),
+ ),
+ MagicMock(
+ **{
+ "__getitem__.return_value": AggregationTemporality.CUMULATIVE
+ }
+ ),
+ MagicMock(**{"__getitem__.return_value": DefaultAggregation()}),
+ )
+
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ metric_reader_storage.consume_measurement(
+ Measurement(1, up_down_counter)
+ )
+
+ with self.assertLogs(level=WARNING) as log:
+ metric_reader_storage.consume_measurement(
+ Measurement(1, histogram)
+ )
+
+ self.assertIn(
+ "will cause conflicting metrics",
+ log.records[0].message,
+ )
diff --git a/opentelemetry-sdk/tests/metrics/test_metrics.py b/opentelemetry-sdk/tests/metrics/test_metrics.py
new file mode 100644
index 0000000000..0ccadf47ce
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_metrics.py
@@ -0,0 +1,540 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from logging import WARNING
+from time import sleep
+from typing import Iterable, Sequence
+from unittest.mock import MagicMock, Mock, patch
+
+from opentelemetry.metrics import NoOpMeter
+from opentelemetry.sdk.metrics import (
+ Counter,
+ Histogram,
+ Meter,
+ MeterProvider,
+ ObservableCounter,
+ ObservableGauge,
+ ObservableUpDownCounter,
+ UpDownCounter,
+)
+from opentelemetry.sdk.metrics._internal import SynchronousMeasurementConsumer
+from opentelemetry.sdk.metrics.export import (
+ Metric,
+ MetricExporter,
+ MetricExportResult,
+ MetricReader,
+ PeriodicExportingMetricReader,
+)
+from opentelemetry.sdk.metrics.view import SumAggregation, View
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.test import TestCase
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase, MockFunc
+
+
+class DummyMetricReader(MetricReader):
+ def __init__(self):
+ super().__init__()
+
+ def _receive_metrics(
+ self,
+ metrics: Iterable[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ pass
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+        pass
+
+
+class TestMeterProvider(ConcurrencyTestBase, TestCase):
+ def tearDown(self):
+
+ MeterProvider._all_metric_readers = set()
+
+ @patch.object(Resource, "create")
+ def test_init_default(self, resource_patch):
+ meter_provider = MeterProvider()
+ resource_mock = resource_patch.return_value
+ resource_patch.assert_called_once()
+ self.assertIsNotNone(meter_provider._sdk_config)
+ self.assertEqual(meter_provider._sdk_config.resource, resource_mock)
+        self.assertIsInstance(
+            meter_provider._measurement_consumer,
+            SynchronousMeasurementConsumer,
+        )
+ self.assertIsNotNone(meter_provider._atexit_handler)
+
+ def test_register_metric_readers(self):
+ mock_exporter = Mock()
+ mock_exporter._preferred_temporality = None
+ mock_exporter._preferred_aggregation = None
+ metric_reader_0 = PeriodicExportingMetricReader(mock_exporter)
+ metric_reader_1 = PeriodicExportingMetricReader(mock_exporter)
+
+ with self.assertNotRaises(Exception):
+ MeterProvider(metric_readers=(metric_reader_0,))
+ MeterProvider(metric_readers=(metric_reader_1,))
+
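+        # A metric reader instance may only be registered with a single
+        # MeterProvider.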
+ with self.assertRaises(Exception):
+ MeterProvider(metric_readers=(metric_reader_0,))
+ MeterProvider(metric_readers=(metric_reader_0,))
+
+ def test_resource(self):
+ """
+        `MeterProvider` allows a `Resource` to be specified.
+ """
+
+ meter_provider_0 = MeterProvider()
+ meter_provider_1 = MeterProvider()
+
+ self.assertEqual(
+ meter_provider_0._sdk_config.resource,
+ meter_provider_1._sdk_config.resource,
+ )
+ self.assertIsInstance(meter_provider_0._sdk_config.resource, Resource)
+ self.assertIsInstance(meter_provider_1._sdk_config.resource, Resource)
+
+ resource = Resource({"key": "value"})
+ self.assertIs(
+ MeterProvider(resource=resource)._sdk_config.resource, resource
+ )
+
+ def test_get_meter(self):
+ """
+ `MeterProvider.get_meter` arguments are used to create an
+ `InstrumentationScope` object on the created `Meter`.
+ """
+
+ meter = MeterProvider().get_meter(
+ "name",
+ version="version",
+ schema_url="schema_url",
+ )
+
+ self.assertEqual(meter._instrumentation_scope.name, "name")
+ self.assertEqual(meter._instrumentation_scope.version, "version")
+ self.assertEqual(meter._instrumentation_scope.schema_url, "schema_url")
+
+ def test_get_meter_empty(self):
+ """
+        `MeterProvider.get_meter` called with `None` or an empty string as
+        name should return a `NoOpMeter`.
+ """
+
+ with self.assertLogs(level=WARNING):
+ meter = MeterProvider().get_meter(
+ None,
+ version="version",
+ schema_url="schema_url",
+ )
+ self.assertIsInstance(meter, NoOpMeter)
+ self.assertEqual(meter._name, None)
+
+ with self.assertLogs(level=WARNING):
+ meter = MeterProvider().get_meter(
+ "",
+ version="version",
+ schema_url="schema_url",
+ )
+ self.assertIsInstance(meter, NoOpMeter)
+ self.assertEqual(meter._name, "")
+
+ def test_get_meter_duplicate(self):
+ """
+ Subsequent calls to `MeterProvider.get_meter` with the same arguments
+ should return the same `Meter` instance.
+ """
+ mp = MeterProvider()
+ meter1 = mp.get_meter(
+ "name",
+ version="version",
+ schema_url="schema_url",
+ )
+ meter2 = mp.get_meter(
+ "name",
+ version="version",
+ schema_url="schema_url",
+ )
+ meter3 = mp.get_meter(
+ "name2",
+ version="version",
+ schema_url="schema_url",
+ )
+ self.assertIs(meter1, meter2)
+ self.assertIsNot(meter1, meter3)
+
+ def test_shutdown(self):
+
+ mock_metric_reader_0 = MagicMock(
+ **{
+ "shutdown.side_effect": ZeroDivisionError(),
+ }
+ )
+ mock_metric_reader_1 = MagicMock(
+ **{
+ "shutdown.side_effect": AssertionError(),
+ }
+ )
+
+ meter_provider = MeterProvider(
+ metric_readers=[mock_metric_reader_0, mock_metric_reader_1]
+ )
+
+ with self.assertRaises(Exception) as error:
+ meter_provider.shutdown()
+
+ error = error.exception
+
+ self.assertEqual(
+ str(error),
+ (
+ "MeterProvider.shutdown failed because the following "
+ "metric readers failed during shutdown:\n"
+ "MagicMock: ZeroDivisionError()\n"
+ "MagicMock: AssertionError()"
+ ),
+ )
+
+ mock_metric_reader_0.shutdown.assert_called_once()
+ mock_metric_reader_1.shutdown.assert_called_once()
+
+ mock_metric_reader_0 = Mock()
+ mock_metric_reader_1 = Mock()
+
+ meter_provider = MeterProvider(
+ metric_readers=[mock_metric_reader_0, mock_metric_reader_1]
+ )
+
+ self.assertIsNone(meter_provider.shutdown())
+ mock_metric_reader_0.shutdown.assert_called_once()
+ mock_metric_reader_1.shutdown.assert_called_once()
+
+ def test_shutdown_subsequent_calls(self):
+ """
+        Calls to `MeterProvider.shutdown` after the first one should log a
+        warning and be no-ops.
+ """
+
+ meter_provider = MeterProvider()
+
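+        # The first shutdown should log nothing (assertLogs raising
+        # AssertionError proves that); only the repeated call warns.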
+ with self.assertRaises(AssertionError):
+ with self.assertLogs(level=WARNING):
+ meter_provider.shutdown()
+
+ with self.assertLogs(level=WARNING):
+ meter_provider.shutdown()
+
+ @patch("opentelemetry.sdk.metrics._internal._logger")
+ def test_shutdown_race(self, mock_logger):
+ mock_logger.warning = MockFunc()
+ meter_provider = MeterProvider()
+ num_threads = 70
+ self.run_with_many_threads(
+ meter_provider.shutdown, num_threads=num_threads
+ )
+ self.assertEqual(mock_logger.warning.call_count, num_threads - 1)
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal." "SynchronousMeasurementConsumer"
+ )
+ def test_measurement_collect_callback(
+ self, mock_sync_measurement_consumer
+ ):
+ metric_readers = [
+ DummyMetricReader(),
+ DummyMetricReader(),
+ DummyMetricReader(),
+ DummyMetricReader(),
+ DummyMetricReader(),
+ ]
+ sync_consumer_instance = mock_sync_measurement_consumer()
+ sync_consumer_instance.collect = MockFunc()
+ MeterProvider(metric_readers=metric_readers)
+
+ for reader in metric_readers:
+ reader.collect()
+ self.assertEqual(
+ sync_consumer_instance.collect.call_count, len(metric_readers)
+ )
+
+ @patch(
+ "opentelemetry.sdk.metrics." "_internal.SynchronousMeasurementConsumer"
+ )
+ def test_creates_sync_measurement_consumer(
+ self, mock_sync_measurement_consumer
+ ):
+ MeterProvider()
+ mock_sync_measurement_consumer.assert_called()
+
+ @patch(
+ "opentelemetry.sdk.metrics." "_internal.SynchronousMeasurementConsumer"
+ )
+ def test_register_asynchronous_instrument(
+ self, mock_sync_measurement_consumer
+ ):
+
+ meter_provider = MeterProvider()
+
+ meter_provider._measurement_consumer.register_asynchronous_instrument.assert_called_with(
+ meter_provider.get_meter("name").create_observable_counter(
+ "name0", callbacks=[Mock()]
+ )
+ )
+ meter_provider._measurement_consumer.register_asynchronous_instrument.assert_called_with(
+ meter_provider.get_meter("name").create_observable_up_down_counter(
+ "name1", callbacks=[Mock()]
+ )
+ )
+ meter_provider._measurement_consumer.register_asynchronous_instrument.assert_called_with(
+ meter_provider.get_meter("name").create_observable_gauge(
+ "name2", callbacks=[Mock()]
+ )
+ )
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal." "SynchronousMeasurementConsumer"
+ )
+ def test_consume_measurement_counter(self, mock_sync_measurement_consumer):
+ sync_consumer_instance = mock_sync_measurement_consumer()
+ meter_provider = MeterProvider()
+ counter = meter_provider.get_meter("name").create_counter("name")
+
+ counter.add(1)
+
+ sync_consumer_instance.consume_measurement.assert_called()
+
+ @patch(
+ "opentelemetry.sdk.metrics." "_internal.SynchronousMeasurementConsumer"
+ )
+ def test_consume_measurement_up_down_counter(
+ self, mock_sync_measurement_consumer
+ ):
+ sync_consumer_instance = mock_sync_measurement_consumer()
+ meter_provider = MeterProvider()
+ counter = meter_provider.get_meter("name").create_up_down_counter(
+ "name"
+ )
+
+ counter.add(1)
+
+ sync_consumer_instance.consume_measurement.assert_called()
+
+ @patch(
+ "opentelemetry.sdk.metrics._internal." "SynchronousMeasurementConsumer"
+ )
+ def test_consume_measurement_histogram(
+ self, mock_sync_measurement_consumer
+ ):
+ sync_consumer_instance = mock_sync_measurement_consumer()
+ meter_provider = MeterProvider()
+ counter = meter_provider.get_meter("name").create_histogram("name")
+
+ counter.record(1)
+
+ sync_consumer_instance.consume_measurement.assert_called()
+
+
+class TestMeter(TestCase):
+ def setUp(self):
+ self.meter = Meter(Mock(), Mock())
+
+ def test_repeated_instrument_names(self):
+ with self.assertNotRaises(Exception):
+ self.meter.create_counter("counter")
+ self.meter.create_up_down_counter("up_down_counter")
+ self.meter.create_observable_counter(
+ "observable_counter", callbacks=[Mock()]
+ )
+ self.meter.create_histogram("histogram")
+ self.meter.create_observable_gauge(
+ "observable_gauge", callbacks=[Mock()]
+ )
+ self.meter.create_observable_up_down_counter(
+ "observable_up_down_counter", callbacks=[Mock()]
+ )
+
+ for instrument_name in [
+ "counter",
+ "up_down_counter",
+ "histogram",
+ ]:
+ with self.assertLogs(level=WARNING):
+ getattr(self.meter, f"create_{instrument_name}")(
+ instrument_name
+ )
+
+ for instrument_name in [
+ "observable_counter",
+ "observable_gauge",
+ "observable_up_down_counter",
+ ]:
+ with self.assertLogs(level=WARNING):
+ getattr(self.meter, f"create_{instrument_name}")(
+ instrument_name, callbacks=[Mock()]
+ )
+
+ def test_create_counter(self):
+ counter = self.meter.create_counter(
+ "name", unit="unit", description="description"
+ )
+
+ self.assertIsInstance(counter, Counter)
+ self.assertEqual(counter.name, "name")
+
+ def test_create_up_down_counter(self):
+ up_down_counter = self.meter.create_up_down_counter(
+ "name", unit="unit", description="description"
+ )
+
+ self.assertIsInstance(up_down_counter, UpDownCounter)
+ self.assertEqual(up_down_counter.name, "name")
+
+ def test_create_observable_counter(self):
+ observable_counter = self.meter.create_observable_counter(
+ "name", callbacks=[Mock()], unit="unit", description="description"
+ )
+
+ self.assertIsInstance(observable_counter, ObservableCounter)
+ self.assertEqual(observable_counter.name, "name")
+
+ def test_create_histogram(self):
+ histogram = self.meter.create_histogram(
+ "name", unit="unit", description="description"
+ )
+
+ self.assertIsInstance(histogram, Histogram)
+ self.assertEqual(histogram.name, "name")
+
+ def test_create_observable_gauge(self):
+ observable_gauge = self.meter.create_observable_gauge(
+ "name", callbacks=[Mock()], unit="unit", description="description"
+ )
+
+ self.assertIsInstance(observable_gauge, ObservableGauge)
+ self.assertEqual(observable_gauge.name, "name")
+
+ def test_create_observable_up_down_counter(self):
+ observable_up_down_counter = (
+ self.meter.create_observable_up_down_counter(
+ "name",
+ callbacks=[Mock()],
+ unit="unit",
+ description="description",
+ )
+ )
+ self.assertIsInstance(
+ observable_up_down_counter, ObservableUpDownCounter
+ )
+ self.assertEqual(observable_up_down_counter.name, "name")
+
+
+class InMemoryMetricExporter(MetricExporter):
+ def __init__(self):
+ super().__init__()
+ self.metrics = {}
+ self._counter = 0
+
+ def export(
+ self,
+ metrics: Sequence[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ self.metrics[self._counter] = metrics
+ self._counter += 1
+ return MetricExportResult.SUCCESS
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ pass
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ return True
+
+
+class TestDuplicateInstrumentAggregateData(TestCase):
+ def test_duplicate_instrument_aggregate_data(self):
+
+ exporter = InMemoryMetricExporter()
+ reader = PeriodicExportingMetricReader(
+ exporter, export_interval_millis=500
+ )
+ view = View(
+ instrument_type=Counter,
+ attribute_keys=[],
+ aggregation=SumAggregation(),
+ )
+ provider = MeterProvider(
+ metric_readers=[reader],
+ resource=Resource.create(),
+ views=[view],
+ )
+
+ meter_0 = provider.get_meter(
+ name="meter_0",
+ version="version",
+ schema_url="schema_url",
+ )
+ meter_1 = provider.get_meter(
+ name="meter_1",
+ version="version",
+ schema_url="schema_url",
+ )
+ counter_0_0 = meter_0.create_counter(
+ "counter", unit="unit", description="description"
+ )
+ with self.assertLogs(level=WARNING):
+ counter_0_1 = meter_0.create_counter(
+ "counter", unit="unit", description="description"
+ )
+ counter_1_0 = meter_1.create_counter(
+ "counter", unit="unit", description="description"
+ )
+
+ self.assertIs(counter_0_0, counter_0_1)
+ self.assertIsNot(counter_0_0, counter_1_0)
+
+ counter_0_0.add(1, {})
+ counter_0_1.add(2, {})
+
+ with self.assertLogs(level=WARNING):
+ counter_1_0.add(7, {})
+
+ sleep(1)
+
+ reader.shutdown()
+
+ sleep(1)
+
+ metrics = exporter.metrics[0]
+
+ scope_metrics = metrics.resource_metrics[0].scope_metrics
+ self.assertEqual(len(scope_metrics), 2)
+
+ metric_0 = scope_metrics[0].metrics[0]
+
+ self.assertEqual(metric_0.name, "counter")
+ self.assertEqual(metric_0.unit, "unit")
+ self.assertEqual(metric_0.description, "description")
+ self.assertEqual(next(iter(metric_0.data.data_points)).value, 3)
+
+ metric_1 = scope_metrics[1].metrics[0]
+
+ self.assertEqual(metric_1.name, "counter")
+ self.assertEqual(metric_1.unit, "unit")
+ self.assertEqual(metric_1.description, "description")
+ self.assertEqual(next(iter(metric_1.data.data_points)).value, 7)
diff --git a/opentelemetry-sdk/tests/metrics/test_periodic_exporting_metric_reader.py b/opentelemetry-sdk/tests/metrics/test_periodic_exporting_metric_reader.py
new file mode 100644
index 0000000000..98f59526ef
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_periodic_exporting_metric_reader.py
@@ -0,0 +1,257 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+from logging import WARNING
+from time import sleep, time_ns
+from typing import Optional, Sequence
+from unittest.mock import Mock
+
+from flaky import flaky
+
+from opentelemetry.sdk.metrics import Counter, MetricsTimeoutError
+from opentelemetry.sdk.metrics._internal import _Counter
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Gauge,
+ Metric,
+ MetricExporter,
+ MetricExportResult,
+ NumberDataPoint,
+ PeriodicExportingMetricReader,
+ Sum,
+)
+from opentelemetry.sdk.metrics.view import (
+ DefaultAggregation,
+ LastValueAggregation,
+)
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase
+
+
+class FakeMetricsExporter(MetricExporter):
+ def __init__(
+ self, wait=0, preferred_temporality=None, preferred_aggregation=None
+ ):
+ self.wait = wait
+ self.metrics = []
+ self._shutdown = False
+ super().__init__(
+ preferred_temporality=preferred_temporality,
+ preferred_aggregation=preferred_aggregation,
+ )
+
+ def export(
+ self,
+ metrics: Sequence[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> MetricExportResult:
+ sleep(self.wait)
+ self.metrics.extend(metrics)
+        return MetricExportResult.SUCCESS
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ self._shutdown = True
+
+ def force_flush(self, timeout_millis: float = 10_000) -> bool:
+ return True
+
+
+class ExceptionAtCollectionPeriodicExportingMetricReader(
+ PeriodicExportingMetricReader
+):
+ def __init__(
+ self,
+ exporter: MetricExporter,
+ exception: Exception,
+ export_interval_millis: Optional[float] = None,
+ export_timeout_millis: Optional[float] = None,
+ ) -> None:
+ super().__init__(
+ exporter, export_interval_millis, export_timeout_millis
+ )
+ self._collect_exception = exception
+
+ def collect(self, timeout_millis: float = 10_000) -> None:
+ raise self._collect_exception
+
+
+metrics_list = [
+ Metric(
+ name="sum_name",
+ description="",
+ unit="",
+ data=Sum(
+ data_points=[
+ NumberDataPoint(
+ attributes={},
+ start_time_unix_nano=time_ns(),
+ time_unix_nano=time_ns(),
+ value=2,
+ )
+ ],
+ aggregation_temporality=1,
+ is_monotonic=True,
+ ),
+ ),
+ Metric(
+ name="gauge_name",
+ description="",
+ unit="",
+ data=Gauge(
+ data_points=[
+ NumberDataPoint(
+ attributes={},
+ start_time_unix_nano=time_ns(),
+ time_unix_nano=time_ns(),
+ value=2,
+ )
+ ]
+ ),
+ ),
+]
+
+
+class TestPeriodicExportingMetricReader(ConcurrencyTestBase):
+ def test_defaults(self):
+ pmr = PeriodicExportingMetricReader(FakeMetricsExporter())
+ self.assertEqual(pmr._export_interval_millis, 60000)
+ self.assertEqual(pmr._export_timeout_millis, 30000)
+ with self.assertLogs(level=WARNING):
+ pmr.shutdown()
+
+ def _create_periodic_reader(
+ self, metrics, exporter, collect_wait=0, interval=60000, timeout=30000
+ ):
+
+ pmr = PeriodicExportingMetricReader(
+ exporter,
+ export_interval_millis=interval,
+ export_timeout_millis=timeout,
+ )
+
+ def _collect(reader, timeout_millis):
+ sleep(collect_wait)
+ pmr._receive_metrics(metrics, timeout_millis)
+
+ pmr._set_collect_callback(_collect)
+ return pmr
+
+ def test_ticker_called(self):
+ collect_mock = Mock()
+ exporter = FakeMetricsExporter()
+ exporter.export = Mock()
+ pmr = PeriodicExportingMetricReader(exporter, export_interval_millis=1)
+ pmr._set_collect_callback(collect_mock)
+ sleep(0.1)
+        collect_mock.assert_called_once()
+ pmr.shutdown()
+
+ def test_ticker_not_called_on_infinity(self):
+ collect_mock = Mock()
+ exporter = FakeMetricsExporter()
+ exporter.export = Mock()
+ pmr = PeriodicExportingMetricReader(
+ exporter, export_interval_millis=math.inf
+ )
+ pmr._set_collect_callback(collect_mock)
+ sleep(0.1)
+        collect_mock.assert_not_called()
+ pmr.shutdown()
+
+ def test_ticker_value_exception_on_zero(self):
+ exporter = FakeMetricsExporter()
+ exporter.export = Mock()
+ self.assertRaises(
+ ValueError,
+ PeriodicExportingMetricReader,
+ exporter,
+ export_interval_millis=0,
+ )
+
+ def test_ticker_value_exception_on_negative(self):
+ exporter = FakeMetricsExporter()
+ exporter.export = Mock()
+ self.assertRaises(
+ ValueError,
+ PeriodicExportingMetricReader,
+ exporter,
+ export_interval_millis=-100,
+ )
+
+ @flaky(max_runs=3, min_passes=1)
+ def test_ticker_collects_metrics(self):
+ exporter = FakeMetricsExporter()
+
+ pmr = self._create_periodic_reader(
+ metrics_list, exporter, interval=100
+ )
+ sleep(0.15)
+ self.assertEqual(exporter.metrics, metrics_list)
+ pmr.shutdown()
+
+ def test_shutdown(self):
+ exporter = FakeMetricsExporter()
+
+ pmr = self._create_periodic_reader([], exporter)
+ pmr.shutdown()
+ self.assertEqual(exporter.metrics, [])
+ self.assertTrue(pmr._shutdown)
+ self.assertTrue(exporter._shutdown)
+
+ def test_shutdown_multiple_times(self):
+ pmr = self._create_periodic_reader([], FakeMetricsExporter())
+ with self.assertLogs(level="WARNING") as w:
+ self.run_with_many_threads(pmr.shutdown)
+ self.assertTrue("Can't shutdown multiple times", w.output[0])
+ with self.assertLogs(level="WARNING") as w:
+ pmr.shutdown()
+
+ def test_exporter_temporality_preference(self):
+ exporter = FakeMetricsExporter(
+ preferred_temporality={
+ Counter: AggregationTemporality.DELTA,
+ },
+ )
+ pmr = PeriodicExportingMetricReader(exporter)
+ for key, value in pmr._instrument_class_temporality.items():
+ if key is not _Counter:
+ self.assertEqual(value, AggregationTemporality.CUMULATIVE)
+ else:
+ self.assertEqual(value, AggregationTemporality.DELTA)
+
+ def test_exporter_aggregation_preference(self):
+ exporter = FakeMetricsExporter(
+ preferred_aggregation={
+ Counter: LastValueAggregation(),
+ },
+ )
+ pmr = PeriodicExportingMetricReader(exporter)
+ for key, value in pmr._instrument_class_aggregation.items():
+ if key is not _Counter:
+                self.assertIsInstance(value, DefaultAggregation)
+            else:
+                self.assertIsInstance(value, LastValueAggregation)
+
+ def test_metric_timeout_does_not_kill_worker_thread(self):
+ exporter = FakeMetricsExporter()
+ pmr = ExceptionAtCollectionPeriodicExportingMetricReader(
+ exporter,
+ MetricsTimeoutError("test timeout"),
+ export_timeout_millis=1,
+ )
+
+ sleep(0.1)
+ self.assertTrue(pmr._daemon_thread.is_alive())
+ pmr.shutdown()
diff --git a/opentelemetry-sdk/tests/metrics/test_point.py b/opentelemetry-sdk/tests/metrics/test_point.py
new file mode 100644
index 0000000000..5d6640fdea
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_point.py
@@ -0,0 +1,260 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Gauge,
+ Histogram,
+ HistogramDataPoint,
+ Metric,
+ MetricsData,
+ NumberDataPoint,
+ ResourceMetrics,
+ ScopeMetrics,
+ Sum,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.util.instrumentation import InstrumentationScope
+
+
+class TestToJson(TestCase):
+ @classmethod
+ def setUpClass(cls):
+
+ cls.attributes_0 = {
+ "a": "b",
+ "b": True,
+ "c": 1,
+ "d": 1.1,
+ "e": ["a", "b"],
+ "f": [True, False],
+ "g": [1, 2],
+ "h": [1.1, 2.2],
+ }
+ cls.attributes_0_str = '{"a": "b", "b": true, "c": 1, "d": 1.1, "e": ["a", "b"], "f": [true, false], "g": [1, 2], "h": [1.1, 2.2]}'
+
+ cls.attributes_1 = {
+ "i": "a",
+ "j": False,
+ "k": 2,
+ "l": 2.2,
+ "m": ["b", "a"],
+ "n": [False, True],
+ "o": [2, 1],
+ "p": [2.2, 1.1],
+ }
+ cls.attributes_1_str = '{"i": "a", "j": false, "k": 2, "l": 2.2, "m": ["b", "a"], "n": [false, true], "o": [2, 1], "p": [2.2, 1.1]}'
+
+ cls.number_data_point_0 = NumberDataPoint(
+ attributes=cls.attributes_0,
+ start_time_unix_nano=1,
+ time_unix_nano=2,
+ value=3.3,
+ )
+ cls.number_data_point_0_str = f'{{"attributes": {cls.attributes_0_str}, "start_time_unix_nano": 1, "time_unix_nano": 2, "value": 3.3}}'
+
+ cls.number_data_point_1 = NumberDataPoint(
+ attributes=cls.attributes_1,
+ start_time_unix_nano=2,
+ time_unix_nano=3,
+ value=4.4,
+ )
+ cls.number_data_point_1_str = f'{{"attributes": {cls.attributes_1_str}, "start_time_unix_nano": 2, "time_unix_nano": 3, "value": 4.4}}'
+
+ cls.histogram_data_point_0 = HistogramDataPoint(
+ attributes=cls.attributes_0,
+ start_time_unix_nano=1,
+ time_unix_nano=2,
+ count=3,
+ sum=3.3,
+ bucket_counts=[1, 1, 1],
+ explicit_bounds=[0.1, 1.2, 2.3, 3.4],
+ min=0.2,
+ max=3.3,
+ )
+ cls.histogram_data_point_0_str = f'{{"attributes": {cls.attributes_0_str}, "start_time_unix_nano": 1, "time_unix_nano": 2, "count": 3, "sum": 3.3, "bucket_counts": [1, 1, 1], "explicit_bounds": [0.1, 1.2, 2.3, 3.4], "min": 0.2, "max": 3.3}}'
+
+ cls.histogram_data_point_1 = HistogramDataPoint(
+ attributes=cls.attributes_1,
+ start_time_unix_nano=2,
+ time_unix_nano=3,
+ count=4,
+ sum=4.4,
+ bucket_counts=[2, 1, 1],
+ explicit_bounds=[1.2, 2.3, 3.4, 4.5],
+ min=0.3,
+ max=4.4,
+ )
+ cls.histogram_data_point_1_str = f'{{"attributes": {cls.attributes_1_str}, "start_time_unix_nano": 2, "time_unix_nano": 3, "count": 4, "sum": 4.4, "bucket_counts": [2, 1, 1], "explicit_bounds": [1.2, 2.3, 3.4, 4.5], "min": 0.3, "max": 4.4}}'
+
+ cls.sum_0 = Sum(
+ data_points=[cls.number_data_point_0, cls.number_data_point_1],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ is_monotonic=False,
+ )
+ cls.sum_0_str = f'{{"data_points": [{cls.number_data_point_0_str}, {cls.number_data_point_1_str}], "aggregation_temporality": 1, "is_monotonic": false}}'
+
+ cls.gauge_0 = Gauge(
+ data_points=[cls.number_data_point_0, cls.number_data_point_1],
+ )
+ cls.gauge_0_str = f'{{"data_points": [{cls.number_data_point_0_str}, {cls.number_data_point_1_str}]}}'
+
+ cls.histogram_0 = Histogram(
+ data_points=[
+ cls.histogram_data_point_0,
+ cls.histogram_data_point_1,
+ ],
+ aggregation_temporality=AggregationTemporality.DELTA,
+ )
+ cls.histogram_0_str = f'{{"data_points": [{cls.histogram_data_point_0_str}, {cls.histogram_data_point_1_str}], "aggregation_temporality": 1}}'
+
+ cls.metric_0 = Metric(
+ name="metric_0",
+ description="description_0",
+ unit="unit_0",
+ data=cls.sum_0,
+ )
+ cls.metric_0_str = f'{{"name": "metric_0", "description": "description_0", "unit": "unit_0", "data": {cls.sum_0_str}}}'
+
+ cls.metric_1 = Metric(
+ name="metric_1", description=None, unit="unit_1", data=cls.gauge_0
+ )
+ cls.metric_1_str = f'{{"name": "metric_1", "description": "", "unit": "unit_1", "data": {cls.gauge_0_str}}}'
+
+ cls.metric_2 = Metric(
+ name="metric_2",
+ description="description_2",
+ unit=None,
+ data=cls.histogram_0,
+ )
+ cls.metric_2_str = f'{{"name": "metric_2", "description": "description_2", "unit": "", "data": {cls.histogram_0_str}}}'
+
+ cls.scope_metrics_0 = ScopeMetrics(
+ scope=InstrumentationScope(
+ name="name_0",
+ version="version_0",
+ schema_url="schema_url_0",
+ ),
+ metrics=[cls.metric_0, cls.metric_1, cls.metric_2],
+ schema_url="schema_url_0",
+ )
+ cls.scope_metrics_0_str = f'{{"scope": {{"name": "name_0", "version": "version_0", "schema_url": "schema_url_0"}}, "metrics": [{cls.metric_0_str}, {cls.metric_1_str}, {cls.metric_2_str}], "schema_url": "schema_url_0"}}'
+
+ cls.scope_metrics_1 = ScopeMetrics(
+ scope=InstrumentationScope(
+ name="name_1",
+ version="version_1",
+ schema_url="schema_url_1",
+ ),
+ metrics=[cls.metric_0, cls.metric_1, cls.metric_2],
+ schema_url="schema_url_1",
+ )
+ cls.scope_metrics_1_str = f'{{"scope": {{"name": "name_1", "version": "version_1", "schema_url": "schema_url_1"}}, "metrics": [{cls.metric_0_str}, {cls.metric_1_str}, {cls.metric_2_str}], "schema_url": "schema_url_1"}}'
+
+ cls.resource_metrics_0 = ResourceMetrics(
+ resource=Resource(
+ attributes=cls.attributes_0, schema_url="schema_url_0"
+ ),
+ scope_metrics=[cls.scope_metrics_0, cls.scope_metrics_1],
+ schema_url="schema_url_0",
+ )
+ cls.resource_metrics_0_str = f'{{"resource": {{"attributes": {cls.attributes_0_str}, "schema_url": "schema_url_0"}}, "scope_metrics": [{cls.scope_metrics_0_str}, {cls.scope_metrics_1_str}], "schema_url": "schema_url_0"}}'
+
+ cls.resource_metrics_1 = ResourceMetrics(
+ resource=Resource(
+ attributes=cls.attributes_1, schema_url="schema_url_1"
+ ),
+ scope_metrics=[cls.scope_metrics_0, cls.scope_metrics_1],
+ schema_url="schema_url_1",
+ )
+ cls.resource_metrics_1_str = f'{{"resource": {{"attributes": {cls.attributes_1_str}, "schema_url": "schema_url_1"}}, "scope_metrics": [{cls.scope_metrics_0_str}, {cls.scope_metrics_1_str}], "schema_url": "schema_url_1"}}'
+
+ cls.metrics_data_0 = MetricsData(
+ resource_metrics=[cls.resource_metrics_0, cls.resource_metrics_1]
+ )
+ cls.metrics_data_0_str = f'{{"resource_metrics": [{cls.resource_metrics_0_str}, {cls.resource_metrics_1_str}]}}'
+
+ def test_number_data_point(self):
+
+ self.assertEqual(
+ self.number_data_point_0.to_json(indent=None),
+ self.number_data_point_0_str,
+ )
+ self.assertEqual(
+ self.number_data_point_1.to_json(indent=None),
+ self.number_data_point_1_str,
+ )
+
+ def test_histogram_data_point(self):
+
+ self.assertEqual(
+ self.histogram_data_point_0.to_json(indent=None),
+ self.histogram_data_point_0_str,
+ )
+ self.assertEqual(
+ self.histogram_data_point_1.to_json(indent=None),
+ self.histogram_data_point_1_str,
+ )
+
+ def test_sum(self):
+
+ self.assertEqual(self.sum_0.to_json(indent=None), self.sum_0_str)
+
+ def test_gauge(self):
+
+ self.maxDiff = None
+
+ self.assertEqual(self.gauge_0.to_json(indent=None), self.gauge_0_str)
+
+ def test_histogram(self):
+
+ self.assertEqual(
+ self.histogram_0.to_json(indent=None), self.histogram_0_str
+ )
+
+ def test_metric(self):
+
+ self.assertEqual(self.metric_0.to_json(indent=None), self.metric_0_str)
+
+ self.assertEqual(self.metric_1.to_json(indent=None), self.metric_1_str)
+
+ self.assertEqual(self.metric_2.to_json(indent=None), self.metric_2_str)
+
+ def test_scope_metrics(self):
+
+ self.assertEqual(
+ self.scope_metrics_0.to_json(indent=None), self.scope_metrics_0_str
+ )
+ self.assertEqual(
+ self.scope_metrics_1.to_json(indent=None), self.scope_metrics_1_str
+ )
+
+ def test_resource_metrics(self):
+
+ self.assertEqual(
+ self.resource_metrics_0.to_json(indent=None),
+ self.resource_metrics_0_str,
+ )
+ self.assertEqual(
+ self.resource_metrics_1.to_json(indent=None),
+ self.resource_metrics_1_str,
+ )
+
+ def test_metrics_data(self):
+
+ self.assertEqual(
+ self.metrics_data_0.to_json(indent=None), self.metrics_data_0_str
+ )
diff --git a/opentelemetry-sdk/tests/metrics/test_view.py b/opentelemetry-sdk/tests/metrics/test_view.py
new file mode 100644
index 0000000000..00376a0068
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_view.py
@@ -0,0 +1,125 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+from unittest.mock import Mock
+
+from opentelemetry.sdk.metrics.view import View
+
+
+class TestView(TestCase):
+ def test_required_instrument_criteria(self):
+
+ with self.assertRaises(Exception):
+ View()
+
+ def test_instrument_type(self):
+
+ self.assertTrue(View(instrument_type=Mock)._match(Mock()))
+
+ def test_instrument_name(self):
+
+ mock_instrument = Mock()
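+        # "name" is special-cased by Mock's constructor, so it has to be set
+        # via configure_mock instead.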
+ mock_instrument.configure_mock(**{"name": "instrument_name"})
+
+ self.assertTrue(
+ View(instrument_name="instrument_name")._match(mock_instrument)
+ )
+
+ def test_instrument_unit(self):
+
+ mock_instrument = Mock()
+ mock_instrument.configure_mock(**{"unit": "instrument_unit"})
+
+ self.assertTrue(
+ View(instrument_unit="instrument_unit")._match(mock_instrument)
+ )
+
+ def test_meter_name(self):
+
+ self.assertTrue(
+ View(meter_name="meter_name")._match(
+ Mock(**{"instrumentation_scope.name": "meter_name"})
+ )
+ )
+
+ def test_meter_version(self):
+
+ self.assertTrue(
+ View(meter_version="meter_version")._match(
+ Mock(**{"instrumentation_scope.version": "meter_version"})
+ )
+ )
+
+ def test_meter_schema_url(self):
+
+ self.assertTrue(
+ View(meter_schema_url="meter_schema_url")._match(
+ Mock(
+ **{"instrumentation_scope.schema_url": "meter_schema_url"}
+ )
+ )
+ )
+ self.assertFalse(
+ View(meter_schema_url="meter_schema_url")._match(
+ Mock(
+ **{
+ "instrumentation_scope.schema_url": "meter_schema_urlabc"
+ }
+ )
+ )
+ )
+
+ def test_additive_criteria(self):
+
+ view = View(
+ meter_name="meter_name",
+ meter_version="meter_version",
+ meter_schema_url="meter_schema_url",
+ )
+
+ self.assertTrue(
+ view._match(
+ Mock(
+ **{
+ "instrumentation_scope.name": "meter_name",
+ "instrumentation_scope.version": "meter_version",
+ "instrumentation_scope.schema_url": "meter_schema_url",
+ }
+ )
+ )
+ )
+ self.assertFalse(
+ view._match(
+ Mock(
+ **{
+ "instrumentation_scope.name": "meter_name",
+ "instrumentation_scope.version": "meter_version",
+ "instrumentation_scope.schema_url": "meter_schema_vrl",
+ }
+ )
+ )
+ )
+
+ def test_view_name(self):
+
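+        # A view with an explicit name must select at most one instrument,
+        # so a wildcard instrument_name is rejected.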
+ with self.assertRaises(Exception):
+ View(name="name", instrument_name="instrument_name*")
diff --git a/opentelemetry-sdk/tests/metrics/test_view_instrument_match.py b/opentelemetry-sdk/tests/metrics/test_view_instrument_match.py
new file mode 100644
index 0000000000..c22c2d7a96
--- /dev/null
+++ b/opentelemetry-sdk/tests/metrics/test_view_instrument_match.py
@@ -0,0 +1,320 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest import TestCase
+from unittest.mock import MagicMock, Mock
+
+from opentelemetry.sdk.metrics._internal._view_instrument_match import (
+ _ViewInstrumentMatch,
+)
+from opentelemetry.sdk.metrics._internal.aggregation import (
+ _DropAggregation,
+ _LastValueAggregation,
+)
+from opentelemetry.sdk.metrics._internal.instrument import _Counter
+from opentelemetry.sdk.metrics._internal.measurement import Measurement
+from opentelemetry.sdk.metrics._internal.sdk_configuration import (
+ SdkConfiguration,
+)
+from opentelemetry.sdk.metrics.export import AggregationTemporality
+from opentelemetry.sdk.metrics.view import (
+ DefaultAggregation,
+ DropAggregation,
+ LastValueAggregation,
+ View,
+)
+
+
+class Test_ViewInstrumentMatch(TestCase):
+ @classmethod
+ def setUpClass(cls):
+
+ cls.mock_aggregation_factory = Mock()
+ cls.mock_created_aggregation = (
+ cls.mock_aggregation_factory._create_aggregation()
+ )
+ cls.mock_resource = Mock()
+ cls.mock_instrumentation_scope = Mock()
+ cls.sdk_configuration = SdkConfiguration(
+ resource=cls.mock_resource,
+ metric_readers=[],
+ views=[],
+ )
+
+ def test_consume_measurement(self):
+ instrument1 = Mock(name="instrument1")
+ instrument1.instrumentation_scope = self.mock_instrumentation_scope
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=self.mock_aggregation_factory,
+ attribute_keys={"a", "c"},
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{"__getitem__.return_value": DefaultAggregation()}
+ ),
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=instrument1,
+ attributes={"c": "d", "f": "g"},
+ )
+ )
+ self.assertEqual(
+ view_instrument_match._attributes_aggregation,
+ {frozenset([("c", "d")]): self.mock_created_aggregation},
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=instrument1,
+ attributes={"w": "x", "y": "z"},
+ )
+ )
+
+ self.assertEqual(
+ view_instrument_match._attributes_aggregation,
+ {
+ frozenset(): self.mock_created_aggregation,
+ frozenset([("c", "d")]): self.mock_created_aggregation,
+ },
+ )
+
+        # attribute_keys=None (the default) will keep all attributes
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=self.mock_aggregation_factory,
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{"__getitem__.return_value": DefaultAggregation()}
+ ),
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=instrument1,
+ attributes={"c": "d", "f": "g"},
+ )
+ )
+ self.assertEqual(
+ view_instrument_match._attributes_aggregation,
+ {
+ frozenset(
+ [("c", "d"), ("f", "g")]
+ ): self.mock_created_aggregation
+ },
+ )
+
+        # An empty attribute_keys set will drop all attributes and aggregate
+        # everything together
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=self.mock_aggregation_factory,
+ attribute_keys={},
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{"__getitem__.return_value": DefaultAggregation()}
+ ),
+ )
+ view_instrument_match.consume_measurement(
+ Measurement(value=0, instrument=instrument1, attributes=None)
+ )
+ self.assertEqual(
+ view_instrument_match._attributes_aggregation,
+ {frozenset({}): self.mock_created_aggregation},
+ )
+
+ # Test that a drop aggregation is handled in the same way as any
+ # other aggregation.
+ drop_aggregation = DropAggregation()
+
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=drop_aggregation,
+                attribute_keys=set(),
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{"__getitem__.return_value": DefaultAggregation()}
+ ),
+ )
+ view_instrument_match.consume_measurement(
+ Measurement(value=0, instrument=instrument1, attributes=None)
+ )
+ self.assertIsInstance(
+            view_instrument_match._attributes_aggregation[frozenset()],
+ _DropAggregation,
+ )
+
+ def test_collect(self):
+ instrument1 = _Counter(
+ "instrument1",
+ Mock(),
+ Mock(),
+ description="description",
+ unit="unit",
+ )
+ instrument1.instrumentation_scope = self.mock_instrumentation_scope
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=DefaultAggregation(),
+ attribute_keys={"a", "c"},
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{"__getitem__.return_value": DefaultAggregation()}
+ ),
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"c": "d", "f": "g"},
+ )
+ )
+
+ number_data_points = view_instrument_match.collect(
+ AggregationTemporality.CUMULATIVE, 0
+ )
+ number_data_points = list(number_data_points)
+ self.assertEqual(len(number_data_points), 1)
+
+ number_data_point = number_data_points[0]
+
+ self.assertEqual(number_data_point.attributes, {"c": "d"})
+ self.assertEqual(number_data_point.value, 0)
+
+ def test_data_point_check(self):
+ instrument1 = _Counter(
+ "instrument1",
+ Mock(),
+ Mock(),
+ description="description",
+ unit="unit",
+ )
+ instrument1.instrumentation_scope = self.mock_instrumentation_scope
+
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=DefaultAggregation(),
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation=MagicMock(
+ **{
+ "__getitem__.return_value": Mock(
+ **{
+ "_create_aggregation.return_value": Mock(
+ **{
+ "collect.side_effect": [
+ Mock(),
+ Mock(),
+ None,
+ Mock(),
+ ]
+ }
+ )
+ }
+ )
+ }
+ ),
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"c": "d", "f": "g"},
+ )
+ )
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"h": "i", "j": "k"},
+ )
+ )
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"l": "m", "n": "o"},
+ )
+ )
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"p": "q", "r": "s"},
+ )
+ )
+
+ result = view_instrument_match.collect(
+ AggregationTemporality.CUMULATIVE, 0
+ )
+
+ self.assertEqual(len(list(result)), 3)
+
+ def test_setting_aggregation(self):
+ instrument1 = _Counter(
+ name="instrument1",
+ instrumentation_scope=Mock(),
+ measurement_consumer=Mock(),
+ description="description",
+ unit="unit",
+ )
+ instrument1.instrumentation_scope = self.mock_instrumentation_scope
+ view_instrument_match = _ViewInstrumentMatch(
+ view=View(
+ instrument_name="instrument1",
+ name="name",
+ aggregation=DefaultAggregation(),
+ attribute_keys={"a", "c"},
+ ),
+ instrument=instrument1,
+ instrument_class_aggregation={_Counter: LastValueAggregation()},
+ )
+
+ view_instrument_match.consume_measurement(
+ Measurement(
+ value=0,
+ instrument=Mock(name="instrument1"),
+ attributes={"c": "d", "f": "g"},
+ )
+ )
+
+ self.assertIsInstance(
+ view_instrument_match._attributes_aggregation[
+ frozenset({("c", "d")})
+ ],
+ _LastValueAggregation,
+ )
diff --git a/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics.py b/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics.py
new file mode 100644
index 0000000000..81fb0b6e1d
--- /dev/null
+++ b/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics.py
@@ -0,0 +1,77 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import pytest
+
+from opentelemetry.sdk.metrics import Counter, MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ InMemoryMetricReader,
+)
+
+reader_cumulative = InMemoryMetricReader()
+reader_delta = InMemoryMetricReader(
+ preferred_temporality={
+ Counter: AggregationTemporality.DELTA,
+ },
+)
+provider_reader_cumulative = MeterProvider(
+ metric_readers=[reader_cumulative],
+)
+provider_reader_delta = MeterProvider(metric_readers=[reader_delta])
+meter_cumulative = provider_reader_cumulative.get_meter("sdk_meter_provider")
+meter_delta = provider_reader_delta.get_meter("sdk_meter_provider_delta")
+counter_cumulative = meter_cumulative.create_counter("test_counter")
+counter_delta = meter_delta.create_counter("test_counter2")
+udcounter = meter_cumulative.create_up_down_counter("test_udcounter")
+
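+# The two readers above differ only in temporality: the cumulative reader
+# reports running totals per attribute set, while the delta reader reports
+# the change since the previous collection. For a quick sanity check of this
+# setup (not part of the benchmark), InMemoryMetricReader exposes
+# get_metrics_data():
+#
+#     counter_cumulative.add(1, {"Key0": "Value0"})
+#     metrics_data = reader_cumulative.get_metrics_data()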
+
+@pytest.mark.parametrize(
+ ("num_labels", "temporality"),
+ [
+ (0, "delta"),
+ (1, "delta"),
+ (3, "delta"),
+ (5, "delta"),
+ (10, "delta"),
+ (0, "cumulative"),
+ (1, "cumulative"),
+ (3, "cumulative"),
+ (5, "cumulative"),
+ (10, "cumulative"),
+ ],
+)
+def test_counter_add(benchmark, num_labels, temporality):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+ def benchmark_counter_add():
+ if temporality == "cumulative":
+ counter_cumulative.add(1, labels)
+ else:
+ counter_delta.add(1, labels)
+
+ benchmark(benchmark_counter_add)
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 10])
+def test_up_down_counter_add(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+ def benchmark_up_down_counter_add():
+ udcounter.add(1, labels)
+
+ benchmark(benchmark_up_down_counter_add)
diff --git a/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics_histogram.py b/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics_histogram.py
new file mode 100644
index 0000000000..2f9c440541
--- /dev/null
+++ b/opentelemetry-sdk/tests/performance/benchmarks/metrics/test_benchmark_metrics_histogram.py
@@ -0,0 +1,126 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import random
+
+import pytest
+
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import InMemoryMetricReader
+from opentelemetry.sdk.metrics.view import (
+ ExplicitBucketHistogramAggregation,
+ View,
+)
+
+MAX_BOUND_VALUE = 10000
+
+
+def _generate_bounds(bound_count):
+ bounds = []
+ for i in range(bound_count):
+ bounds.append(i * MAX_BOUND_VALUE / bound_count)
+ return bounds
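+
+
+# For example, _generate_bounds(10) yields
+# [0.0, 1000.0, 2000.0, ..., 9000.0]: evenly spaced bucket boundaries
+# across [0, MAX_BOUND_VALUE).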
+
+
+hist_view_10 = View(
+ instrument_name="test_histogram_10_bound",
+ aggregation=ExplicitBucketHistogramAggregation(_generate_bounds(10)),
+)
+hist_view_49 = View(
+ instrument_name="test_histogram_49_bound",
+ aggregation=ExplicitBucketHistogramAggregation(_generate_bounds(49)),
+)
+hist_view_50 = View(
+ instrument_name="test_histogram_50_bound",
+ aggregation=ExplicitBucketHistogramAggregation(_generate_bounds(50)),
+)
+hist_view_1000 = View(
+ instrument_name="test_histogram_1000_bound",
+ aggregation=ExplicitBucketHistogramAggregation(_generate_bounds(1000)),
+)
+reader = InMemoryMetricReader()
+provider = MeterProvider(
+ metric_readers=[reader],
+ views=[
+ hist_view_10,
+ hist_view_49,
+ hist_view_50,
+ hist_view_1000,
+ ],
+)
+meter = provider.get_meter("sdk_meter_provider")
+hist = meter.create_histogram("test_histogram_default")
+hist10 = meter.create_histogram("test_histogram_10_bound")
+hist49 = meter.create_histogram("test_histogram_49_bound")
+hist50 = meter.create_histogram("test_histogram_50_bound")
+hist1000 = meter.create_histogram("test_histogram_1000_bound")
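+
+# "test_histogram_default" matches none of the Views above, so hist keeps
+# the SDK's default bucket boundaries; the remaining instruments are matched
+# to their Views by name, making bucket count the only variable across the
+# benchmarks below.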
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 7])
+def test_histogram_record(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+    def benchmark_histogram_record():
+        hist.record(random.random() * MAX_BOUND_VALUE, labels)
+
+ benchmark(benchmark_histogram_record)
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 7])
+def test_histogram_record_10(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+    def benchmark_histogram_record_10():
+        hist10.record(random.random() * MAX_BOUND_VALUE, labels)
+
+ benchmark(benchmark_histogram_record_10)
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 7])
+def test_histogram_record_49(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+    def benchmark_histogram_record_49():
+        hist49.record(random.random() * MAX_BOUND_VALUE, labels)
+
+ benchmark(benchmark_histogram_record_49)
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 7])
+def test_histogram_record_50(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+    def benchmark_histogram_record_50():
+        hist50.record(random.random() * MAX_BOUND_VALUE, labels)
+
+ benchmark(benchmark_histogram_record_50)
+
+
+@pytest.mark.parametrize("num_labels", [0, 1, 3, 5, 7])
+def test_histogram_record_1000(benchmark, num_labels):
+    labels = {f"Key{i}": f"Value{i}" for i in range(num_labels)}
+
+    def benchmark_histogram_record_1000():
+        hist1000.record(random.random() * MAX_BOUND_VALUE, labels)
+
+ benchmark(benchmark_histogram_record_1000)
diff --git a/opentelemetry-sdk/tests/performance/benchmarks/trace/test_benchmark_trace.py b/opentelemetry-sdk/tests/performance/benchmarks/trace/test_benchmark_trace.py
new file mode 100644
index 0000000000..a407a341f4
--- /dev/null
+++ b/opentelemetry-sdk/tests/performance/benchmarks/trace/test_benchmark_trace.py
@@ -0,0 +1,51 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import opentelemetry.sdk.trace as trace
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import sampling
+
+tracer = trace.TracerProvider(
+ sampler=sampling.DEFAULT_ON,
+ resource=Resource(
+ {
+ "service.name": "A123456789",
+ "service.version": "1.34567890",
+ "service.instance.id": "123ab456-a123-12ab-12ab-12340a1abc12",
+ }
+ ),
+).get_tracer("sdk_tracer_provider")
+
+
+def test_simple_start_span(benchmark):
+    def benchmark_start_span():
+        span = tracer.start_span(
+            "benchmarkedSpan",
+            attributes={"long.attribute": -10000000001000000000},
+        )
+        span.add_event("benchmarkEvent")
+        span.end()
+
+    benchmark(benchmark_start_span)
+
+
+def test_simple_start_as_current_span(benchmark):
+ def benchmark_start_as_current_span():
+ with tracer.start_as_current_span(
+ "benchmarkedSpan",
+ attributes={"long.attribute": -10000000001000000000},
+ ) as span:
+ span.add_event("benchmarkEvent")
+
+ benchmark(benchmark_start_as_current_span)
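+
+# For reference, start_as_current_span is roughly start_span plus context
+# management -- a sketch, not the SDK's actual implementation:
+#
+#     from opentelemetry import context
+#     from opentelemetry.trace import set_span_in_context
+#
+#     span = tracer.start_span(name)
+#     token = context.attach(set_span_in_context(span))
+#     try:
+#         ...  # body of the with-block
+#     finally:
+#         span.end()
+#         context.detach(token)
+#
+# Comparing the two benchmarks therefore isolates the cost of attaching and
+# detaching the context.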
diff --git a/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_batch_export.py b/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_batch_export.py
new file mode 100644
index 0000000000..3e9a201c96
--- /dev/null
+++ b/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_batch_export.py
@@ -0,0 +1,49 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider, sampling
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+TEST_DURATION_SECONDS = 15
+SPANS_PER_SECOND = 10_000
+
+
+class MockTraceServiceStub:
+ def __init__(self, channel):
+ self.Export = lambda *args, **kwargs: None
+
+
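+# Replacing the exporter's gRPC stub class with a no-op keeps the full
+# BatchSpanProcessor pipeline in place while removing network and transport
+# cost, so the profile measures SDK-side overhead only.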
+old_stub = OTLPSpanExporter._stub
+OTLPSpanExporter._stub = MockTraceServiceStub
+
+simple_span_processor = BatchSpanProcessor(OTLPSpanExporter())
+tracer = TracerProvider(
+ active_span_processor=simple_span_processor,
+ sampler=sampling.DEFAULT_ON,
+).get_tracer("resource_usage_tracer")
+
+for _ in range(TEST_DURATION_SECONDS):
+    batch_start = time.time()
+    for _ in range(SPANS_PER_SECOND):
+        span = tracer.start_span("benchmarkedSpan")
+        span.end()
+    # Pace each one-second batch individually; a single global start time
+    # would exceed 1.0 after the first iteration and the script would
+    # never sleep again.
+    elapsed = time.time() - batch_start
+    time.sleep(1.0 - elapsed if elapsed < 1.0 else 0)
+
+OTLPSpanExporter._stub = old_stub
diff --git a/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_simple_export.py b/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_simple_export.py
new file mode 100644
index 0000000000..bc27fb519d
--- /dev/null
+++ b/opentelemetry-sdk/tests/performance/resource-usage/trace/profile_resource_usage_simple_export.py
@@ -0,0 +1,49 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider, sampling
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+
+TEST_DURATION_SECONDS = 15
+SPANS_PER_SECOND = 10_000
+
+
+class MockTraceServiceStub:
+ def __init__(self, channel):
+ self.Export = lambda *args, **kwargs: None
+
+
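+# As in the batch-export profile, swap in a no-op stub so the profile
+# measures SDK-side overhead only -- here with a SimpleSpanProcessor that
+# exports synchronously on every span end.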
+old_stub = OTLPSpanExporter._stub
+OTLPSpanExporter._stub = MockTraceServiceStub
+
+simple_span_processor = SimpleSpanProcessor(OTLPSpanExporter())
+tracer = TracerProvider(
+ active_span_processor=simple_span_processor,
+ sampler=sampling.DEFAULT_ON,
+).get_tracer("resource_usage_tracer")
+
+for _ in range(TEST_DURATION_SECONDS):
+    batch_start = time.time()
+    for _ in range(SPANS_PER_SECOND):
+        span = tracer.start_span("benchmarkedSpan")
+        span.end()
+    # Pace each one-second batch individually; a single global start time
+    # would exceed 1.0 after the first iteration and the script would
+    # never sleep again.
+    elapsed = time.time() - batch_start
+    time.sleep(1.0 - elapsed if elapsed < 1.0 else 0)
+
+OTLPSpanExporter._stub = old_stub
diff --git a/opentelemetry-sdk/tests/resources/__init__.py b/opentelemetry-sdk/tests/resources/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-sdk/tests/resources/test_resources.py b/opentelemetry-sdk/tests/resources/test_resources.py
new file mode 100644
index 0000000000..da3f946961
--- /dev/null
+++ b/opentelemetry-sdk/tests/resources/test_resources.py
@@ -0,0 +1,725 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+import unittest
+import uuid
+from concurrent.futures import TimeoutError
+from logging import ERROR, WARNING
+from os import environ
+from unittest.mock import Mock, patch
+from urllib import parse
+
+from opentelemetry.sdk.environment_variables import (
+ OTEL_EXPERIMENTAL_RESOURCE_DETECTORS,
+)
+from opentelemetry.sdk.resources import (
+ _DEFAULT_RESOURCE,
+ _EMPTY_RESOURCE,
+ _OPENTELEMETRY_SDK_VERSION,
+ OTEL_RESOURCE_ATTRIBUTES,
+ OTEL_SERVICE_NAME,
+ PROCESS_COMMAND,
+ PROCESS_COMMAND_ARGS,
+ PROCESS_COMMAND_LINE,
+ PROCESS_EXECUTABLE_NAME,
+ PROCESS_EXECUTABLE_PATH,
+ PROCESS_OWNER,
+ PROCESS_PARENT_PID,
+ PROCESS_PID,
+ PROCESS_RUNTIME_DESCRIPTION,
+ PROCESS_RUNTIME_NAME,
+ PROCESS_RUNTIME_VERSION,
+ SERVICE_NAME,
+ TELEMETRY_SDK_LANGUAGE,
+ TELEMETRY_SDK_NAME,
+ TELEMETRY_SDK_VERSION,
+ OTELResourceDetector,
+ ProcessResourceDetector,
+ Resource,
+ ResourceDetector,
+ get_aggregated_resources,
+)
+
+try:
+ import psutil
+except ImportError:
+ psutil = None
+
+
+class TestResources(unittest.TestCase):
+ def setUp(self) -> None:
+ environ[OTEL_RESOURCE_ATTRIBUTES] = ""
+
+ def tearDown(self) -> None:
+ environ.pop(OTEL_RESOURCE_ATTRIBUTES)
+
+ def test_create(self):
+ attributes = {
+ "service": "ui",
+ "version": 1,
+ "has_bugs": True,
+ "cost": 112.12,
+ }
+
+ expected_attributes = {
+ "service": "ui",
+ "version": 1,
+ "has_bugs": True,
+ "cost": 112.12,
+ TELEMETRY_SDK_NAME: "opentelemetry",
+ TELEMETRY_SDK_LANGUAGE: "python",
+ TELEMETRY_SDK_VERSION: _OPENTELEMETRY_SDK_VERSION,
+ SERVICE_NAME: "unknown_service",
+ }
+
+ resource = Resource.create(attributes)
+ self.assertIsInstance(resource, Resource)
+ self.assertEqual(resource.attributes, expected_attributes)
+ self.assertEqual(resource.schema_url, "")
+
+ schema_url = "https://opentelemetry.io/schemas/1.3.0"
+
+ resource = Resource.create(attributes, schema_url)
+ self.assertIsInstance(resource, Resource)
+ self.assertEqual(resource.attributes, expected_attributes)
+ self.assertEqual(resource.schema_url, schema_url)
+
+ environ[OTEL_RESOURCE_ATTRIBUTES] = "key=value"
+ resource = Resource.create(attributes)
+ self.assertIsInstance(resource, Resource)
+ expected_with_envar = expected_attributes.copy()
+ expected_with_envar["key"] = "value"
+ self.assertEqual(resource.attributes, expected_with_envar)
+ environ[OTEL_RESOURCE_ATTRIBUTES] = ""
+
+ resource = Resource.get_empty()
+ self.assertEqual(resource, _EMPTY_RESOURCE)
+
+ resource = Resource.create(None)
+ self.assertEqual(
+ resource,
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+ self.assertEqual(resource.schema_url, "")
+
+ resource = Resource.create(None, None)
+ self.assertEqual(
+ resource,
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+ self.assertEqual(resource.schema_url, "")
+
+ resource = Resource.create({})
+ self.assertEqual(
+ resource,
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+ self.assertEqual(resource.schema_url, "")
+
+ resource = Resource.create({}, None)
+ self.assertEqual(
+ resource,
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+ self.assertEqual(resource.schema_url, "")
+
+ def test_resource_merge(self):
+ left = Resource({"service": "ui"})
+ right = Resource({"host": "service-host"})
+ self.assertEqual(
+ left.merge(right),
+ Resource({"service": "ui", "host": "service-host"}),
+ )
+ schema_urls = (
+ "https://opentelemetry.io/schemas/1.2.0",
+ "https://opentelemetry.io/schemas/1.3.0",
+ )
+
+ left = Resource.create({}, None)
+ right = Resource.create({}, None)
+ self.assertEqual(left.merge(right).schema_url, "")
+
+ left = Resource.create({}, None)
+ right = Resource.create({}, schema_urls[0])
+ self.assertEqual(left.merge(right).schema_url, schema_urls[0])
+
+ left = Resource.create({}, schema_urls[0])
+ right = Resource.create({}, None)
+ self.assertEqual(left.merge(right).schema_url, schema_urls[0])
+
+ left = Resource.create({}, schema_urls[0])
+ right = Resource.create({}, schema_urls[0])
+ self.assertEqual(left.merge(right).schema_url, schema_urls[0])
+
+ left = Resource.create({}, schema_urls[0])
+ right = Resource.create({}, schema_urls[1])
+ with self.assertLogs(level=ERROR) as log_entry:
+ self.assertEqual(left.merge(right), left)
+ self.assertIn(schema_urls[0], log_entry.output[0])
+ self.assertIn(schema_urls[1], log_entry.output[0])
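+        # Merging two resources with different non-empty schema_urls is an
+        # error: the merge is skipped, the original resource is returned,
+        # and both URLs are named in the log message.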
+
+ def test_resource_merge_empty_string(self):
+ """Verify Resource.merge behavior with the empty string.
+
+ Attributes from the source Resource take precedence, with
+ the exception of the empty string.
+
+ """
+ left = Resource({"service": "ui", "host": ""})
+ right = Resource({"host": "service-host", "service": "not-ui"})
+ self.assertEqual(
+ left.merge(right),
+ Resource({"service": "not-ui", "host": "service-host"}),
+ )
+
+ def test_immutability(self):
+ attributes = {
+ "service": "ui",
+ "version": 1,
+ "has_bugs": True,
+ "cost": 112.12,
+ }
+
+ default_attributes = {
+ TELEMETRY_SDK_NAME: "opentelemetry",
+ TELEMETRY_SDK_LANGUAGE: "python",
+ TELEMETRY_SDK_VERSION: _OPENTELEMETRY_SDK_VERSION,
+ SERVICE_NAME: "unknown_service",
+ }
+
+ attributes_copy = attributes.copy()
+ attributes_copy.update(default_attributes)
+
+ resource = Resource.create(attributes)
+ self.assertEqual(resource.attributes, attributes_copy)
+
+ with self.assertRaises(TypeError):
+ resource.attributes["has_bugs"] = False
+ self.assertEqual(resource.attributes, attributes_copy)
+
+ attributes["cost"] = 999.91
+ self.assertEqual(resource.attributes, attributes_copy)
+
+ with self.assertRaises(AttributeError):
+ resource.schema_url = "bug"
+
+ self.assertEqual(resource.schema_url, "")
+
+ def test_service_name_using_process_name(self):
+ resource = Resource.create({PROCESS_EXECUTABLE_NAME: "test"})
+ self.assertEqual(
+ resource.attributes.get(SERVICE_NAME),
+ "unknown_service:test",
+ )
+
+ def test_invalid_resource_attribute_values(self):
+ with self.assertLogs(level=WARNING):
+ resource = Resource(
+ {
+ SERVICE_NAME: "test",
+ "non-primitive-data-type": {},
+ "invalid-byte-type-attribute": (
+ b"\xd8\xe1\xb7\xeb\xa8\xe5 \xd2\xb7\xe1"
+ ),
+ "": "empty-key-value",
+ None: "null-key-value",
+ "another-non-primitive": uuid.uuid4(),
+ }
+ )
+ self.assertEqual(
+ resource.attributes,
+ {
+ SERVICE_NAME: "test",
+ },
+ )
+ self.assertEqual(len(resource.attributes), 1)
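+        # Valid attribute values are str, bool, int, float, or homogeneous
+        # sequences of those types; everything else is dropped with a
+        # warning, leaving only the service name above.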
+
+ def test_aggregated_resources_no_detectors(self):
+ aggregated_resources = get_aggregated_resources([])
+ self.assertEqual(
+ aggregated_resources,
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+
+ def test_aggregated_resources_with_default_destroying_static_resource(
+ self,
+ ):
+ static_resource = Resource({"static_key": "static_value"})
+
+ self.assertEqual(
+ get_aggregated_resources([], initial_resource=static_resource),
+ static_resource,
+ )
+
+ resource_detector = Mock(spec=ResourceDetector)
+ resource_detector.detect.return_value = Resource(
+ {"static_key": "try_to_overwrite_existing_value", "key": "value"}
+ )
+ self.assertEqual(
+ get_aggregated_resources(
+ [resource_detector], initial_resource=static_resource
+ ),
+ Resource(
+ {
+ "static_key": "try_to_overwrite_existing_value",
+ "key": "value",
+ }
+ ),
+ )
+
+ def test_aggregated_resources_multiple_detectors(self):
+ resource_detector1 = Mock(spec=ResourceDetector)
+ resource_detector1.detect.return_value = Resource({"key1": "value1"})
+ resource_detector2 = Mock(spec=ResourceDetector)
+ resource_detector2.detect.return_value = Resource(
+ {"key2": "value2", "key3": "value3"}
+ )
+ resource_detector3 = Mock(spec=ResourceDetector)
+ resource_detector3.detect.return_value = Resource(
+ {
+ "key2": "try_to_overwrite_existing_value",
+ "key3": "try_to_overwrite_existing_value",
+ "key4": "value4",
+ }
+ )
+
+ self.assertEqual(
+ get_aggregated_resources(
+ [resource_detector1, resource_detector2, resource_detector3]
+ ),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ).merge(
+ Resource(
+ {
+ "key1": "value1",
+ "key2": "try_to_overwrite_existing_value",
+ "key3": "try_to_overwrite_existing_value",
+ "key4": "value4",
+ }
+ )
+ ),
+ )
+
+ def test_aggregated_resources_different_schema_urls(self):
+ resource_detector1 = Mock(spec=ResourceDetector)
+ resource_detector1.detect.return_value = Resource(
+ {"key1": "value1"}, ""
+ )
+ resource_detector2 = Mock(spec=ResourceDetector)
+ resource_detector2.detect.return_value = Resource(
+ {"key2": "value2", "key3": "value3"}, "url1"
+ )
+ resource_detector3 = Mock(spec=ResourceDetector)
+ resource_detector3.detect.return_value = Resource(
+ {
+ "key2": "try_to_overwrite_existing_value",
+ "key3": "try_to_overwrite_existing_value",
+ "key4": "value4",
+ },
+ "url2",
+ )
+ resource_detector4 = Mock(spec=ResourceDetector)
+ resource_detector4.detect.return_value = Resource(
+ {
+ "key2": "try_to_overwrite_existing_value",
+ "key3": "try_to_overwrite_existing_value",
+ "key4": "value4",
+ },
+ "url1",
+ )
+ self.assertEqual(
+ get_aggregated_resources([resource_detector1, resource_detector2]),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ).merge(
+ Resource(
+ {"key1": "value1", "key2": "value2", "key3": "value3"},
+ "url1",
+ )
+ ),
+ )
+ with self.assertLogs(level=ERROR) as log_entry:
+ self.assertEqual(
+ get_aggregated_resources(
+ [resource_detector2, resource_detector3]
+ ),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ).merge(
+ Resource({"key2": "value2", "key3": "value3"}, "url1")
+ ),
+ )
+ self.assertIn("url1", log_entry.output[0])
+ self.assertIn("url2", log_entry.output[0])
+        with self.assertLogs(level=ERROR) as log_entry:
+ self.assertEqual(
+ get_aggregated_resources(
+ [
+ resource_detector2,
+ resource_detector3,
+ resource_detector4,
+ resource_detector1,
+ ]
+ ),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ).merge(
+ Resource(
+ {
+ "key1": "value1",
+ "key2": "try_to_overwrite_existing_value",
+ "key3": "try_to_overwrite_existing_value",
+ "key4": "value4",
+ },
+ "url1",
+ )
+ ),
+ )
+ self.assertIn("url1", log_entry.output[0])
+ self.assertIn("url2", log_entry.output[0])
+
+ def test_resource_detector_ignore_error(self):
+ resource_detector = Mock(spec=ResourceDetector)
+ resource_detector.detect.side_effect = Exception()
+ resource_detector.raise_on_error = False
+ with self.assertLogs(level=WARNING):
+ self.assertEqual(
+ get_aggregated_resources([resource_detector]),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+
+ def test_resource_detector_raise_error(self):
+ resource_detector = Mock(spec=ResourceDetector)
+ resource_detector.detect.side_effect = Exception()
+ resource_detector.raise_on_error = True
+ self.assertRaises(
+ Exception, get_aggregated_resources, [resource_detector]
+ )
+
+ @patch("opentelemetry.sdk.resources.logger")
+ def test_resource_detector_timeout(self, mock_logger):
+ resource_detector = Mock(spec=ResourceDetector)
+ resource_detector.detect.side_effect = TimeoutError()
+ resource_detector.raise_on_error = False
+ self.assertEqual(
+ get_aggregated_resources([resource_detector]),
+ _DEFAULT_RESOURCE.merge(
+ Resource({SERVICE_NAME: "unknown_service"}, "")
+ ),
+ )
+ mock_logger.warning.assert_called_with(
+ "Detector %s took longer than %s seconds, skipping",
+ resource_detector,
+ 5,
+ )
+
+ @patch.dict(
+ environ,
+ {"OTEL_RESOURCE_ATTRIBUTES": "key1=env_value1,key2=env_value2"},
+ )
+ def test_env_priority(self):
+ resource_env = Resource.create()
+ self.assertEqual(resource_env.attributes["key1"], "env_value1")
+ self.assertEqual(resource_env.attributes["key2"], "env_value2")
+
+ resource_env_override = Resource.create(
+ {"key1": "value1", "key2": "value2"}
+ )
+ self.assertEqual(resource_env_override.attributes["key1"], "value1")
+ self.assertEqual(resource_env_override.attributes["key2"], "value2")
+
+ @patch.dict(
+ environ,
+ {
+ OTEL_SERVICE_NAME: "test-srv-name",
+ OTEL_RESOURCE_ATTRIBUTES: "service.name=svc-name-from-resource",
+ },
+ )
+ def test_service_name_env(self):
+ resource = Resource.create()
+ self.assertEqual(resource.attributes["service.name"], "test-srv-name")
+
+ resource = Resource.create({"service.name": "from-code"})
+ self.assertEqual(resource.attributes["service.name"], "from-code")
+
+
+class TestOTELResourceDetector(unittest.TestCase):
+ def setUp(self) -> None:
+ environ[OTEL_RESOURCE_ATTRIBUTES] = ""
+
+ def tearDown(self) -> None:
+ environ.pop(OTEL_RESOURCE_ATTRIBUTES)
+
+ def test_empty(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = ""
+ self.assertEqual(detector.detect(), Resource.get_empty())
+
+ def test_one(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = "k=v"
+ self.assertEqual(detector.detect(), Resource({"k": "v"}))
+
+ def test_one_with_whitespace(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = " k = v "
+ self.assertEqual(detector.detect(), Resource({"k": "v"}))
+
+ def test_multiple(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = "k=v,k2=v2"
+ self.assertEqual(detector.detect(), Resource({"k": "v", "k2": "v2"}))
+
+ def test_multiple_with_whitespace(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = " k = v , k2 = v2 "
+ self.assertEqual(detector.detect(), Resource({"k": "v", "k2": "v2"}))
+
+ def test_invalid_key_value_pairs(self):
+ detector = OTELResourceDetector()
+ environ[OTEL_RESOURCE_ATTRIBUTES] = "k=v,k2=v2,invalid,,foo=bar=baz,"
+ with self.assertLogs(level=WARNING):
+ self.assertEqual(
+ detector.detect(),
+ Resource({"k": "v", "k2": "v2", "foo": "bar=baz"}),
+ )
+
+ def test_multiple_with_url_decode(self):
+ detector = OTELResourceDetector()
+ environ[
+ OTEL_RESOURCE_ATTRIBUTES
+ ] = "key=value%20test%0A, key2=value+%202"
+ self.assertEqual(
+ detector.detect(),
+ Resource({"key": "value test\n", "key2": "value+ 2"}),
+ )
+ self.assertEqual(
+ detector.detect(),
+ Resource(
+ {
+ "key": parse.unquote("value%20test%0A"),
+ "key2": parse.unquote("value+%202"),
+ }
+ ),
+ )
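+        # Values are percent-decoded with urllib.parse.unquote, which
+        # decodes %XX escapes but, unlike unquote_plus, leaves "+" intact --
+        # hence "value+ 2" above.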
+
+ @patch.dict(
+ environ,
+ {OTEL_SERVICE_NAME: "test-srv-name"},
+ )
+ def test_service_name_env(self):
+ detector = OTELResourceDetector()
+ self.assertEqual(
+ detector.detect(),
+ Resource({"service.name": "test-srv-name"}),
+ )
+
+ @patch.dict(
+ environ,
+ {
+ OTEL_SERVICE_NAME: "from-service-name",
+ OTEL_RESOURCE_ATTRIBUTES: "service.name=from-resource-attrs",
+ },
+ )
+ def test_service_name_env_precedence(self):
+ detector = OTELResourceDetector()
+ self.assertEqual(
+ detector.detect(),
+ Resource({"service.name": "from-service-name"}),
+ )
+
+ @patch(
+ "sys.argv",
+ ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"],
+ )
+ def test_process_detector(self):
+ initial_resource = Resource({"foo": "bar"})
+ aggregated_resource = get_aggregated_resources(
+ [ProcessResourceDetector()], initial_resource
+ )
+
+ self.assertIn(
+ PROCESS_RUNTIME_NAME,
+ aggregated_resource.attributes.keys(),
+ )
+ self.assertIn(
+ PROCESS_RUNTIME_DESCRIPTION,
+ aggregated_resource.attributes.keys(),
+ )
+ self.assertIn(
+ PROCESS_RUNTIME_VERSION,
+ aggregated_resource.attributes.keys(),
+ )
+
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_PID], os.getpid()
+ )
+ if hasattr(os, "getppid"):
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_PARENT_PID],
+ os.getppid(),
+ )
+
+ if psutil is not None:
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_OWNER],
+ psutil.Process().username(),
+ )
+
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_EXECUTABLE_NAME],
+ sys.executable,
+ )
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_EXECUTABLE_PATH],
+ os.path.dirname(sys.executable),
+ )
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_COMMAND], sys.argv[0]
+ )
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_COMMAND_LINE],
+ " ".join(sys.argv),
+ )
+ self.assertEqual(
+ aggregated_resource.attributes[PROCESS_COMMAND_ARGS],
+ tuple(sys.argv[1:]),
+ )
+
+ def test_resource_detector_entry_points_default(self):
+        resource = Resource.create()
+
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.language"], "python"
+ )
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.name"], "opentelemetry"
+ )
+ self.assertEqual(
+ resource.attributes["service.name"], "unknown_service"
+ )
+ self.assertEqual(resource.schema_url, "")
+
+        resource = Resource.create({"a": "b", "c": "d"})
+
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.language"], "python"
+ )
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.name"], "opentelemetry"
+ )
+ self.assertEqual(
+ resource.attributes["service.name"], "unknown_service"
+ )
+ self.assertEqual(resource.attributes["a"], "b")
+ self.assertEqual(resource.attributes["c"], "d")
+ self.assertEqual(resource.schema_url, "")
+
+ @patch.dict(
+ environ, {OTEL_EXPERIMENTAL_RESOURCE_DETECTORS: "mock"}, clear=True
+ )
+ @patch(
+ "opentelemetry.sdk.resources.entry_points",
+ Mock(
+ return_value=[
+ Mock(
+ **{
+ "load.return_value": Mock(
+ return_value=Mock(
+ **{"detect.return_value": Resource({"a": "b"})}
+ )
+ )
+ }
+ )
+ ]
+ ),
+ )
+ def test_resource_detector_entry_points_non_default(self):
+        resource = Resource.create()
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.language"], "python"
+ )
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.name"], "opentelemetry"
+ )
+ self.assertEqual(
+ resource.attributes["service.name"], "unknown_service"
+ )
+ self.assertEqual(resource.attributes["a"], "b")
+ self.assertEqual(resource.schema_url, "")
+
+ def test_resource_detector_entry_points_otel(self):
+ """
+ Test that OTELResourceDetector-resource-generated attributes are
+ always being added.
+ """
+ with patch.dict(
+ environ, {OTEL_RESOURCE_ATTRIBUTES: "a=b,c=d"}, clear=True
+ ):
+            resource = Resource.create()
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.language"], "python"
+ )
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.name"], "opentelemetry"
+ )
+ self.assertEqual(
+ resource.attributes["service.name"], "unknown_service"
+ )
+ self.assertEqual(resource.attributes["a"], "b")
+ self.assertEqual(resource.attributes["c"], "d")
+ self.assertEqual(resource.schema_url, "")
+
+ with patch.dict(
+ environ,
+ {
+ OTEL_RESOURCE_ATTRIBUTES: "a=b,c=d",
+ OTEL_EXPERIMENTAL_RESOURCE_DETECTORS: "process",
+ },
+ clear=True,
+ ):
+            resource = Resource.create()
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.language"], "python"
+ )
+ self.assertEqual(
+ resource.attributes["telemetry.sdk.name"], "opentelemetry"
+ )
+ self.assertEqual(
+ resource.attributes["service.name"],
+ "unknown_service:"
+ + resource.attributes["process.executable.name"],
+ )
+ self.assertEqual(resource.attributes["a"], "b")
+ self.assertEqual(resource.attributes["c"], "d")
+ self.assertIn(PROCESS_RUNTIME_NAME, resource.attributes.keys())
+ self.assertIn(
+ PROCESS_RUNTIME_DESCRIPTION, resource.attributes.keys()
+ )
+ self.assertIn(PROCESS_RUNTIME_VERSION, resource.attributes.keys())
+ self.assertEqual(resource.schema_url, "")
diff --git a/opentelemetry-sdk/tests/test_configurator.py b/opentelemetry-sdk/tests/test_configurator.py
new file mode 100644
index 0000000000..b825ae931c
--- /dev/null
+++ b/opentelemetry-sdk/tests/test_configurator.py
@@ -0,0 +1,912 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+# pylint: skip-file
+
+from logging import WARNING, getLogger
+from os import environ
+from typing import Dict, Iterable, Optional, Sequence
+from unittest import TestCase
+from unittest.mock import Mock, patch
+
+from pytest import raises
+
+from opentelemetry import trace
+from opentelemetry.context import Context
+from opentelemetry.environment_variables import OTEL_PYTHON_ID_GENERATOR
+from opentelemetry.sdk._configuration import (
+ _EXPORTER_OTLP,
+ _EXPORTER_OTLP_PROTO_GRPC,
+ _EXPORTER_OTLP_PROTO_HTTP,
+ _get_exporter_names,
+ _get_id_generator,
+ _get_sampler,
+ _import_config_components,
+ _import_exporters,
+ _import_id_generator,
+ _import_sampler,
+ _init_logging,
+ _init_metrics,
+ _init_tracing,
+ _initialize_components,
+)
+from opentelemetry.sdk._logs import LoggingHandler
+from opentelemetry.sdk._logs.export import ConsoleLogExporter
+from opentelemetry.sdk.environment_variables import (
+ OTEL_TRACES_SAMPLER,
+ OTEL_TRACES_SAMPLER_ARG,
+)
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ ConsoleMetricExporter,
+ Metric,
+ MetricExporter,
+ MetricReader,
+)
+from opentelemetry.sdk.metrics.view import Aggregation
+from opentelemetry.sdk.resources import SERVICE_NAME, Resource
+from opentelemetry.sdk.trace.export import ConsoleSpanExporter
+from opentelemetry.sdk.trace.id_generator import IdGenerator, RandomIdGenerator
+from opentelemetry.sdk.trace.sampling import (
+ ALWAYS_ON,
+ Decision,
+ ParentBased,
+ Sampler,
+ SamplingResult,
+ TraceIdRatioBased,
+)
+from opentelemetry.trace import Link, SpanKind
+from opentelemetry.trace.span import TraceState
+from opentelemetry.util.types import Attributes
+
+
+class Provider:
+ def __init__(self, resource=None, sampler=None, id_generator=None):
+ self.sampler = sampler
+ self.id_generator = id_generator
+ self.processor = None
+ self.resource = resource or Resource.create({})
+
+ def add_span_processor(self, processor):
+ self.processor = processor
+
+
+class DummyLoggerProvider:
+ def __init__(self, resource=None):
+ self.resource = resource
+ self.processor = DummyLogRecordProcessor(DummyOTLPLogExporter())
+
+ def add_log_record_processor(self, processor):
+ self.processor = processor
+
+ def get_logger(self, name, *args, **kwargs):
+ return DummyLogger(name, self.resource, self.processor)
+
+ def force_flush(self, *args, **kwargs):
+ pass
+
+
+class DummyMeterProvider(MeterProvider):
+ pass
+
+
+class DummyLogger:
+ def __init__(self, name, resource, processor):
+ self.name = name
+ self.resource = resource
+ self.processor = processor
+
+ def emit(self, record):
+ self.processor.emit(record)
+
+
+class DummyLogRecordProcessor:
+ def __init__(self, exporter):
+ self.exporter = exporter
+
+ def emit(self, record):
+ self.exporter.export([record])
+
+ def force_flush(self, time):
+ pass
+
+ def shutdown(self):
+ pass
+
+
+class Processor:
+ def __init__(self, exporter):
+ self.exporter = exporter
+
+
+class DummyMetricReader(MetricReader):
+ def __init__(
+ self,
+ exporter: MetricExporter,
+ preferred_temporality: Dict[type, AggregationTemporality] = None,
+ preferred_aggregation: Dict[type, Aggregation] = None,
+ export_interval_millis: Optional[float] = None,
+ export_timeout_millis: Optional[float] = None,
+ ) -> None:
+ super().__init__(
+ preferred_temporality=preferred_temporality,
+ preferred_aggregation=preferred_aggregation,
+ )
+ self.exporter = exporter
+
+ def _receive_metrics(
+ self,
+ metrics: Iterable[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ self.exporter.export(None)
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ return True
+
+
+# MetricReader that can be configured as a pull exporter
+class DummyMetricReaderPullExporter(MetricReader):
+ def _receive_metrics(
+ self,
+ metrics: Iterable[Metric],
+ timeout_millis: float = 10_000,
+ **kwargs,
+ ) -> None:
+ pass
+
+ def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
+ return True
+
+
+class DummyOTLPMetricExporter:
+ def __init__(self, *args, **kwargs):
+ self.export_called = False
+
+ def export(self, batch):
+ self.export_called = True
+
+ def shutdown(self):
+ pass
+
+
+class Exporter:
+ def __init__(self):
+ tracer_provider = trace.get_tracer_provider()
+ self.service_name = (
+ tracer_provider.resource.attributes[SERVICE_NAME]
+ if getattr(tracer_provider, "resource", None)
+ else Resource.create().attributes.get(SERVICE_NAME)
+ )
+
+ def shutdown(self):
+ pass
+
+
+class OTLPSpanExporter:
+ pass
+
+
+class DummyOTLPLogExporter:
+ def __init__(self, *args, **kwargs):
+ self.export_called = False
+
+ def export(self, batch):
+ self.export_called = True
+
+ def shutdown(self):
+ pass
+
+
+class CustomSampler(Sampler):
+ def __init__(self) -> None:
+ pass
+
+ def get_description(self) -> str:
+ return "CustomSampler"
+
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+ kind: SpanKind = None,
+ attributes: Attributes = None,
+ links: Sequence[Link] = None,
+ trace_state: TraceState = None,
+ ) -> "SamplingResult":
+ return SamplingResult(
+ Decision.RECORD_AND_SAMPLE,
+ None,
+ None,
+ )
+
+
+class CustomRatioSampler(TraceIdRatioBased):
+ def __init__(self, ratio):
+ if not isinstance(ratio, float):
+ raise ValueError(
+ "CustomRatioSampler ratio argument is not a float."
+ )
+ self.ratio = ratio
+ super().__init__(ratio)
+
+ def get_description(self) -> str:
+ return "CustomSampler"
+
+ def should_sample(
+ self,
+ parent_context: Optional["Context"],
+ trace_id: int,
+ name: str,
+ kind: SpanKind = None,
+ attributes: Attributes = None,
+ links: Sequence[Link] = None,
+ trace_state: TraceState = None,
+ ) -> "SamplingResult":
+ return SamplingResult(
+ Decision.RECORD_AND_SAMPLE,
+ None,
+ None,
+ )
+
+
+class CustomSamplerFactory:
+ @staticmethod
+ def get_custom_sampler(unused_sampler_arg):
+ return CustomSampler()
+
+ @staticmethod
+ def get_custom_ratio_sampler(sampler_arg):
+ return CustomRatioSampler(float(sampler_arg))
+
+ @staticmethod
+ def empty_get_custom_sampler(sampler_arg):
+ return
+
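+# Sampler entry points are treated as factories: they receive the raw
+# OTEL_TRACES_SAMPLER_ARG string and are expected to return a Sampler.
+# empty_get_custom_sampler models a misbehaving factory that returns None.
+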
+
+class CustomIdGenerator(IdGenerator):
+ def generate_span_id(self):
+ pass
+
+ def generate_trace_id(self):
+ pass
+
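+# Minimal stand-in for importlib.metadata.EntryPoint: the configuration
+# code under test only needs .name and .load(), so tests can inject
+# arbitrary classes through the patched entry_points() calls below.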
+
+class IterEntryPoint:
+ def __init__(self, name, class_type):
+ self.name = name
+ self.class_type = class_type
+
+ def load(self):
+ return self.class_type
+
+
+class TestTraceInit(TestCase):
+ def setUp(self):
+        super().setUp()
+ self.get_provider_patcher = patch(
+ "opentelemetry.sdk._configuration.TracerProvider", Provider
+ )
+ self.get_processor_patcher = patch(
+ "opentelemetry.sdk._configuration.BatchSpanProcessor", Processor
+ )
+ self.set_provider_patcher = patch(
+ "opentelemetry.sdk._configuration.set_tracer_provider"
+ )
+
+ self.get_provider_mock = self.get_provider_patcher.start()
+ self.get_processor_mock = self.get_processor_patcher.start()
+ self.set_provider_mock = self.set_provider_patcher.start()
+
+ def tearDown(self):
+        super().tearDown()
+ self.get_provider_patcher.stop()
+ self.get_processor_patcher.stop()
+ self.set_provider_patcher.stop()
+
+ # pylint: disable=protected-access
+ @patch.dict(
+ environ, {"OTEL_RESOURCE_ATTRIBUTES": "service.name=my-test-service"}
+ )
+ def test_trace_init_default(self):
+ auto_resource = Resource.create(
+ {
+ "telemetry.auto.version": "test-version",
+ }
+ )
+ _init_tracing(
+ {"zipkin": Exporter},
+ id_generator=RandomIdGenerator(),
+ resource=auto_resource,
+ )
+
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, Provider)
+ self.assertIsInstance(provider.id_generator, RandomIdGenerator)
+ self.assertIsInstance(provider.processor, Processor)
+ self.assertIsInstance(provider.processor.exporter, Exporter)
+ self.assertEqual(
+ provider.processor.exporter.service_name, "my-test-service"
+ )
+ self.assertEqual(
+ provider.resource.attributes.get("telemetry.auto.version"),
+ "test-version",
+ )
+
+ @patch.dict(
+ environ,
+ {"OTEL_RESOURCE_ATTRIBUTES": "service.name=my-otlp-test-service"},
+ )
+ def test_trace_init_otlp(self):
+ _init_tracing(
+ {"otlp": OTLPSpanExporter}, id_generator=RandomIdGenerator()
+ )
+
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, Provider)
+ self.assertIsInstance(provider.id_generator, RandomIdGenerator)
+ self.assertIsInstance(provider.processor, Processor)
+ self.assertIsInstance(provider.processor.exporter, OTLPSpanExporter)
+ self.assertIsInstance(provider.resource, Resource)
+ self.assertEqual(
+ provider.resource.attributes.get("service.name"),
+ "my-otlp-test-service",
+ )
+
+ @patch.dict(environ, {OTEL_PYTHON_ID_GENERATOR: "custom_id_generator"})
+ @patch("opentelemetry.sdk._configuration.IdGenerator", new=IdGenerator)
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ def test_trace_init_custom_id_generator(self, mock_entry_points):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint("custom_id_generator", CustomIdGenerator)
+ ]
+ )
+
+ id_generator_name = _get_id_generator()
+ id_generator = _import_id_generator(id_generator_name)
+ _init_tracing({}, id_generator=id_generator)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider.id_generator, CustomIdGenerator)
+
+ @patch.dict(
+ "os.environ", {OTEL_TRACES_SAMPLER: "non_existent_entry_point"}
+ )
+ def test_trace_init_custom_sampler_with_env_non_existent_entry_point(self):
+ sampler_name = _get_sampler()
+ with self.assertLogs(level=WARNING):
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsNone(provider.sampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict("os.environ", {OTEL_TRACES_SAMPLER: "custom_sampler_factory"})
+ def test_trace_init_custom_sampler_with_env(self, mock_entry_points):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_sampler_factory",
+ CustomSamplerFactory.get_custom_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider.sampler, CustomSampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict("os.environ", {OTEL_TRACES_SAMPLER: "custom_sampler_factory"})
+ def test_trace_init_custom_sampler_with_env_bad_factory(
+ self, mock_entry_points
+ ):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_sampler_factory",
+ CustomSamplerFactory.empty_get_custom_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ with self.assertLogs(level=WARNING):
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsNone(provider.sampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "custom_sampler_factory",
+ OTEL_TRACES_SAMPLER_ARG: "0.5",
+ },
+ )
+ def test_trace_init_custom_sampler_with_env_unused_arg(
+ self, mock_entry_points
+ ):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_sampler_factory",
+ CustomSamplerFactory.get_custom_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider.sampler, CustomSampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "custom_ratio_sampler_factory",
+ OTEL_TRACES_SAMPLER_ARG: "0.5",
+ },
+ )
+ def test_trace_init_custom_ratio_sampler_with_env(self, mock_entry_points):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_ratio_sampler_factory",
+ CustomSamplerFactory.get_custom_ratio_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider.sampler, CustomRatioSampler)
+ self.assertEqual(provider.sampler.ratio, 0.5)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "custom_ratio_sampler_factory",
+ OTEL_TRACES_SAMPLER_ARG: "foobar",
+ },
+ )
+ def test_trace_init_custom_ratio_sampler_with_env_bad_arg(
+ self, mock_entry_points
+ ):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_ratio_sampler_factory",
+ CustomSamplerFactory.get_custom_ratio_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ with self.assertLogs(level=WARNING):
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsNone(provider.sampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "custom_ratio_sampler_factory",
+ },
+ )
+ def test_trace_init_custom_ratio_sampler_with_env_missing_arg(
+ self, mock_entry_points
+ ):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_ratio_sampler_factory",
+ CustomSamplerFactory.get_custom_ratio_sampler,
+ )
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ with self.assertLogs(level=WARNING):
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsNone(provider.sampler)
+
+ @patch("opentelemetry.sdk._configuration.entry_points")
+ @patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "custom_sampler_factory",
+ OTEL_TRACES_SAMPLER_ARG: "0.5",
+ },
+ )
+ def test_trace_init_custom_ratio_sampler_with_env_multiple_entry_points(
+ self, mock_entry_points
+ ):
+ mock_entry_points.configure_mock(
+ return_value=[
+ IterEntryPoint(
+ "custom_sampler_factory",
+ CustomSamplerFactory.get_custom_sampler,
+ ),
+ ]
+ )
+
+ sampler_name = _get_sampler()
+ sampler = _import_sampler(sampler_name)
+ _init_tracing({}, sampler=sampler)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider.sampler, CustomSampler)
+
+ def verify_default_sampler(self, tracer_provider):
+ self.assertIsInstance(tracer_provider.sampler, ParentBased)
+ # pylint: disable=protected-access
+ self.assertEqual(tracer_provider.sampler._root, ALWAYS_ON)
+
+
+class TestLoggingInit(TestCase):
+ def setUp(self):
+ self.processor_patch = patch(
+ "opentelemetry.sdk._configuration.BatchLogRecordProcessor",
+ DummyLogRecordProcessor,
+ )
+ self.provider_patch = patch(
+ "opentelemetry.sdk._configuration.LoggerProvider",
+ DummyLoggerProvider,
+ )
+ self.set_provider_patch = patch(
+ "opentelemetry.sdk._configuration.set_logger_provider"
+ )
+
+ self.processor_mock = self.processor_patch.start()
+ self.provider_mock = self.provider_patch.start()
+ self.set_provider_mock = self.set_provider_patch.start()
+
+ def tearDown(self):
+ self.processor_patch.stop()
+ self.set_provider_patch.stop()
+ self.provider_patch.stop()
+ root_logger = getLogger("root")
+ root_logger.handlers = [
+ handler
+ for handler in root_logger.handlers
+ if not isinstance(handler, LoggingHandler)
+ ]
+
+ def test_logging_init_empty(self):
+ auto_resource = Resource.create(
+ {
+ "telemetry.auto.version": "auto-version",
+ }
+ )
+ _init_logging({}, resource=auto_resource)
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, DummyLoggerProvider)
+ self.assertIsInstance(provider.resource, Resource)
+ self.assertEqual(
+ provider.resource.attributes.get("telemetry.auto.version"),
+ "auto-version",
+ )
+
+ @patch.dict(
+ environ,
+ {"OTEL_RESOURCE_ATTRIBUTES": "service.name=otlp-service"},
+ )
+ def test_logging_init_exporter(self):
+ resource = Resource.create({})
+ _init_logging({"otlp": DummyOTLPLogExporter}, resource=resource)
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, DummyLoggerProvider)
+ self.assertIsInstance(provider.resource, Resource)
+ self.assertEqual(
+ provider.resource.attributes.get("service.name"),
+ "otlp-service",
+ )
+ self.assertIsInstance(provider.processor, DummyLogRecordProcessor)
+ self.assertIsInstance(
+ provider.processor.exporter, DummyOTLPLogExporter
+ )
+ getLogger(__name__).error("hello")
+ self.assertTrue(provider.processor.exporter.export_called)
+
+ @patch.dict(
+ environ,
+ {"OTEL_RESOURCE_ATTRIBUTES": "service.name=otlp-service"},
+ )
+ @patch("opentelemetry.sdk._configuration._init_tracing")
+ @patch("opentelemetry.sdk._configuration._init_logging")
+ def test_logging_init_disable_default(self, logging_mock, tracing_mock):
+ _initialize_components("auto-version")
+ self.assertEqual(logging_mock.call_count, 0)
+ self.assertEqual(tracing_mock.call_count, 1)
+
+ @patch.dict(
+ environ,
+ {
+ "OTEL_RESOURCE_ATTRIBUTES": "service.name=otlp-service",
+ "OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED": "True",
+ },
+ )
+ @patch("opentelemetry.sdk._configuration._init_tracing")
+ @patch("opentelemetry.sdk._configuration._init_logging")
+ def test_logging_init_enable_env(self, logging_mock, tracing_mock):
+ with self.assertLogs(level=WARNING):
+ _initialize_components("auto-version")
+ self.assertEqual(logging_mock.call_count, 1)
+ self.assertEqual(tracing_mock.call_count, 1)
+
+ @patch.dict(
+ environ,
+ {
+ "OTEL_RESOURCE_ATTRIBUTES": "service.name=otlp-service",
+ "OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED": "True",
+ },
+ )
+ @patch("opentelemetry.sdk._configuration._init_tracing")
+ @patch("opentelemetry.sdk._configuration._init_logging")
+ @patch("opentelemetry.sdk._configuration._init_metrics")
+ def test_initialize_components_resource(
+ self, metrics_mock, logging_mock, tracing_mock
+ ):
+ _initialize_components("auto-version")
+ self.assertEqual(logging_mock.call_count, 1)
+ self.assertEqual(tracing_mock.call_count, 1)
+ self.assertEqual(metrics_mock.call_count, 1)
+
+ _, args, _ = logging_mock.mock_calls[0]
+ logging_resource = args[1]
+ _, _, kwargs = tracing_mock.mock_calls[0]
+ tracing_resource = kwargs["resource"]
+ _, args, _ = metrics_mock.mock_calls[0]
+ metrics_resource = args[1]
+ self.assertEqual(logging_resource, tracing_resource)
+ self.assertEqual(logging_resource, metrics_resource)
+ self.assertEqual(tracing_resource, metrics_resource)
+
+
+class TestMetricsInit(TestCase):
+ def setUp(self):
+ self.metric_reader_patch = patch(
+ "opentelemetry.sdk._configuration.PeriodicExportingMetricReader",
+ DummyMetricReader,
+ )
+ self.provider_patch = patch(
+ "opentelemetry.sdk._configuration.MeterProvider",
+ DummyMeterProvider,
+ )
+ self.set_provider_patch = patch(
+ "opentelemetry.sdk._configuration.set_meter_provider"
+ )
+
+ self.metric_reader_mock = self.metric_reader_patch.start()
+ self.provider_mock = self.provider_patch.start()
+ self.set_provider_mock = self.set_provider_patch.start()
+
+ def tearDown(self):
+ self.metric_reader_patch.stop()
+ self.set_provider_patch.stop()
+ self.provider_patch.stop()
+
+ def test_metrics_init_empty(self):
+ auto_resource = Resource.create(
+ {
+ "telemetry.auto.version": "auto-version",
+ }
+ )
+ _init_metrics({}, resource=auto_resource)
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, DummyMeterProvider)
+ self.assertIsInstance(provider._sdk_config.resource, Resource)
+ self.assertEqual(
+ provider._sdk_config.resource.attributes.get(
+ "telemetry.auto.version"
+ ),
+ "auto-version",
+ )
+
+ @patch.dict(
+ environ,
+ {"OTEL_RESOURCE_ATTRIBUTES": "service.name=otlp-service"},
+ )
+ def test_metrics_init_exporter(self):
+ resource = Resource.create({})
+ _init_metrics({"otlp": DummyOTLPMetricExporter}, resource=resource)
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, DummyMeterProvider)
+ self.assertIsInstance(provider._sdk_config.resource, Resource)
+ self.assertEqual(
+ provider._sdk_config.resource.attributes.get("service.name"),
+ "otlp-service",
+ )
+ reader = provider._sdk_config.metric_readers[0]
+ self.assertIsInstance(reader, DummyMetricReader)
+ self.assertIsInstance(reader.exporter, DummyOTLPMetricExporter)
+
+ def test_metrics_init_pull_exporter(self):
+ resource = Resource.create({})
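+ # a pull-based exporter is expected to be registered as a metric reader
+ # directly, rather than being wrapped in a PeriodicExportingMetricReader;
+ # the assertions below check exactly that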
+ _init_metrics(
+ {"dummy_metric_reader": DummyMetricReaderPullExporter},
+ resource=resource,
+ )
+ self.assertEqual(self.set_provider_mock.call_count, 1)
+ provider = self.set_provider_mock.call_args[0][0]
+ self.assertIsInstance(provider, DummyMeterProvider)
+ reader = provider._sdk_config.metric_readers[0]
+ self.assertIsInstance(reader, DummyMetricReaderPullExporter)
+
+
+class TestExporterNames(TestCase):
+ @patch.dict(
+ environ,
+ {
+ "OTEL_TRACES_EXPORTER": _EXPORTER_OTLP,
+ "OTEL_METRICS_EXPORTER": _EXPORTER_OTLP_PROTO_GRPC,
+ "OTEL_LOGS_EXPORTER": _EXPORTER_OTLP_PROTO_HTTP,
+ },
+ )
+ def test_otlp_exporter(self):
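+ # the bare "otlp" name (traces) should resolve to a protocol-specific
+ # exporter, presumably defaulting to gRPC when no OTLP protocol is
+ # configured; metrics and logs were pinned to explicit variants above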
+ self.assertEqual(
+ _get_exporter_names("traces"), [_EXPORTER_OTLP_PROTO_GRPC]
+ )
+ self.assertEqual(
+ _get_exporter_names("metrics"), [_EXPORTER_OTLP_PROTO_GRPC]
+ )
+ self.assertEqual(
+ _get_exporter_names("logs"), [_EXPORTER_OTLP_PROTO_HTTP]
+ )
+
+ @patch.dict(
+ environ,
+ {
+ "OTEL_TRACES_EXPORTER": _EXPORTER_OTLP,
+ "OTEL_METRICS_EXPORTER": _EXPORTER_OTLP,
+ "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
+ "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL": "grpc",
+ },
+ )
+ def test_otlp_custom_exporter(self):
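+ # the signal-specific OTEL_EXPORTER_OTLP_METRICS_PROTOCOL should take
+ # precedence for metrics, while traces fall back to the generic
+ # OTEL_EXPORTER_OTLP_PROTOCOL value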
+ self.assertEqual(
+ _get_exporter_names("traces"), [_EXPORTER_OTLP_PROTO_HTTP]
+ )
+ self.assertEqual(
+ _get_exporter_names("metrics"), [_EXPORTER_OTLP_PROTO_GRPC]
+ )
+
+ @patch.dict(
+ environ,
+ {
+ "OTEL_TRACES_EXPORTER": _EXPORTER_OTLP_PROTO_HTTP,
+ "OTEL_METRICS_EXPORTER": _EXPORTER_OTLP_PROTO_GRPC,
+ "OTEL_EXPORTER_OTLP_PROTOCOL": "grpc",
+ "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL": "http/protobuf",
+ },
+ )
+ def test_otlp_exporter_conflict(self):
+ # Verify that OTEL_*_EXPORTER is used, and a warning is logged
+ with self.assertLogs(level="WARNING") as logs_context:
+ self.assertEqual(
+ _get_exporter_names("traces"), [_EXPORTER_OTLP_PROTO_HTTP]
+ )
+ assert len(logs_context.output) == 1
+
+ with self.assertLogs(level="WARNING") as logs_context:
+ self.assertEqual(
+ _get_exporter_names("metrics"), [_EXPORTER_OTLP_PROTO_GRPC]
+ )
+ assert len(logs_context.output) == 1
+
+ @patch.dict(environ, {"OTEL_TRACES_EXPORTER": "zipkin"})
+ def test_multiple_exporters(self):
+ self.assertEqual(sorted(_get_exporter_names("traces")), ["zipkin"])
+
+ @patch.dict(environ, {"OTEL_TRACES_EXPORTER": "none"})
+ def test_none_exporters(self):
+ self.assertEqual(sorted(_get_exporter_names("traces")), [])
+
+ def test_no_exporters(self):
+ self.assertEqual(sorted(_get_exporter_names("traces")), [])
+
+ @patch.dict(environ, {"OTEL_TRACES_EXPORTER": ""})
+ def test_empty_exporters(self):
+ self.assertEqual(sorted(_get_exporter_names("traces")), [])
+
+
+class TestImportExporters(TestCase):
+ def test_console_exporters(self):
+ trace_exporters, metric_exporters, logs_exporters = _import_exporters(
+ ["console"], ["console"], ["console"]
+ )
+ self.assertEqual(
+ trace_exporters["console"].__class__, ConsoleSpanExporter.__class__
+ )
+ self.assertEqual(
+ logs_exporters["console"].__class__, ConsoleLogExporter.__class__
+ )
+ self.assertEqual(
+ metric_exporterts["console"].__class__,
+ ConsoleMetricExporter.__class__,
+ )
+
+ @patch(
+ "opentelemetry.sdk._configuration.entry_points",
+ )
+ def test_metric_pull_exporter(self, mock_entry_points: Mock):
+ def mock_entry_points_impl(group, name):
+ if name == "dummy_pull_exporter":
+ return [
+ IterEntryPoint(
+ name=name, class_type=DummyMetricReaderPullExporter
+ )
+ ]
+ return []
+
+ mock_entry_points.side_effect = mock_entry_points_impl
+ _, metric_exporters, _ = _import_exporters(
+ [], ["dummy_pull_exporter"], []
+ )
+ self.assertIs(
+ metric_exporters["dummy_pull_exporter"],
+ DummyMetricReaderPullExporter,
+ )
+
+
+class TestImportConfigComponents(TestCase):
+ @patch(
+ "opentelemetry.sdk._configuration.entry_points",
+ **{"side_effect": KeyError},
+ )
+ def test__import_config_components_missing_entry_point(
+ self, mock_entry_points
+ ):
+ with raises(RuntimeError) as error:
+ _import_config_components(["a", "b", "c"], "name")
+ self.assertEqual(
+ str(error.value), "Requested entry point 'name' not found"
+ )
+
+ @patch(
+ "opentelemetry.sdk._configuration.entry_points",
+ **{"side_effect": StopIteration},
+ )
+ def test__import_config_components_missing_component(
+ self, mock_entry_points
+ ):
+ with raises(RuntimeError) as error:
+ _import_config_components(["a", "b", "c"], "name")
+ self.assertEqual(
+ str(error.value),
+ "Requested component 'a' not found in entry point 'name'",
+ )
diff --git a/opentelemetry-sdk/tests/test_util.py b/opentelemetry-sdk/tests/test_util.py
new file mode 100644
index 0000000000..00099090cd
--- /dev/null
+++ b/opentelemetry-sdk/tests/test_util.py
@@ -0,0 +1,143 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry.sdk.util import BoundedList
+
+
+class TestBoundedList(unittest.TestCase):
+ base = [52, 36, 53, 29, 54, 99, 56, 48, 22, 35, 21, 65, 10, 95, 42, 60]
+
+ def test_raises(self):
+ """Test corner cases
+
+ - negative list size
+ - access out of range indexes
+ """
+ with self.assertRaises(ValueError):
+ BoundedList(-1)
+
+ blist = BoundedList(4)
+ blist.append(37)
+ blist.append(13)
+
+ with self.assertRaises(IndexError):
+ _ = blist[2]
+
+ with self.assertRaises(IndexError):
+ _ = blist[4]
+
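+ # with only two elements stored, -1 and -2 are the only valid negative
+ # indexes, so -3 must raise as well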
+ with self.assertRaises(IndexError):
+ _ = blist[-3]
+
+ def test_from_seq(self):
+ list_len = len(self.base)
+ base_copy = list(self.base)
+ blist = BoundedList.from_seq(list_len, base_copy)
+
+ self.assertEqual(len(blist), list_len)
+
+ # modify base_copy and test that blist is not changed
+ for idx in range(list_len):
+ base_copy[idx] = idx * base_copy[idx]
+
+ for idx in range(list_len):
+ self.assertEqual(blist[idx], self.base[idx])
+
+ # test that iter yields the correct number of elements
+ self.assertEqual(len(tuple(blist)), list_len)
+
+ # sequence too big
+ blist = BoundedList.from_seq(list_len // 2, base_copy)
+ self.assertEqual(len(blist), list_len // 2)
+ self.assertEqual(blist.dropped, list_len - (list_len // 2))
+
+ def test_append_no_drop(self):
+ """Append max capacity elements to the list without dropping elements."""
+ # create empty list
+ list_len = len(self.base)
+ blist = BoundedList(list_len)
+ self.assertEqual(len(blist), 0)
+
+ # fill list
+ for item in self.base:
+ blist.append(item)
+
+ self.assertEqual(len(blist), list_len)
+ self.assertEqual(blist.dropped, 0)
+
+ for idx in range(list_len):
+ self.assertEqual(blist[idx], self.base[idx])
+
+ # test __iter__ in BoundedList
+ for idx, val in enumerate(blist):
+ self.assertEqual(val, self.base[idx])
+
+ def test_append_drop(self):
+ """Append more than max capacity elements and test that oldest ones are dropped."""
+ list_len = len(self.base)
+ # create full BoundedList
+ blist = BoundedList.from_seq(list_len, self.base)
+
+ # try to append more items
+ for val in self.base:
+ # the oldest element should be dropped without raising an exception
+ blist.append(2 * val)
+
+ self.assertEqual(len(blist), list_len)
+ self.assertEqual(blist.dropped, list_len)
+
+ # test that new elements are in the list
+ for idx in range(list_len):
+ self.assertEqual(blist[idx], 2 * self.base[idx])
+
+ def test_extend_no_drop(self):
+ # create empty list
+ list_len = len(self.base)
+ blist = BoundedList(list_len)
+ self.assertEqual(len(blist), 0)
+
+ # fill list
+ blist.extend(self.base)
+
+ self.assertEqual(len(blist), list_len)
+ self.assertEqual(blist.dropped, 0)
+
+ for idx in range(list_len):
+ self.assertEqual(blist[idx], self.base[idx])
+
+ # test __iter__ in BoundedList
+ for idx, val in enumerate(blist):
+ self.assertEqual(val, self.base[idx])
+
+ def test_extend_drop(self):
+ list_len = len(self.base)
+ # create full BoundedList
+ blist = BoundedList.from_seq(list_len, self.base)
+ other_list = [13, 37, 51, 91]
+
+ # try to extend with more elements
+ blist.extend(other_list)
+
+ self.assertEqual(len(blist), list_len)
+ self.assertEqual(blist.dropped, len(other_list))
+
+ def test_no_limit(self):
+ blist = BoundedList(maxlen=None)
+ for num in range(100):
+ blist.append(num)
+
+ for num in range(100):
+ self.assertEqual(blist[num], num)
diff --git a/opentelemetry-sdk/tests/trace/__init__.py b/opentelemetry-sdk/tests/trace/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-sdk/tests/trace/export/__init__.py b/opentelemetry-sdk/tests/trace/export/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/export/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/opentelemetry-sdk/tests/trace/export/test_export.py b/opentelemetry-sdk/tests/trace/export/test_export.py
new file mode 100644
index 0000000000..8175c09f59
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/export/test_export.py
@@ -0,0 +1,615 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import multiprocessing
+import os
+import threading
+import time
+import typing
+import unittest
+from concurrent.futures import ThreadPoolExecutor
+from logging import WARNING
+from platform import python_implementation, system
+from unittest import mock
+
+from pytest import mark
+
+from opentelemetry import trace as trace_api
+from opentelemetry.context import Context
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.environment_variables import (
+ OTEL_BSP_EXPORT_TIMEOUT,
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE,
+ OTEL_BSP_MAX_QUEUE_SIZE,
+ OTEL_BSP_SCHEDULE_DELAY,
+)
+from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.trace import export
+from opentelemetry.sdk.trace.export import logger
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+from opentelemetry.test.concurrency_test import ConcurrencyTestBase
+
+
+class MySpanExporter(export.SpanExporter):
+ """Very simple span exporter used for testing."""
+
+ def __init__(
+ self,
+ destination,
+ max_export_batch_size=None,
+ export_timeout_millis=0.0,
+ export_event: typing.Optional[threading.Event] = None,
+ ):
+ self.destination = destination
+ self.max_export_batch_size = max_export_batch_size
+ self.is_shutdown = False
+ self.export_timeout = export_timeout_millis / 1e3
+ self.export_event = export_event
+
+ def export(self, spans: typing.Sequence[trace.Span]) -> export.SpanExportResult:
+ if (
+ self.max_export_batch_size is not None
+ and len(spans) > self.max_export_batch_size
+ ):
+ raise ValueError("Batch is too big")
+ time.sleep(self.export_timeout)
+ self.destination.extend(span.name for span in spans)
+ if self.export_event:
+ self.export_event.set()
+ return export.SpanExportResult.SUCCESS
+
+ def shutdown(self):
+ self.is_shutdown = True
+
+
+class TestSimpleSpanProcessor(unittest.TestCase):
+ def test_simple_span_processor(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.SimpleSpanProcessor(my_exporter)
+ tracer_provider.add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("xxx"):
+ pass
+
+ self.assertListEqual(["xxx", "bar", "foo"], spans_names_list)
+
+ span_processor.shutdown()
+ self.assertTrue(my_exporter.is_shutdown)
+
+ def test_simple_span_processor_no_context(self):
+ """Check that we process spans that are never made active.
+
+ SpanProcessors should act on a span's start and end events whether or
+ not it is ever the active span.
+ """
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.SimpleSpanProcessor(my_exporter)
+ tracer_provider.add_span_processor(span_processor)
+
+ with tracer.start_span("foo"):
+ with tracer.start_span("bar"):
+ with tracer.start_span("xxx"):
+ pass
+
+ self.assertListEqual(["xxx", "bar", "foo"], spans_names_list)
+
+ def test_on_start_accepts_context(self):
+ # pylint: disable=no-self-use
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ exporter = MySpanExporter([])
+ span_processor = mock.Mock(wraps=export.SimpleSpanProcessor(exporter))
+ tracer_provider.add_span_processor(span_processor)
+
+ context = Context()
+ span = tracer.start_span("foo", context=context)
+ span_processor.on_start.assert_called_once_with(
+ span, parent_context=context
+ )
+
+ def test_simple_span_processor_not_sampled(self):
+ tracer_provider = trace.TracerProvider(
+ sampler=trace.sampling.ALWAYS_OFF
+ )
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.SimpleSpanProcessor(my_exporter)
+ tracer_provider.add_span_processor(span_processor)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("xxx"):
+ pass
+
+ self.assertListEqual([], spans_names_list)
+
+
+def _create_start_and_end_span(name, span_processor, resource):
+ span = trace._Span(
+ name,
+ trace_api.SpanContext(
+ 0xDEADBEEF,
+ 0xDEADBEEF,
+ is_remote=False,
+ trace_flags=trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED),
+ ),
+ span_processor=span_processor,
+ resource=resource,
+ )
+ span.start()
+ span.end()
+
+
+class TestBatchSpanProcessor(ConcurrencyTestBase):
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_BSP_MAX_QUEUE_SIZE: "10",
+ OTEL_BSP_SCHEDULE_DELAY: "2",
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE: "3",
+ OTEL_BSP_EXPORT_TIMEOUT: "4",
+ },
+ )
+ def test_args_env_var(self):
+ batch_span_processor = export.BatchSpanProcessor(
+ MySpanExporter(destination=[])
+ )
+
+ self.assertEqual(batch_span_processor.max_queue_size, 10)
+ self.assertEqual(batch_span_processor.schedule_delay_millis, 2)
+ self.assertEqual(batch_span_processor.max_export_batch_size, 3)
+ self.assertEqual(batch_span_processor.export_timeout_millis, 4)
+
+ def test_args_env_var_defaults(self):
+ batch_span_processor = export.BatchSpanProcessor(
+ MySpanExporter(destination=[])
+ )
+
+ self.assertEqual(batch_span_processor.max_queue_size, 2048)
+ self.assertEqual(batch_span_processor.schedule_delay_millis, 5000)
+ self.assertEqual(batch_span_processor.max_export_batch_size, 512)
+ self.assertEqual(batch_span_processor.export_timeout_millis, 30000)
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_BSP_MAX_QUEUE_SIZE: "a",
+ OTEL_BSP_SCHEDULE_DELAY: " ",
+ OTEL_BSP_MAX_EXPORT_BATCH_SIZE: "One",
+ OTEL_BSP_EXPORT_TIMEOUT: "@",
+ },
+ )
+ def test_args_env_var_value_error(self):
+ logger.disabled = True
+ batch_span_processor = export.BatchSpanProcessor(
+ MySpanExporter(destination=[])
+ )
+ logger.disabled = False
+
+ self.assertEqual(batch_span_processor.max_queue_size, 2048)
+ self.assertEqual(batch_span_processor.schedule_delay_millis, 5000)
+ self.assertEqual(batch_span_processor.max_export_batch_size, 512)
+ self.assertEqual(batch_span_processor.export_timeout_millis, 30000)
+
+ def test_on_start_accepts_parent_context(self):
+ # pylint: disable=no-self-use
+ my_exporter = MySpanExporter(destination=[])
+ span_processor = mock.Mock(
+ wraps=export.BatchSpanProcessor(my_exporter)
+ )
+ tracer_provider = trace.TracerProvider()
+ tracer_provider.add_span_processor(span_processor)
+ tracer = tracer_provider.get_tracer(__name__)
+
+ context = Context()
+ span = tracer.start_span("foo", context=context)
+
+ span_processor.on_start.assert_called_once_with(
+ span, parent_context=context
+ )
+
+ def test_shutdown(self):
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.BatchSpanProcessor(my_exporter)
+
+ span_names = ["xxx", "bar", "foo"]
+
+ resource = Resource.create({})
+ for name in span_names:
+ _create_start_and_end_span(name, span_processor, resource)
+
+ span_processor.shutdown()
+ self.assertTrue(my_exporter.is_shutdown)
+
+ # check that spans are exported without an explicit call to
+ # force_flush()
+ self.assertListEqual(span_names, spans_names_list)
+
+ def test_flush(self):
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.BatchSpanProcessor(my_exporter)
+
+ span_names0 = ["xxx", "bar", "foo"]
+ span_names1 = ["yyy", "baz", "fox"]
+
+ resource = Resource.create({})
+ for name in span_names0:
+ _create_start_and_end_span(name, span_processor, resource)
+
+ self.assertTrue(span_processor.force_flush())
+ self.assertListEqual(span_names0, spans_names_list)
+
+ # create some more spans to check that span processor still works
+ for name in span_names1:
+ _create_start_and_end_span(name, span_processor, resource)
+
+ self.assertTrue(span_processor.force_flush())
+ self.assertListEqual(span_names0 + span_names1, spans_names_list)
+
+ span_processor.shutdown()
+
+ def test_flush_empty(self):
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(destination=spans_names_list)
+ span_processor = export.BatchSpanProcessor(my_exporter)
+
+ self.assertTrue(span_processor.force_flush())
+
+ def test_flush_from_multiple_threads(self):
+ num_threads = 50
+ num_spans = 10
+
+ span_list = []
+
+ my_exporter = MySpanExporter(destination=span_list)
+ span_processor = export.BatchSpanProcessor(
+ my_exporter, max_queue_size=512, max_export_batch_size=128
+ )
+
+ resource = Resource.create({})
+
+ def create_spans_and_flush(tno: int):
+ for span_idx in range(num_spans):
+ _create_start_and_end_span(
+ f"Span {tno}-{span_idx}", span_processor, resource
+ )
+ self.assertTrue(span_processor.force_flush())
+
+ with ThreadPoolExecutor(max_workers=num_threads) as executor:
+ future_list = []
+ for thread_no in range(num_threads):
+ future = executor.submit(create_spans_and_flush, thread_no)
+ future_list.append(future)
+
+ executor.shutdown()
+
+ self.assertEqual(num_threads * num_spans, len(span_list))
+
+ def test_flush_timeout(self):
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(
+ destination=spans_names_list, export_timeout_millis=500
+ )
+ span_processor = export.BatchSpanProcessor(my_exporter)
+
+ resource = Resource.create({})
+ _create_start_and_end_span("foo", span_processor, resource)
+
+ # the export takes ~500 ms, so a 100 ms force_flush must time out and fail
+ with self.assertLogs(level=WARNING):
+ self.assertFalse(span_processor.force_flush(100))
+ span_processor.shutdown()
+
+ def test_batch_span_processor_lossless(self):
+ """Test that no spans are lost when sending max_queue_size spans"""
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(
+ destination=spans_names_list, max_export_batch_size=128
+ )
+ span_processor = export.BatchSpanProcessor(
+ my_exporter, max_queue_size=512, max_export_batch_size=128
+ )
+
+ resource = Resource.create({})
+ for _ in range(512):
+ _create_start_and_end_span("foo", span_processor, resource)
+
+ time.sleep(1)
+ self.assertTrue(span_processor.force_flush())
+ self.assertEqual(len(spans_names_list), 512)
+ span_processor.shutdown()
+
+ def test_batch_span_processor_many_spans(self):
+ """Test that no spans are lost when sending many spans"""
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(
+ destination=spans_names_list, max_export_batch_size=128
+ )
+ span_processor = export.BatchSpanProcessor(
+ my_exporter,
+ max_queue_size=256,
+ max_export_batch_size=64,
+ schedule_delay_millis=100,
+ )
+
+ resource = Resource.create({})
+ for _ in range(4):
+ for _ in range(256):
+ _create_start_and_end_span("foo", span_processor, resource)
+
+ time.sleep(0.1) # give some time for the exporter to upload spans
+
+ self.assertTrue(span_processor.force_flush())
+ self.assertEqual(len(spans_names_list), 1024)
+ span_processor.shutdown()
+
+ def test_batch_span_processor_not_sampled(self):
+ tracer_provider = trace.TracerProvider(
+ sampler=trace.sampling.ALWAYS_OFF
+ )
+ tracer = tracer_provider.get_tracer(__name__)
+ spans_names_list = []
+
+ my_exporter = MySpanExporter(
+ destination=spans_names_list, max_export_batch_size=128
+ )
+ span_processor = export.BatchSpanProcessor(
+ my_exporter,
+ max_queue_size=256,
+ max_export_batch_size=64,
+ schedule_delay_millis=100,
+ )
+ tracer_provider.add_span_processor(span_processor)
+ with tracer.start_as_current_span("foo"):
+ pass
+ time.sleep(0.05) # give the worker a chance to run; nothing should be exported
+
+ self.assertTrue(span_processor.force_flush())
+ self.assertEqual(len(spans_names_list), 0)
+ span_processor.shutdown()
+
+ def _check_fork_trace(self, exporter, expected):
+ time.sleep(0.5) # give some time for the exporter to upload spans
+ spans = exporter.get_finished_spans()
+ for span in spans:
+ self.assertIn(span.name, expected)
+
+ @unittest.skipUnless(
+ hasattr(os, "fork"),
+ "needs *nix",
+ )
+ def test_batch_span_processor_fork(self):
+ # pylint: disable=invalid-name
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ exporter = InMemorySpanExporter()
+ span_processor = export.BatchSpanProcessor(
+ exporter,
+ max_queue_size=256,
+ max_export_batch_size=64,
+ schedule_delay_millis=10,
+ )
+ tracer_provider.add_span_processor(span_processor)
+ with tracer.start_as_current_span("foo"):
+ pass
+ time.sleep(0.5) # give some time for the exporter to upload spans
+
+ self.assertTrue(span_processor.force_flush())
+ self.assertEqual(len(exporter.get_finished_spans()), 1)
+ exporter.clear()
+
+ def child(conn):
+ def _target():
+ with tracer.start_as_current_span("span") as s:
+ s.set_attribute("i", "1")
+ with tracer.start_as_current_span("temp"):
+ pass
+
+ self.run_with_many_threads(_target, 100)
+
+ time.sleep(0.5)
+
+ spans = exporter.get_finished_spans()
+ conn.send(len(spans) == 200)
+ conn.close()
+
+ parent_conn, child_conn = multiprocessing.Pipe()
+ p = multiprocessing.Process(target=child, args=(child_conn,))
+ p.start()
+ self.assertTrue(parent_conn.recv())
+ p.join()
+
+ span_processor.shutdown()
+
+ def test_batch_span_processor_scheduled_delay(self):
+ """Test that spans are exported each schedule_delay_millis"""
+ spans_names_list = []
+
+ export_event = threading.Event()
+ my_exporter = MySpanExporter(
+ destination=spans_names_list, export_event=export_event
+ )
+ start_time = time.time()
+ span_processor = export.BatchSpanProcessor(
+ my_exporter,
+ schedule_delay_millis=500,
+ )
+
+ # create single span
+ resource = Resource.create({})
+ _create_start_and_end_span("foo", span_processor, resource)
+
+ self.assertTrue(export_event.wait(2))
+ export_time = time.time()
+ self.assertEqual(len(spans_names_list), 1)
+ self.assertGreaterEqual((export_time - start_time) * 1e3, 500)
+
+ span_processor.shutdown()
+
+ @mark.skipif(
+ python_implementation() == "PyPy" and system() == "Windows",
+ reason="This test randomly fails in Windows with PyPy",
+ )
+ def test_batch_span_processor_reset_timeout(self):
+ """Test that the scheduled timeout is reset on cycles without spans"""
+ spans_names_list = []
+
+ export_event = threading.Event()
+ my_exporter = MySpanExporter(
+ destination=spans_names_list,
+ export_event=export_event,
+ export_timeout_millis=50,
+ )
+
+ span_processor = export.BatchSpanProcessor(
+ my_exporter,
+ schedule_delay_millis=50,
+ )
+
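+ # patching condition.wait exposes the timeouts computed by the worker
+ # thread: the cycle that flushes our span waits with a non-positive
+ # timeout, and every later idle cycle should be reset to the full
+ # 50 ms schedule delay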
+ with mock.patch.object(span_processor.condition, "wait") as mock_wait:
+ resource = Resource.create({})
+ _create_start_and_end_span("foo", span_processor, resource)
+ self.assertTrue(export_event.wait(2))
+
+ # give some time for exporter to loop
+ # since wait is mocked it should return immediately
+ time.sleep(0.05)
+ mock_wait_calls = list(mock_wait.mock_calls)
+
+ # find the index of the call that processed the singular span
+ after_calls = None
+ for idx, wait_call in enumerate(mock_wait_calls):
+ _, args, __ = wait_call
+ if args[0] <= 0:
+ after_calls = mock_wait_calls[idx + 1 :]
+ break
+ self.assertIsNotNone(after_calls)
+
+ self.assertTrue(
+ all(args[0] >= 0.05 for _, args, __ in after_calls)
+ )
+
+ span_processor.shutdown()
+
+ def test_batch_span_processor_parameters(self):
+ # zero max_queue_size
+ self.assertRaises(
+ ValueError, export.BatchSpanProcessor, None, max_queue_size=0
+ )
+
+ # negative max_queue_size
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ max_queue_size=-500,
+ )
+
+ # zero schedule_delay_millis
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ schedule_delay_millis=0,
+ )
+
+ # negative schedule_delay_millis
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ schedule_delay_millis=-500,
+ )
+
+ # zero max_export_batch_size
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ max_export_batch_size=0,
+ )
+
+ # negative max_export_batch_size
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ max_export_batch_size=-500,
+ )
+
+ # max_export_batch_size > max_queue_size:
+ self.assertRaises(
+ ValueError,
+ export.BatchSpanProcessor,
+ None,
+ max_queue_size=256,
+ max_export_batch_size=512,
+ )
+
+
+class TestConsoleSpanExporter(unittest.TestCase):
+ def test_export(self): # pylint: disable=no-self-use
+ """Check that the console exporter prints spans."""
+
+ exporter = export.ConsoleSpanExporter()
+ # Mocking stdout interferes with debugging and test reporting, mock on
+ # the exporter instance instead.
+ span = trace._Span("span name", trace_api.INVALID_SPAN_CONTEXT)
+ with mock.patch.object(exporter, "out") as mock_stdout:
+ exporter.export([span])
+ mock_stdout.write.assert_called_once_with(span.to_json() + os.linesep)
+
+ self.assertEqual(mock_stdout.write.call_count, 1)
+ self.assertEqual(mock_stdout.flush.call_count, 1)
+
+ def test_export_custom(self): # pylint: disable=no-self-use
+ """Check that console exporter uses custom io, formatter."""
+ mock_span_str = mock.Mock(str)
+
+ def formatter(span): # pylint: disable=unused-argument
+ return mock_span_str
+
+ mock_stdout = mock.Mock()
+ exporter = export.ConsoleSpanExporter(
+ out=mock_stdout, formatter=formatter
+ )
+ exporter.export([trace._Span("span name", mock.Mock())])
+ mock_stdout.write.assert_called_once_with(mock_span_str)
diff --git a/opentelemetry-sdk/tests/trace/export/test_in_memory_span_exporter.py b/opentelemetry-sdk/tests/trace/export/test_in_memory_span_exporter.py
new file mode 100644
index 0000000000..eb366728c0
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/export/test_in_memory_span_exporter.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from unittest import mock
+
+from opentelemetry import trace as trace_api
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.trace import export
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+
+
+class TestInMemorySpanExporter(unittest.TestCase):
+ def setUp(self):
+ self.tracer_provider = trace.TracerProvider()
+ self.tracer = self.tracer_provider.get_tracer(__name__)
+ self.memory_exporter = InMemorySpanExporter()
+ span_processor = export.SimpleSpanProcessor(self.memory_exporter)
+ self.tracer_provider.add_span_processor(span_processor)
+ self.exec_scenario()
+
+ def exec_scenario(self):
+ with self.tracer.start_as_current_span("foo"):
+ with self.tracer.start_as_current_span("bar"):
+ with self.tracer.start_as_current_span("xxx"):
+ pass
+
+ def test_get_finished_spans(self):
+ span_list = self.memory_exporter.get_finished_spans()
+ spans_names_list = [span.name for span in span_list]
+ self.assertListEqual(["xxx", "bar", "foo"], spans_names_list)
+
+ def test_clear(self):
+ self.memory_exporter.clear()
+ span_list = self.memory_exporter.get_finished_spans()
+ self.assertEqual(len(span_list), 0)
+
+ def test_shutdown(self):
+ span_list = self.memory_exporter.get_finished_spans()
+ self.assertEqual(len(span_list), 3)
+
+ self.memory_exporter.shutdown()
+
+ # after shutdown no new spans are accepted
+ self.exec_scenario()
+
+ span_list = self.memory_exporter.get_finished_spans()
+ self.assertEqual(len(span_list), 3)
+
+ def test_return_code(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ span_list = (span,)
+ memory_exporter = InMemorySpanExporter()
+
+ ret = memory_exporter.export(span_list)
+ self.assertEqual(ret, export.SpanExportResult.SUCCESS)
+
+ memory_exporter.shutdown()
+
+ # after shutdown export should fail
+ ret = memory_exporter.export(span_list)
+ self.assertEqual(ret, export.SpanExportResult.FAILURE)
diff --git a/opentelemetry-sdk/tests/trace/propagation/__init__.py b/opentelemetry-sdk/tests/trace/propagation/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-sdk/tests/trace/test_globals.py b/opentelemetry-sdk/tests/trace/test_globals.py
new file mode 100644
index 0000000000..ab57ff018a
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/test_globals.py
@@ -0,0 +1,25 @@
+# type:ignore
+import unittest
+from logging import WARNING
+
+from opentelemetry import trace
+from opentelemetry.sdk.trace import TracerProvider # type:ignore
+
+
+class TestGlobals(unittest.TestCase):
+ def test_tracer_provider_override_warning(self):
+ """trace.set_tracer_provider should throw a warning when overridden"""
+ trace.set_tracer_provider(TracerProvider())
+ tracer_provider = trace.get_tracer_provider()
+ with self.assertLogs(level=WARNING) as test:
+ trace.set_tracer_provider(TracerProvider())
+ self.assertEqual(
+ test.output,
+ [
+ (
+ "WARNING:opentelemetry.trace:Overriding of current "
+ "TracerProvider is not allowed"
+ )
+ ],
+ )
+ self.assertIs(tracer_provider, trace.get_tracer_provider())
diff --git a/opentelemetry-sdk/tests/trace/test_implementation.py b/opentelemetry-sdk/tests/trace/test_implementation.py
new file mode 100644
index 0000000000..961e68d986
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/test_implementation.py
@@ -0,0 +1,49 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry.sdk import trace
+from opentelemetry.trace import INVALID_SPAN, INVALID_SPAN_CONTEXT
+
+
+class TestTracerImplementation(unittest.TestCase):
+ """
+ This test is in place to ensure the SDK implementation of the API
+ is returning values that are valid. The same tests have been added
+ to the API with different expected results. See issue for more details:
+ https://github.com/open-telemetry/opentelemetry-python/issues/142
+ """
+
+ def test_tracer(self):
+ tracer = trace.TracerProvider().get_tracer(__name__)
+ with tracer.start_span("test") as span:
+ self.assertNotEqual(span.get_span_context(), INVALID_SPAN_CONTEXT)
+ self.assertNotEqual(span, INVALID_SPAN)
+ self.assertIs(span.is_recording(), True)
+ with tracer.start_span("test2") as span2:
+ self.assertNotEqual(
+ span2.get_span_context(), INVALID_SPAN_CONTEXT
+ )
+ self.assertNotEqual(span2, INVALID_SPAN)
+ self.assertIs(span2.is_recording(), True)
+
+ def test_span(self):
+ with self.assertRaises(Exception):
+ # pylint: disable=no-value-for-parameter
+ span = trace._Span()
+
+ span = trace._Span("name", INVALID_SPAN_CONTEXT)
+ self.assertEqual(span.get_span_context(), INVALID_SPAN_CONTEXT)
+ self.assertIs(span.is_recording(), True)
diff --git a/opentelemetry-sdk/tests/trace/test_sampling.py b/opentelemetry-sdk/tests/trace/test_sampling.py
new file mode 100644
index 0000000000..e976b0f551
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/test_sampling.py
@@ -0,0 +1,539 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import contextlib
+import sys
+import typing
+import unittest
+
+from opentelemetry import context as context_api
+from opentelemetry import trace
+from opentelemetry.sdk.trace import sampling
+
+TO_DEFAULT = trace.TraceFlags(trace.TraceFlags.DEFAULT)
+TO_SAMPLED = trace.TraceFlags(trace.TraceFlags.SAMPLED)
+
+
+class TestDecision(unittest.TestCase):
+ def test_is_recording(self):
+ self.assertTrue(
+ sampling.Decision.is_recording(sampling.Decision.RECORD_ONLY)
+ )
+ self.assertTrue(
+ sampling.Decision.is_recording(sampling.Decision.RECORD_AND_SAMPLE)
+ )
+ self.assertFalse(
+ sampling.Decision.is_recording(sampling.Decision.DROP)
+ )
+
+ def test_is_sampled(self):
+ self.assertFalse(
+ sampling.Decision.is_sampled(sampling.Decision.RECORD_ONLY)
+ )
+ self.assertTrue(
+ sampling.Decision.is_sampled(sampling.Decision.RECORD_AND_SAMPLE)
+ )
+ self.assertFalse(sampling.Decision.is_sampled(sampling.Decision.DROP))
+
+
+class TestSamplingResult(unittest.TestCase):
+ def test_ctr(self):
+ attributes = {"asd": "test"}
+ trace_state = {}
+ # pylint: disable=E1137
+ trace_state["test"] = "123"
+ result = sampling.SamplingResult(
+ sampling.Decision.RECORD_ONLY, attributes, trace_state
+ )
+ self.assertIs(result.decision, sampling.Decision.RECORD_ONLY)
+ with self.assertRaises(TypeError):
+ result.attributes["test"] = "mess-this-up"
+ self.assertEqual(len(result.attributes), 1)
+ self.assertEqual(result.attributes["asd"], "test")
+ self.assertEqual(result.trace_state["test"], "123")
+
+
+class TestSampler(unittest.TestCase):
+ def _create_parent(
+ self, trace_flags: trace.TraceFlags, is_remote=False, trace_state=None
+ ) -> typing.Optional[context_api.Context]:
+ if trace_flags is None:
+ return None
+ return trace.set_span_in_context(
+ self._create_parent_span(trace_flags, is_remote, trace_state)
+ )
+
+ @staticmethod
+ def _create_parent_span(
+ trace_flags: trace.TraceFlags, is_remote=False, trace_state=None
+ ) -> trace.NonRecordingSpan:
+ return trace.NonRecordingSpan(
+ trace.SpanContext(
+ 0xDEADBEEF,
+ 0xDEADBEF0,
+ is_remote=is_remote,
+ trace_flags=trace_flags,
+ trace_state=trace_state,
+ )
+ )
+
+ def test_always_on(self):
+ trace_state = trace.TraceState([("key", "value")])
+ test_data = (TO_DEFAULT, TO_SAMPLED, None)
+
+ for trace_flags in test_data:
+ with self.subTest(trace_flags=trace_flags):
+ context = self._create_parent(trace_flags, False, trace_state)
+ sample_result = sampling.ALWAYS_ON.should_sample(
+ context,
+ 0xDEADBEF1,
+ "sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "true"},
+ )
+
+ self.assertTrue(sample_result.decision.is_sampled())
+ self.assertEqual(
+ sample_result.attributes, {"sampled.expect": "true"}
+ )
+ if context is not None:
+ self.assertEqual(sample_result.trace_state, trace_state)
+ else:
+ self.assertIsNone(sample_result.trace_state)
+
+ def test_always_off(self):
+ trace_state = trace.TraceState([("key", "value")])
+ test_data = (TO_DEFAULT, TO_SAMPLED, None)
+ for trace_flags in test_data:
+ with self.subTest(trace_flags=trace_flags):
+ context = self._create_parent(trace_flags, False, trace_state)
+ sample_result = sampling.ALWAYS_OFF.should_sample(
+ context,
+ 0xDEADBEF1,
+ "sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "false"},
+ )
+ self.assertFalse(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {})
+ if context is not None:
+ self.assertEqual(sample_result.trace_state, trace_state)
+ else:
+ self.assertIsNone(sample_result.trace_state)
+
+ def test_default_on(self):
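+ # DEFAULT_ON is presumably ParentBased(root=ALWAYS_ON): it honors the
+ # parent's sampling decision and samples root spans; DEFAULT_OFF below
+ # is the analogous ParentBased(root=ALWAYS_OFF)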
+ trace_state = trace.TraceState([("key", "value")])
+ context = self._create_parent(TO_DEFAULT, False, trace_state)
+ sample_result = sampling.DEFAULT_ON.should_sample(
+ context,
+ 0xDEADBEF1,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "false"},
+ )
+ self.assertFalse(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {})
+ self.assertEqual(sample_result.trace_state, trace_state)
+
+ context = self._create_parent(TO_SAMPLED, False, trace_state)
+ sample_result = sampling.DEFAULT_ON.should_sample(
+ context,
+ 0xDEADBEF1,
+ "sampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "true"},
+ )
+ self.assertTrue(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {"sampled.expect": "true"})
+ self.assertEqual(sample_result.trace_state, trace_state)
+
+ sample_result = sampling.DEFAULT_ON.should_sample(
+ None,
+ 0xDEADBEF1,
+ "no parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "true"},
+ )
+ self.assertTrue(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {"sampled.expect": "true"})
+ self.assertIsNone(sample_result.trace_state)
+
+ def test_default_off(self):
+ trace_state = trace.TraceState([("key", "value")])
+ context = self._create_parent(TO_DEFAULT, False, trace_state)
+ sample_result = sampling.DEFAULT_OFF.should_sample(
+ context,
+ 0xDEADBEF1,
+ "unsampled parent, sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect", "false"},
+ )
+ self.assertFalse(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {})
+ self.assertEqual(sample_result.trace_state, trace_state)
+
+ context = self._create_parent(TO_SAMPLED, False, trace_state)
+ sample_result = sampling.DEFAULT_OFF.should_sample(
+ context,
+ 0xDEADBEF1,
+ "sampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "true"},
+ )
+ self.assertTrue(sample_result.decision.is_sampled())
+ self.assertEqual(sample_result.attributes, {"sampled.expect": "true"})
+ self.assertEqual(sample_result.trace_state, trace_state)
+
+ default_off = sampling.DEFAULT_OFF.should_sample(
+ None,
+ 0xDEADBEF1,
+ "unsampled parent, sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "false"},
+ )
+ self.assertFalse(default_off.decision.is_sampled())
+ self.assertEqual(default_off.attributes, {})
+ self.assertIsNone(default_off.trace_state)
+
+ def test_probability_sampler(self):
+ sampler = sampling.TraceIdRatioBased(0.5)
+
+ # Check that we sample based on the trace ID if the parent context is
+ # null; trace_state should also be empty, since it is derived from the
+ # parent
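+ # for a 0.5 ratio the bound is presumably 0.5 * 2**64 ==
+ # 0x8000000000000000: trace IDs strictly below the bound are sampled,
+ # IDs at or above it are dropped, as the two calls below verify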
+ sampled_result = sampler.should_sample(
+ None,
+ 0x7FFFFFFFFFFFFFFF,
+ "sampled true",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "true"},
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled.expect": "true"})
+ self.assertIsNone(sampled_result.trace_state)
+
+ not_sampled_result = sampler.should_sample(
+ None,
+ 0x8000000000000000,
+ "sampled false",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled.expect": "false"},
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertIsNone(sampled_result.trace_state)
+
+ def test_probability_sampler_zero(self):
+ default_off = sampling.TraceIdRatioBased(0.0)
+ self.assertFalse(
+ default_off.should_sample(
+ None, 0x0, "span name"
+ ).decision.is_sampled()
+ )
+
+ def test_probability_sampler_one(self):
+ default_off = sampling.TraceIdRatioBased(1.0)
+ self.assertTrue(
+ default_off.should_sample(
+ None, 0xFFFFFFFFFFFFFFFF, "span name"
+ ).decision.is_sampled()
+ )
+
+ def test_probability_sampler_limits(self):
+ # Sample one of every 2**64 (~5.4e-20) traces. This is the lowest
+ # possible meaningful sampling rate; only traces with trace ID 0x0
+ # should get sampled.
+ almost_always_off = sampling.TraceIdRatioBased(2**-64)
+ self.assertTrue(
+ almost_always_off.should_sample(
+ None, 0x0, "span name"
+ ).decision.is_sampled()
+ )
+ self.assertFalse(
+ almost_always_off.should_sample(
+ None, 0x1, "span name"
+ ).decision.is_sampled()
+ )
+ self.assertEqual(
+ sampling.TraceIdRatioBased.get_bound_for_rate(2**-64), 0x1
+ )
+
+ # Sample every trace with trace ID less than 0xffffffffffffffff. In
+ # principle this is the highest possible sampling rate less than 1, but
+ # we can't actually express this rate as a float!
+ #
+ # In practice, the highest possible sampling rate is:
+ #
+ # 1 - sys.float_info.epsilon
+
+ almost_always_on = sampling.TraceIdRatioBased(1 - 2**-64)
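+ # note that 1 - 2**-64 rounds to exactly 1.0 in IEEE-754 double
+ # precision, which is why the commented-out assertions below cannot
+ # pass as written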
+ self.assertTrue(
+ almost_always_on.should_sample(
+ None, 0xFFFFFFFFFFFFFFFE, "span name"
+ ).decision.is_sampled()
+ )
+
+ # These tests are logically consistent, but fail because of the float
+ # precision issue above. Changing the sampler to check fewer bytes of
+ # the trace ID will cause these to pass.
+
+ # self.assertFalse(
+ # almost_always_on.should_sample(
+ # None,
+ # 0xFFFFFFFFFFFFFFFF,
+ # "span name",
+ # ).decision.is_sampled()
+ # )
+ # self.assertEqual(
+ # sampling.TraceIdRatioBased.get_bound_for_rate(1 - 2 ** -64)),
+ # 0xFFFFFFFFFFFFFFFF,
+ # )
+
+ # Check that a sampler with the highest effective sampling rate < 1
+ # refuses to sample traces with trace ID 0xffffffffffffffff.
+ almost_almost_always_on = sampling.TraceIdRatioBased(
+ 1 - sys.float_info.epsilon
+ )
+ self.assertFalse(
+ almost_almost_always_on.should_sample(
+ None, 0xFFFFFFFFFFFFFFFF, "span name"
+ ).decision.is_sampled()
+ )
+ # Check that the highest effective sampling rate is actually lower than
+ # the highest theoretical sampling rate. If this test fails the test
+ # above is wrong.
+ self.assertLess(
+ almost_almost_always_on.bound,
+ 0xFFFFFFFFFFFFFFFF,
+ )
+
+ # pylint:disable=too-many-statements
+ def exec_parent_based(self, parent_sampling_context):
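+ # exercises the ParentBased matrix: {local, remote} x {unsampled,
+ # sampled} parents, each with the default delegate and with an
+ # explicitly overridden one, plus the root-span fall-through at the end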
+ trace_state = trace.TraceState([("key", "value")])
+ sampler = sampling.ParentBased(sampling.ALWAYS_ON)
+ # Check that the sampling decision matches the parent context if given
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_DEFAULT,
+ trace_state=trace_state,
+ )
+ ) as context:
+ # local, not sampled
+ not_sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertEqual(not_sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_DEFAULT,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(
+ root=sampling.ALWAYS_OFF,
+ local_parent_not_sampled=sampling.ALWAYS_ON,
+ )
+ # local, not sampled -> opposite sampler
+ sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled": "false"})
+ self.assertEqual(sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_SAMPLED,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(sampling.ALWAYS_OFF)
+ # local, sampled
+ sampled_result = sampler.should_sample(
+ context,
+ 0x8000000000000000,
+ "sampled parent, sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "true"},
+ trace_state=trace_state,
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled": "true"})
+ self.assertEqual(sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_SAMPLED,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(
+ root=sampling.ALWAYS_ON,
+ local_parent_sampled=sampling.ALWAYS_OFF,
+ )
+ # local, sampled -> opposite sampler
+ not_sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ trace_state=trace_state,
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertEqual(not_sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_DEFAULT,
+ is_remote=True,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(sampling.ALWAYS_ON)
+ # remote, not sampled
+ not_sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ trace_state=trace_state,
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertEqual(not_sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_DEFAULT,
+ is_remote=True,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(
+ root=sampling.ALWAYS_OFF,
+ remote_parent_not_sampled=sampling.ALWAYS_ON,
+ )
+ # remote, not sampled -> opposite sampler
+ sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled": "false"})
+ self.assertEqual(sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_SAMPLED,
+ is_remote=True,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(sampling.ALWAYS_OFF)
+ # remote, sampled
+ sampled_result = sampler.should_sample(
+ context,
+ 0x8000000000000000,
+ "sampled parent, sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "true"},
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled": "true"})
+ self.assertEqual(sampled_result.trace_state, trace_state)
+
+ with parent_sampling_context(
+ self._create_parent_span(
+ trace_flags=TO_SAMPLED,
+ is_remote=True,
+ trace_state=trace_state,
+ )
+ ) as context:
+ sampler = sampling.ParentBased(
+ root=sampling.ALWAYS_ON,
+ remote_parent_sampled=sampling.ALWAYS_OFF,
+ )
+ # remote, sampled -> opposite sampler
+ not_sampled_result = sampler.should_sample(
+ context,
+ 0x7FFFFFFFFFFFFFFF,
+ "unsampled parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertEqual(not_sampled_result.trace_state, trace_state)
+
+ # for root span follow decision of root sampler
+ with parent_sampling_context(trace.INVALID_SPAN) as context:
+ sampler = sampling.ParentBased(sampling.ALWAYS_OFF)
+ not_sampled_result = sampler.should_sample(
+ context,
+ 0x8000000000000000,
+ "parent, sampling off",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "false"},
+ )
+ self.assertFalse(not_sampled_result.decision.is_sampled())
+ self.assertEqual(not_sampled_result.attributes, {})
+ self.assertIsNone(not_sampled_result.trace_state)
+
+ with parent_sampling_context(trace.INVALID_SPAN) as context:
+ sampler = sampling.ParentBased(sampling.ALWAYS_ON)
+ sampled_result = sampler.should_sample(
+ context,
+ 0x8000000000000000,
+ "no parent, sampling on",
+ trace.SpanKind.INTERNAL,
+ attributes={"sampled": "true"},
+ trace_state=trace_state,
+ )
+ self.assertTrue(sampled_result.decision.is_sampled())
+ self.assertEqual(sampled_result.attributes, {"sampled": "true"})
+ self.assertIsNone(sampled_result.trace_state)
+
+ def test_parent_based_explicit_parent_context(self):
+ @contextlib.contextmanager
+ def explicit_parent_context(span: trace.Span):
+ yield trace.set_span_in_context(span)
+
+ self.exec_parent_based(explicit_parent_context)
+
+ def test_parent_based_implicit_parent_context(self):
+ @contextlib.contextmanager
+ def implicit_parent_context(span: trace.Span):
+ token = context_api.attach(trace.set_span_in_context(span))
+ yield None
+ context_api.detach(token)
+
+ self.exec_parent_based(implicit_parent_context)
diff --git a/opentelemetry-sdk/tests/trace/test_span_processor.py b/opentelemetry-sdk/tests/trace/test_span_processor.py
new file mode 100644
index 0000000000..b8568fc7a1
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/test_span_processor.py
@@ -0,0 +1,309 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import abc
+import time
+import typing
+import unittest
+from threading import Event
+from typing import Optional
+from unittest import mock
+
+from opentelemetry import trace as trace_api
+from opentelemetry.context import Context
+from opentelemetry.sdk import trace
+
+
+def span_event_start_fmt(span_processor_name, span_name):
+ return span_processor_name + ":" + span_name + ":start"
+
+
+def span_event_end_fmt(span_processor_name, span_name):
+ return span_processor_name + ":" + span_name + ":end"
+
+
+class MySpanProcessor(trace.SpanProcessor):
+ def __init__(self, name, span_list):
+ self.name = name
+ self.span_list = span_list
+
+ def on_start(
+ self, span: "trace.Span", parent_context: Optional[Context] = None
+ ) -> None:
+ self.span_list.append(span_event_start_fmt(self.name, span.name))
+
+ def on_end(self, span: "trace.Span") -> None:
+ self.span_list.append(span_event_end_fmt(self.name, span.name))
+
+
+class TestSpanProcessor(unittest.TestCase):
+ def test_span_processor(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_calls_list = [] # filled by MySpanProcessor
+ expected_list = [] # filled by hand
+
+ # Span processors are created but not added to the tracer provider yet
+ sp1 = MySpanProcessor("SP1", spans_calls_list)
+ sp2 = MySpanProcessor("SP2", spans_calls_list)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ pass
+
+ # at this point lists must be empty
+ self.assertEqual(len(spans_calls_list), 0)
+
+ # add single span processor
+ tracer_provider.add_span_processor(sp1)
+
+ with tracer.start_as_current_span("foo"):
+ expected_list.append(span_event_start_fmt("SP1", "foo"))
+
+ with tracer.start_as_current_span("bar"):
+ expected_list.append(span_event_start_fmt("SP1", "bar"))
+
+ with tracer.start_as_current_span("baz"):
+ expected_list.append(span_event_start_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+
+ self.assertListEqual(spans_calls_list, expected_list)
+
+ spans_calls_list.clear()
+ expected_list.clear()
+
+ # go for multiple span processors
+ tracer_provider.add_span_processor(sp2)
+
+ with tracer.start_as_current_span("foo"):
+ expected_list.append(span_event_start_fmt("SP1", "foo"))
+ expected_list.append(span_event_start_fmt("SP2", "foo"))
+
+ with tracer.start_as_current_span("bar"):
+ expected_list.append(span_event_start_fmt("SP1", "bar"))
+ expected_list.append(span_event_start_fmt("SP2", "bar"))
+
+ with tracer.start_as_current_span("baz"):
+ expected_list.append(span_event_start_fmt("SP1", "baz"))
+ expected_list.append(span_event_start_fmt("SP2", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+ expected_list.append(span_event_end_fmt("SP2", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+ expected_list.append(span_event_end_fmt("SP2", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+ expected_list.append(span_event_end_fmt("SP2", "foo"))
+
+        # verify that the two lists are identical
+ self.assertListEqual(spans_calls_list, expected_list)
+
+ def test_add_span_processor_after_span_creation(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_calls_list = [] # filled by MySpanProcessor
+ expected_list = [] # filled by hand
+
+        # Span processor is created but not yet added to the tracer provider
+ sp = MySpanProcessor("SP1", spans_calls_list)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ # add span processor after spans have been created
+ tracer_provider.add_span_processor(sp)
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+
+ self.assertListEqual(spans_calls_list, expected_list)
+
+
+class MultiSpanProcessorTestBase(abc.ABC):
+ @abc.abstractmethod
+ def create_multi_span_processor(
+ self,
+ ) -> typing.Union[
+ trace.SynchronousMultiSpanProcessor, trace.ConcurrentMultiSpanProcessor
+ ]:
+ pass
+
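+    # Uses a NonRecordingSpan with fixed ids: the mock processors below only
+    # assert that the same span object is forwarded, so a recording span is
+    # not needed.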
+ @staticmethod
+ def create_default_span() -> trace_api.Span:
+ span_context = trace_api.SpanContext(37, 73, is_remote=False)
+ return trace_api.NonRecordingSpan(span_context)
+
+ def test_on_start(self):
+ multi_processor = self.create_multi_span_processor()
+
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 5)]
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+
+ span = self.create_default_span()
+ context = Context()
+ multi_processor.on_start(span, parent_context=context)
+
+ for mock_processor in mocks:
+ mock_processor.on_start.assert_called_once_with(
+ span, parent_context=context
+ )
+ multi_processor.shutdown()
+
+ def test_on_end(self):
+ multi_processor = self.create_multi_span_processor()
+
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 5)]
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+
+ span = self.create_default_span()
+ multi_processor.on_end(span)
+
+ for mock_processor in mocks:
+ mock_processor.on_end.assert_called_once_with(span)
+ multi_processor.shutdown()
+
+ def test_on_shutdown(self):
+ multi_processor = self.create_multi_span_processor()
+
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 5)]
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+
+ multi_processor.shutdown()
+
+ for mock_processor in mocks:
+ mock_processor.shutdown.assert_called_once_with()
+
+ def test_force_flush(self):
+ multi_processor = self.create_multi_span_processor()
+
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 5)]
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+ timeout_millis = 100
+
+ flushed = multi_processor.force_flush(timeout_millis)
+
+ # pylint: disable=no-member
+ self.assertTrue(flushed)
+ for mock_processor in mocks:
+ # pylint: disable=no-member
+ self.assertEqual(1, mock_processor.force_flush.call_count)
+ multi_processor.shutdown()
+
+
+class TestSynchronousMultiSpanProcessor(
+ MultiSpanProcessorTestBase, unittest.TestCase
+):
+ def create_multi_span_processor(
+ self,
+ ) -> trace.SynchronousMultiSpanProcessor:
+ return trace.SynchronousMultiSpanProcessor()
+
+ def test_force_flush_late_by_timeout(self):
+ multi_processor = trace.SynchronousMultiSpanProcessor()
+
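+        # sleep just longer than the 50 ms budget given to force_flush below,
+        # so the overall flush is reported as timed out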
+ def delayed_flush(_):
+ time.sleep(0.055)
+
+ mock_processor1 = mock.Mock(spec=trace.SpanProcessor)
+ mock_processor1.force_flush = mock.Mock(side_effect=delayed_flush)
+ multi_processor.add_span_processor(mock_processor1)
+ mock_processor2 = mock.Mock(spec=trace.SpanProcessor)
+ multi_processor.add_span_processor(mock_processor2)
+
+ flushed = multi_processor.force_flush(50)
+
+ self.assertFalse(flushed)
+ self.assertEqual(1, mock_processor1.force_flush.call_count)
+ self.assertEqual(0, mock_processor2.force_flush.call_count)
+
+ def test_force_flush_late_by_span_processor(self):
+ multi_processor = trace.SynchronousMultiSpanProcessor()
+
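+        # the first processor reports a failed flush; the synchronous
+        # implementation stops there, so the second is never flushed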
+ mock_processor1 = mock.Mock(spec=trace.SpanProcessor)
+ mock_processor1.force_flush = mock.Mock(return_value=False)
+ multi_processor.add_span_processor(mock_processor1)
+ mock_processor2 = mock.Mock(spec=trace.SpanProcessor)
+ multi_processor.add_span_processor(mock_processor2)
+
+ flushed = multi_processor.force_flush(50)
+ self.assertFalse(flushed)
+ self.assertEqual(1, mock_processor1.force_flush.call_count)
+ self.assertEqual(0, mock_processor2.force_flush.call_count)
+
+
+class TestConcurrentMultiSpanProcessor(
+ MultiSpanProcessorTestBase, unittest.TestCase
+):
+ def create_multi_span_processor(
+ self,
+ ) -> trace.ConcurrentMultiSpanProcessor:
+ return trace.ConcurrentMultiSpanProcessor(3)
+
+ def test_force_flush_late_by_timeout(self):
+ multi_processor = trace.ConcurrentMultiSpanProcessor(5)
+ wait_event = Event()
+
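+        # block one processor on an Event so its flush cannot finish within
+        # the 10 ms timeout, while the remaining four flush immediately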
+ def delayed_flush(_):
+ wait_event.wait()
+
+ late_mock = mock.Mock(spec=trace.SpanProcessor)
+ late_mock.force_flush = mock.Mock(side_effect=delayed_flush)
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 4)]
+ mocks.insert(0, late_mock)
+
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+
+ flushed = multi_processor.force_flush(timeout_millis=10)
+ # let the thread executing the late_mock continue
+ wait_event.set()
+
+ self.assertFalse(flushed)
+ for mock_processor in mocks:
+ self.assertEqual(1, mock_processor.force_flush.call_count)
+ multi_processor.shutdown()
+
+ def test_force_flush_late_by_span_processor(self):
+ multi_processor = trace.ConcurrentMultiSpanProcessor(5)
+
+ late_mock = mock.Mock(spec=trace.SpanProcessor)
+ late_mock.force_flush = mock.Mock(return_value=False)
+ mocks = [mock.Mock(spec=trace.SpanProcessor) for _ in range(0, 4)]
+ mocks.insert(0, late_mock)
+
+ for mock_processor in mocks:
+ multi_processor.add_span_processor(mock_processor)
+
+ flushed = multi_processor.force_flush()
+
+ self.assertFalse(flushed)
+ for mock_processor in mocks:
+ self.assertEqual(1, mock_processor.force_flush.call_count)
+ multi_processor.shutdown()
diff --git a/opentelemetry-sdk/tests/trace/test_trace.py b/opentelemetry-sdk/tests/trace/test_trace.py
new file mode 100644
index 0000000000..4150d60d10
--- /dev/null
+++ b/opentelemetry-sdk/tests/trace/test_trace.py
@@ -0,0 +1,1962 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+
+import shutil
+import subprocess
+import unittest
+from importlib import reload
+from logging import ERROR, WARNING
+from random import randint
+from time import time_ns
+from typing import Optional
+from unittest import mock
+from unittest.mock import Mock, patch
+
+from opentelemetry import trace as trace_api
+from opentelemetry.context import Context
+from opentelemetry.sdk import resources, trace
+from opentelemetry.sdk.environment_variables import (
+ OTEL_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_LINK_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT,
+ OTEL_SPAN_EVENT_COUNT_LIMIT,
+ OTEL_SPAN_LINK_COUNT_LIMIT,
+ OTEL_TRACES_SAMPLER,
+ OTEL_TRACES_SAMPLER_ARG,
+)
+from opentelemetry.sdk.trace import Resource, TracerProvider
+from opentelemetry.sdk.trace.id_generator import RandomIdGenerator
+from opentelemetry.sdk.trace.sampling import (
+ ALWAYS_OFF,
+ ALWAYS_ON,
+ Decision,
+ ParentBased,
+ StaticSampler,
+)
+from opentelemetry.sdk.util import BoundedDict, ns_to_iso_str
+from opentelemetry.sdk.util.instrumentation import InstrumentationInfo
+from opentelemetry.test.spantestutil import (
+ get_span_with_dropped_attributes_events_links,
+ new_tracer,
+)
+from opentelemetry.trace import (
+ Status,
+ StatusCode,
+ get_tracer,
+ set_tracer_provider,
+)
+
+
+class TestTracer(unittest.TestCase):
+ def test_no_deprecated_warning(self):
+ with self.assertRaises(AssertionError):
+ with self.assertWarns(DeprecationWarning):
+ TracerProvider(Mock(), Mock()).get_tracer(Mock(), Mock())
+
+ # This is being added here to make sure the filter on
+ # InstrumentationInfo does not affect other DeprecationWarnings that
+ # may be raised.
+ with self.assertWarns(DeprecationWarning):
+ BoundedDict(0)
+
+ def test_extends_api(self):
+ tracer = new_tracer()
+ self.assertIsInstance(tracer, trace.Tracer)
+ self.assertIsInstance(tracer, trace_api.Tracer)
+
+ def test_shutdown(self):
+ tracer_provider = trace.TracerProvider()
+
+ mock_processor1 = mock.Mock(spec=trace.SpanProcessor)
+ tracer_provider.add_span_processor(mock_processor1)
+
+ mock_processor2 = mock.Mock(spec=trace.SpanProcessor)
+ tracer_provider.add_span_processor(mock_processor2)
+
+ tracer_provider.shutdown()
+
+ self.assertEqual(mock_processor1.shutdown.call_count, 1)
+ self.assertEqual(mock_processor2.shutdown.call_count, 1)
+
+ shutdown_python_code = """
+import atexit
+from unittest import mock
+
+from opentelemetry.sdk import trace
+
+mock_processor = mock.Mock(spec=trace.SpanProcessor)
+
+def print_shutdown_count():
+ print(mock_processor.shutdown.call_count)
+
+# atexit hooks run in the reverse order of their registration, so register
+# this one before creating the tracer provider
+atexit.register(print_shutdown_count)
+
+tracer_provider = trace.TracerProvider({tracer_parameters})
+tracer_provider.add_span_processor(mock_processor)
+
+{tracer_shutdown}
+"""
+
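+        # render the template above and run it in a fresh interpreter, so the
+        # atexit-driven shutdown behaviour can be observed via its stdout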
+ def run_general_code(shutdown_on_exit, explicit_shutdown):
+ tracer_parameters = ""
+ tracer_shutdown = ""
+
+ if not shutdown_on_exit:
+ tracer_parameters = "shutdown_on_exit=False"
+
+ if explicit_shutdown:
+ tracer_shutdown = "tracer_provider.shutdown()"
+
+ return subprocess.check_output(
+ [
+                # use shutil.which to avoid invoking a python interpreter
+                # outside the virtualenv on Windows.
+ shutil.which("python"),
+ "-c",
+ shutdown_python_code.format(
+ tracer_parameters=tracer_parameters,
+ tracer_shutdown=tracer_shutdown,
+ ),
+ ]
+ )
+
+ # test default shutdown_on_exit (True)
+ out = run_general_code(True, False)
+ self.assertTrue(out.startswith(b"1"))
+
+        # test that shutdown is called only once even if
+        # TracerProvider.shutdown is called explicitly
+ out = run_general_code(True, True)
+ self.assertTrue(out.startswith(b"1"))
+
+ # test shutdown_on_exit=False
+ out = run_general_code(False, False)
+ self.assertTrue(out.startswith(b"0"))
+
+ def test_tracer_provider_accepts_concurrent_multi_span_processor(self):
+ span_processor = trace.ConcurrentMultiSpanProcessor(2)
+ tracer_provider = trace.TracerProvider(
+ active_span_processor=span_processor
+ )
+
+ # pylint: disable=protected-access
+ self.assertEqual(
+ span_processor, tracer_provider._active_span_processor
+ )
+
+
+class TestTracerSampling(unittest.TestCase):
+ def tearDown(self):
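+        # re-import trace to undo any env-driven reconfiguration performed by
+        # the tests below, restoring the module's default state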
+ reload(trace)
+
+ def test_default_sampler(self):
+ tracer = new_tracer()
+
+ # Check that the default tracer creates real spans via the default
+ # sampler
+ root_span = tracer.start_span(name="root span", context=None)
+ ctx = trace_api.set_span_in_context(root_span)
+ self.assertIsInstance(root_span, trace.Span)
+ child_span = tracer.start_span(name="child span", context=ctx)
+ self.assertIsInstance(child_span, trace.Span)
+ self.assertTrue(root_span.context.trace_flags.sampled)
+ self.assertEqual(
+ root_span.get_span_context().trace_flags,
+ trace_api.TraceFlags.SAMPLED,
+ )
+ self.assertEqual(
+ child_span.get_span_context().trace_flags,
+ trace_api.TraceFlags.SAMPLED,
+ )
+
+ def test_default_sampler_type(self):
+ tracer_provider = trace.TracerProvider()
+ self.verify_default_sampler(tracer_provider)
+
+ @mock.patch("opentelemetry.sdk.trace.sampling._get_from_env_or_default")
+ def test_sampler_no_sampling(self, _get_from_env_or_default):
+ tracer_provider = trace.TracerProvider(ALWAYS_OFF)
+ tracer = tracer_provider.get_tracer(__name__)
+
+        # Check that the tracer creates no-op spans when the sampler
+        # decides not to sample
+ root_span = tracer.start_span(name="root span", context=None)
+ ctx = trace_api.set_span_in_context(root_span)
+ self.assertIsInstance(root_span, trace_api.NonRecordingSpan)
+ child_span = tracer.start_span(name="child span", context=ctx)
+ self.assertIsInstance(child_span, trace_api.NonRecordingSpan)
+ self.assertEqual(
+ root_span.get_span_context().trace_flags,
+ trace_api.TraceFlags.DEFAULT,
+ )
+ self.assertEqual(
+ child_span.get_span_context().trace_flags,
+ trace_api.TraceFlags.DEFAULT,
+ )
+ self.assertFalse(_get_from_env_or_default.called)
+
+ @mock.patch.dict("os.environ", {OTEL_TRACES_SAMPLER: "always_off"})
+ def test_sampler_with_env(self):
+ # pylint: disable=protected-access
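+        # reload the module so the new TracerProvider re-reads the patched
+        # OTEL_TRACES_SAMPLER environment variable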
+ reload(trace)
+ tracer_provider = trace.TracerProvider()
+ self.assertIsInstance(tracer_provider.sampler, StaticSampler)
+ self.assertEqual(tracer_provider.sampler._decision, Decision.DROP)
+
+ tracer = tracer_provider.get_tracer(__name__)
+
+ root_span = tracer.start_span(name="root span", context=None)
+ # Should be no-op
+ self.assertIsInstance(root_span, trace_api.NonRecordingSpan)
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_TRACES_SAMPLER: "parentbased_traceidratio",
+ OTEL_TRACES_SAMPLER_ARG: "0.25",
+ },
+ )
+ def test_ratio_sampler_with_env(self):
+ # pylint: disable=protected-access
+ reload(trace)
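+        # parentbased_traceidratio is expected to yield a ParentBased sampler
+        # whose root samples by trace-id ratio, with OTEL_TRACES_SAMPLER_ARG
+        # as the rate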
+ tracer_provider = trace.TracerProvider()
+ self.assertIsInstance(tracer_provider.sampler, ParentBased)
+ self.assertEqual(tracer_provider.sampler._root.rate, 0.25)
+
+ def verify_default_sampler(self, tracer_provider):
+ self.assertIsInstance(tracer_provider.sampler, ParentBased)
+ # pylint: disable=protected-access
+ self.assertEqual(tracer_provider.sampler._root, ALWAYS_ON)
+
+
+class TestSpanCreation(unittest.TestCase):
+ def test_start_span_invalid_spancontext(self):
+ """If an invalid span context is passed as the parent, the created
+ span should use a new span id.
+
+ Invalid span contexts should also not be added as a parent. This
+ eliminates redundant error handling logic in exporters.
+ """
+ tracer = new_tracer()
+ parent_context = trace_api.set_span_in_context(
+ trace_api.INVALID_SPAN_CONTEXT
+ )
+ new_span = tracer.start_span("root", context=parent_context)
+ self.assertTrue(new_span.context.is_valid)
+ self.assertIsNone(new_span.parent)
+
+ def test_instrumentation_info(self):
+ tracer_provider = trace.TracerProvider()
+ schema_url = "https://opentelemetry.io/schemas/1.3.0"
+ tracer1 = tracer_provider.get_tracer("instr1")
+ tracer2 = tracer_provider.get_tracer("instr2", "1.3b3", schema_url)
+ span1 = tracer1.start_span("s1")
+ span2 = tracer2.start_span("s2")
+ with self.assertWarns(DeprecationWarning):
+ self.assertEqual(
+ span1.instrumentation_info, InstrumentationInfo("instr1", "")
+ )
+ with self.assertWarns(DeprecationWarning):
+ self.assertEqual(
+ span2.instrumentation_info,
+ InstrumentationInfo("instr2", "1.3b3", schema_url),
+ )
+
+ with self.assertWarns(DeprecationWarning):
+ self.assertEqual(span2.instrumentation_info.schema_url, schema_url)
+ with self.assertWarns(DeprecationWarning):
+ self.assertEqual(span2.instrumentation_info.version, "1.3b3")
+ with self.assertWarns(DeprecationWarning):
+ self.assertEqual(span2.instrumentation_info.name, "instr2")
+
+ with self.assertWarns(DeprecationWarning):
+ self.assertLess(
+ span1.instrumentation_info, span2.instrumentation_info
+ ) # Check sortability.
+
+ def test_invalid_instrumentation_info(self):
+ tracer_provider = trace.TracerProvider()
+ with self.assertLogs(level=ERROR):
+ tracer1 = tracer_provider.get_tracer("")
+ with self.assertLogs(level=ERROR):
+ tracer2 = tracer_provider.get_tracer(None)
+
+ self.assertIsInstance(
+ tracer1.instrumentation_info, InstrumentationInfo
+ )
+ span1 = tracer1.start_span("foo")
+ self.assertTrue(span1.is_recording())
+ self.assertEqual(tracer1.instrumentation_info.schema_url, "")
+ self.assertEqual(tracer1.instrumentation_info.version, "")
+ self.assertEqual(tracer1.instrumentation_info.name, "")
+
+ self.assertIsInstance(
+ tracer2.instrumentation_info, InstrumentationInfo
+ )
+ span2 = tracer2.start_span("bar")
+ self.assertTrue(span2.is_recording())
+ self.assertEqual(tracer2.instrumentation_info.schema_url, "")
+ self.assertEqual(tracer2.instrumentation_info.version, "")
+ self.assertEqual(tracer2.instrumentation_info.name, "")
+
+ self.assertEqual(
+ tracer1.instrumentation_info, tracer2.instrumentation_info
+ )
+
+ def test_span_processor_for_source(self):
+ tracer_provider = trace.TracerProvider()
+ tracer1 = tracer_provider.get_tracer("instr1")
+ tracer2 = tracer_provider.get_tracer("instr2", "1.3b3")
+ span1 = tracer1.start_span("s1")
+ span2 = tracer2.start_span("s2")
+
+ # pylint:disable=protected-access
+ self.assertIs(
+ span1._span_processor, tracer_provider._active_span_processor
+ )
+ self.assertIs(
+ span2._span_processor, tracer_provider._active_span_processor
+ )
+
+ def test_start_span_implicit(self):
+ tracer = new_tracer()
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ root = tracer.start_span("root")
+ self.assertIsNotNone(root.start_time)
+ self.assertIsNone(root.end_time)
+ self.assertEqual(root.kind, trace_api.SpanKind.INTERNAL)
+
+ with trace_api.use_span(root, True):
+ self.assertIs(trace_api.get_current_span(), root)
+
+ with tracer.start_span(
+ "child", kind=trace_api.SpanKind.CLIENT
+ ) as child:
+ self.assertIs(child.parent, root.get_span_context())
+ self.assertEqual(child.kind, trace_api.SpanKind.CLIENT)
+
+ self.assertIsNotNone(child.start_time)
+ self.assertIsNone(child.end_time)
+
+ # The new child span should inherit the parent's context but
+ # get a new span ID.
+ root_context = root.get_span_context()
+ child_context = child.get_span_context()
+ self.assertEqual(root_context.trace_id, child_context.trace_id)
+ self.assertNotEqual(
+ root_context.span_id, child_context.span_id
+ )
+ self.assertEqual(
+ root_context.trace_state, child_context.trace_state
+ )
+ self.assertEqual(
+ root_context.trace_flags, child_context.trace_flags
+ )
+
+ # Verify start_span() did not set the current span.
+ self.assertIs(trace_api.get_current_span(), root)
+
+ self.assertIsNotNone(child.end_time)
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+ self.assertIsNotNone(root.end_time)
+
+ def test_start_span_explicit(self):
+ tracer = new_tracer()
+
+ other_parent = trace._Span(
+ "name",
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED),
+ ),
+ )
+
+ other_parent_context = trace_api.set_span_in_context(other_parent)
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ root = tracer.start_span("root")
+ self.assertIsNotNone(root.start_time)
+ self.assertIsNone(root.end_time)
+
+ # Test with the implicit root span
+ with trace_api.use_span(root, True):
+ self.assertIs(trace_api.get_current_span(), root)
+
+ with tracer.start_span("stepchild", other_parent_context) as child:
+ # The child's parent should be the one passed in,
+ # not the current span.
+ self.assertNotEqual(child.parent, root)
+ self.assertIs(child.parent, other_parent.get_span_context())
+
+ self.assertIsNotNone(child.start_time)
+ self.assertIsNone(child.end_time)
+
+ # The child should inherit its context from the explicit
+ # parent, not the current span.
+ child_context = child.get_span_context()
+ self.assertEqual(
+ other_parent.get_span_context().trace_id,
+ child_context.trace_id,
+ )
+ self.assertNotEqual(
+ other_parent.get_span_context().span_id,
+ child_context.span_id,
+ )
+ self.assertEqual(
+ other_parent.get_span_context().trace_state,
+ child_context.trace_state,
+ )
+ self.assertEqual(
+ other_parent.get_span_context().trace_flags,
+ child_context.trace_flags,
+ )
+
+ # Verify start_span() did not set the current span.
+ self.assertIs(trace_api.get_current_span(), root)
+
+ # Verify ending the child did not set the current span.
+ self.assertIs(trace_api.get_current_span(), root)
+ self.assertIsNotNone(child.end_time)
+
+ def test_start_as_current_span_implicit(self):
+ tracer = new_tracer()
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ with tracer.start_as_current_span("root") as root:
+ self.assertIs(trace_api.get_current_span(), root)
+
+ with tracer.start_as_current_span("child") as child:
+ self.assertIs(trace_api.get_current_span(), child)
+ self.assertIs(child.parent, root.get_span_context())
+
+ # After exiting the child's scope the parent should become the
+ # current span again.
+ self.assertIs(trace_api.get_current_span(), root)
+ self.assertIsNotNone(child.end_time)
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+ self.assertIsNotNone(root.end_time)
+
+ def test_start_as_current_span_explicit(self):
+ tracer = new_tracer()
+
+ other_parent = trace._Span(
+ "name",
+ trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED),
+ ),
+ )
+ other_parent_ctx = trace_api.set_span_in_context(other_parent)
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ # Test with the implicit root span
+ with tracer.start_as_current_span("root") as root:
+ self.assertIs(trace_api.get_current_span(), root)
+
+ self.assertIsNotNone(root.start_time)
+ self.assertIsNone(root.end_time)
+
+ with tracer.start_as_current_span(
+ "stepchild", other_parent_ctx
+ ) as child:
+ # The child should become the current span as usual, but its
+ # parent should be the one passed in, not the
+ # previously-current span.
+ self.assertIs(trace_api.get_current_span(), child)
+ self.assertNotEqual(child.parent, root)
+ self.assertIs(child.parent, other_parent.get_span_context())
+
+ # After exiting the child's scope the last span on the stack should
+ # become current, not the child's parent.
+ self.assertNotEqual(trace_api.get_current_span(), other_parent)
+ self.assertIs(trace_api.get_current_span(), root)
+ self.assertIsNotNone(child.end_time)
+
+ def test_start_as_current_span_decorator(self):
+ tracer = new_tracer()
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ @tracer.start_as_current_span("root")
+ def func():
+ root = trace_api.get_current_span()
+
+ with tracer.start_as_current_span("child") as child:
+ self.assertIs(trace_api.get_current_span(), child)
+ self.assertIs(child.parent, root.get_span_context())
+
+ # After exiting the child's scope the parent should become the
+ # current span again.
+ self.assertIs(trace_api.get_current_span(), root)
+ self.assertIsNotNone(child.end_time)
+
+ return root
+
+ root1 = func()
+
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+ self.assertIsNotNone(root1.end_time)
+
+ # Second call must create a new span
+ root2 = func()
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+ self.assertIsNotNone(root2.end_time)
+ self.assertIsNot(root1, root2)
+
+ def test_start_as_current_span_no_end_on_exit(self):
+ tracer = new_tracer()
+
+ with tracer.start_as_current_span("root", end_on_exit=False) as root:
+ self.assertIsNone(root.end_time)
+
+ self.assertIsNone(root.end_time)
+
+ def test_explicit_span_resource(self):
+ resource = resources.Resource.create({})
+ tracer_provider = trace.TracerProvider(resource=resource)
+ tracer = tracer_provider.get_tracer(__name__)
+ span = tracer.start_span("root")
+ self.assertIs(span.resource, resource)
+
+ def test_default_span_resource(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+ span = tracer.start_span("root")
+ # pylint: disable=protected-access
+ self.assertIsInstance(span.resource, resources.Resource)
+ self.assertEqual(
+ span.resource.attributes.get(resources.SERVICE_NAME),
+ "unknown_service",
+ )
+ self.assertEqual(
+ span.resource.attributes.get(resources.TELEMETRY_SDK_LANGUAGE),
+ "python",
+ )
+ self.assertEqual(
+ span.resource.attributes.get(resources.TELEMETRY_SDK_NAME),
+ "opentelemetry",
+ )
+ self.assertEqual(
+ span.resource.attributes.get(resources.TELEMETRY_SDK_VERSION),
+ resources._OPENTELEMETRY_SDK_VERSION,
+ )
+
+ def test_span_context_remote_flag(self):
+ tracer = new_tracer()
+
+ span = tracer.start_span("foo")
+ self.assertFalse(span.context.is_remote)
+
+ def test_disallow_direct_span_creation(self):
+ with self.assertRaises(TypeError):
+ # pylint: disable=abstract-class-instantiated
+ trace.Span("name", mock.Mock(spec=trace_api.SpanContext))
+
+ def test_surplus_span_links(self):
+ # pylint: disable=protected-access
+ max_links = trace.SpanLimits().max_links
+ links = [
+ trace_api.Link(trace_api.SpanContext(0x1, idx, is_remote=False))
+ for idx in range(0, 16 + max_links)
+ ]
+ tracer = new_tracer()
+ with tracer.start_as_current_span("span", links=links) as root:
+ self.assertEqual(len(root.links), max_links)
+
+ def test_surplus_span_attributes(self):
+ # pylint: disable=protected-access
+ max_attrs = trace.SpanLimits().max_span_attributes
+ attributes = {str(idx): idx for idx in range(0, 16 + max_attrs)}
+ tracer = new_tracer()
+ with tracer.start_as_current_span(
+ "span", attributes=attributes
+ ) as root:
+ self.assertEqual(len(root.attributes), max_attrs)
+
+
+class TestReadableSpan(unittest.TestCase):
+ def test_links(self):
+ span = trace.ReadableSpan("test")
+ self.assertEqual(span.links, ())
+
+ span = trace.ReadableSpan(
+ "test",
+ links=[trace_api.Link(context=trace_api.INVALID_SPAN_CONTEXT)] * 2,
+ )
+ self.assertEqual(len(span.links), 2)
+ for link in span.links:
+ self.assertFalse(link.context.is_valid)
+
+ def test_events(self):
+ span = trace.ReadableSpan("test")
+ self.assertEqual(span.events, ())
+ events = [
+ trace.Event("foo1", {"bar1": "baz1"}),
+ trace.Event("foo2", {"bar2": "baz2"}),
+ ]
+ span = trace.ReadableSpan("test", events=events)
+ self.assertEqual(span.events, tuple(events))
+
+
+class TestSpan(unittest.TestCase):
+ # pylint: disable=too-many-public-methods
+
+ def setUp(self):
+ self.tracer = new_tracer()
+
+ def test_basic_span(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ self.assertEqual(span.name, "name")
+
+ def test_attributes(self):
+ with self.tracer.start_as_current_span("root") as root:
+ root.set_attributes(
+ {
+ "http.request.method": "GET",
+ "url.full": "https://example.com:779/path/12/?q=d#123",
+ }
+ )
+
+ root.set_attribute("http.response.status_code", 200)
+ root.set_attribute("http.status_text", "OK")
+ root.set_attribute("misc.pi", 3.14)
+
+ # Setting an attribute with the same key as an existing attribute
+ # SHOULD overwrite the existing attribute's value.
+ root.set_attribute("attr-key", "attr-value1")
+ root.set_attribute("attr-key", "attr-value2")
+
+ root.set_attribute("empty-list", [])
+ list_of_bools = [True, True, False]
+ root.set_attribute("list-of-bools", list_of_bools)
+ list_of_numerics = [123, 314, 0]
+ root.set_attribute("list-of-numerics", list_of_numerics)
+
+ self.assertEqual(len(root.attributes), 9)
+ self.assertEqual(root.attributes["http.request.method"], "GET")
+ self.assertEqual(
+ root.attributes["url.full"],
+ "https://example.com:779/path/12/?q=d#123",
+ )
+ self.assertEqual(root.attributes["http.response.status_code"], 200)
+ self.assertEqual(root.attributes["http.status_text"], "OK")
+ self.assertEqual(root.attributes["misc.pi"], 3.14)
+ self.assertEqual(root.attributes["attr-key"], "attr-value2")
+ self.assertEqual(root.attributes["empty-list"], ())
+ self.assertEqual(
+ root.attributes["list-of-bools"], (True, True, False)
+ )
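+        # mutating the original lists must not affect the stored attributes,
+        # which were frozen into tuples when set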
+ list_of_bools.append(False)
+ self.assertEqual(
+ root.attributes["list-of-bools"], (True, True, False)
+ )
+ self.assertEqual(
+ root.attributes["list-of-numerics"], (123, 314, 0)
+ )
+ list_of_numerics.append(227)
+ self.assertEqual(
+ root.attributes["list-of-numerics"], (123, 314, 0)
+ )
+
+ attributes = {
+ "attr-key": "val",
+ "attr-key2": "val2",
+ "attr-in-both": "span-attr",
+ }
+ with self.tracer.start_as_current_span(
+ "root2", attributes=attributes
+ ) as root:
+ self.assertEqual(len(root.attributes), 3)
+ self.assertEqual(root.attributes["attr-key"], "val")
+ self.assertEqual(root.attributes["attr-key2"], "val2")
+ self.assertEqual(root.attributes["attr-in-both"], "span-attr")
+
+ def test_invalid_attribute_values(self):
+ with self.tracer.start_as_current_span("root") as root:
+ with self.assertLogs(level=WARNING):
+ root.set_attributes(
+ {"correct-value": "foo", "non-primitive-data-type": {}}
+ )
+
+ with self.assertLogs(level=WARNING):
+ root.set_attribute("non-primitive-data-type", {})
+ with self.assertLogs(level=WARNING):
+ root.set_attribute(
+ "list-of-mixed-data-types-numeric-first",
+ [123, False, "string"],
+ )
+ with self.assertLogs(level=WARNING):
+ root.set_attribute(
+ "list-of-mixed-data-types-non-numeric-first",
+ [False, 123, "string"],
+ )
+ with self.assertLogs(level=WARNING):
+ root.set_attribute(
+ "list-with-non-primitive-data-type", [{}, 123]
+ )
+ with self.assertLogs(level=WARNING):
+ root.set_attribute("list-with-numeric-and-bool", [1, True])
+
+ with self.assertLogs(level=WARNING):
+ root.set_attribute("", 123)
+ with self.assertLogs(level=WARNING):
+ root.set_attribute(None, 123)
+
+ self.assertEqual(len(root.attributes), 1)
+ self.assertEqual(root.attributes["correct-value"], "foo")
+
+ def test_byte_type_attribute_value(self):
+ with self.tracer.start_as_current_span("root") as root:
+ with self.assertLogs(level=WARNING):
+ root.set_attribute(
+ "invalid-byte-type-attribute",
+ b"\xd8\xe1\xb7\xeb\xa8\xe5 \xd2\xb7\xe1",
+ )
+ self.assertFalse(
+ "invalid-byte-type-attribute" in root.attributes
+ )
+
+ root.set_attribute("valid-byte-type-attribute", b"valid byte")
+ self.assertTrue(
+ isinstance(root.attributes["valid-byte-type-attribute"], str)
+ )
+
+ def test_sampling_attributes(self):
+ sampling_attributes = {
+ "sampler-attr": "sample-val",
+ "attr-in-both": "decision-attr",
+ }
+ tracer_provider = trace.TracerProvider(
+ StaticSampler(Decision.RECORD_AND_SAMPLE)
+ )
+
+ self.tracer = tracer_provider.get_tracer(__name__)
+
+ with self.tracer.start_as_current_span(
+ name="root2", attributes=sampling_attributes
+ ) as root:
+ self.assertEqual(len(root.attributes), 2)
+ self.assertEqual(root.attributes["sampler-attr"], "sample-val")
+ self.assertEqual(root.attributes["attr-in-both"], "decision-attr")
+ self.assertEqual(
+ root.get_span_context().trace_flags,
+ trace_api.TraceFlags.SAMPLED,
+ )
+
+ def test_events(self):
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ with self.tracer.start_as_current_span("root") as root:
+ # only event name
+ root.add_event("event0")
+
+ # event name and attributes
+ root.add_event(
+ "event1", {"name": "pluto", "some_bools": [True, False]}
+ )
+
+ # event name, attributes and timestamp
+ now = time_ns()
+ root.add_event("event2", {"name": ["birthday"]}, now)
+
+ mutable_list = ["original_contents"]
+ root.add_event("event3", {"name": mutable_list})
+
+ self.assertEqual(len(root.events), 4)
+
+ self.assertEqual(root.events[0].name, "event0")
+ self.assertEqual(root.events[0].attributes, {})
+
+ self.assertEqual(root.events[1].name, "event1")
+ self.assertEqual(
+ root.events[1].attributes,
+ {"name": "pluto", "some_bools": (True, False)},
+ )
+
+ self.assertEqual(root.events[2].name, "event2")
+ self.assertEqual(
+ root.events[2].attributes, {"name": ("birthday",)}
+ )
+ self.assertEqual(root.events[2].timestamp, now)
+
+ self.assertEqual(root.events[3].name, "event3")
+ self.assertEqual(
+ root.events[3].attributes, {"name": ("original_contents",)}
+ )
+            mutable_list.append("new_contents")
+ self.assertEqual(
+ root.events[3].attributes, {"name": ("original_contents",)}
+ )
+
+ def test_events_are_immutable(self):
+ event_properties = [
+ prop for prop in dir(trace.EventBase) if not prop.startswith("_")
+ ]
+
+ with self.tracer.start_as_current_span("root") as root:
+ root.add_event("event0", {"name": ["birthday"]})
+ event = root.events[0]
+
+ for prop in event_properties:
+ with self.assertRaises(AttributeError):
+ setattr(event, prop, "something")
+
+ def test_event_attributes_are_immutable(self):
+ with self.tracer.start_as_current_span("root") as root:
+ root.add_event("event0", {"name": ["birthday"]})
+ event = root.events[0]
+
+ with self.assertRaises(TypeError):
+ event.attributes["name"][0] = "happy"
+
+ with self.assertRaises(TypeError):
+ event.attributes["name"] = "hello"
+
+ def test_invalid_event_attributes(self):
+ self.assertEqual(trace_api.get_current_span(), trace_api.INVALID_SPAN)
+
+ with self.tracer.start_as_current_span("root") as root:
+ with self.assertLogs(level=WARNING):
+ root.add_event(
+ "event0", {"attr1": True, "attr2": ["hi", False]}
+ )
+ with self.assertLogs(level=WARNING):
+ root.add_event("event0", {"attr1": {}})
+ with self.assertLogs(level=WARNING):
+ root.add_event("event0", {"attr1": [[True]]})
+ with self.assertLogs(level=WARNING):
+ root.add_event("event0", {"attr1": [{}], "attr2": [1, 2]})
+
+ self.assertEqual(len(root.events), 4)
+ self.assertEqual(root.events[0].attributes, {"attr1": True})
+ self.assertEqual(root.events[1].attributes, {})
+ self.assertEqual(root.events[2].attributes, {})
+ self.assertEqual(root.events[3].attributes, {"attr2": (1, 2)})
+
+ def test_links(self):
+ id_generator = RandomIdGenerator()
+ other_context1 = trace_api.SpanContext(
+ trace_id=id_generator.generate_trace_id(),
+ span_id=id_generator.generate_span_id(),
+ is_remote=False,
+ )
+ other_context2 = trace_api.SpanContext(
+ trace_id=id_generator.generate_trace_id(),
+ span_id=id_generator.generate_span_id(),
+ is_remote=False,
+ )
+
+ links = (
+ trace_api.Link(other_context1),
+ trace_api.Link(other_context2, {"name": "neighbor"}),
+ )
+ with self.tracer.start_as_current_span("root", links=links) as root:
+
+ self.assertEqual(len(root.links), 2)
+ self.assertEqual(
+ root.links[0].context.trace_id, other_context1.trace_id
+ )
+ self.assertEqual(
+ root.links[0].context.span_id, other_context1.span_id
+ )
+ self.assertEqual(0, len(root.links[0].attributes))
+ self.assertEqual(
+ root.links[1].context.trace_id, other_context2.trace_id
+ )
+ self.assertEqual(
+ root.links[1].context.span_id, other_context2.span_id
+ )
+ self.assertEqual(root.links[1].attributes, {"name": "neighbor"})
+
+ with self.assertRaises(TypeError):
+ root.links[1].attributes["name"] = "new_neighbour"
+
+ def test_update_name(self):
+ with self.tracer.start_as_current_span("root") as root:
+ # name
+ root.update_name("toor")
+ self.assertEqual(root.name, "toor")
+
+ def test_start_span(self):
+ """Start twice, end a not started"""
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+
+ # end not started span
+ self.assertRaises(RuntimeError, span.end)
+
+ span.start()
+ start_time = span.start_time
+ with self.assertLogs(level=WARNING):
+ span.start()
+ self.assertEqual(start_time, span.start_time)
+
+ self.assertIsNotNone(span.status)
+ self.assertIs(span.status.status_code, trace_api.StatusCode.UNSET)
+
+ # status
+ new_status = trace_api.status.Status(
+ trace_api.StatusCode.ERROR, "Test description"
+ )
+ span.set_status(new_status)
+ self.assertIs(span.status.status_code, trace_api.StatusCode.ERROR)
+ self.assertIs(span.status.description, "Test description")
+
+ def test_start_accepts_context(self):
+ # pylint: disable=no-self-use
+ span_processor = mock.Mock(spec=trace.SpanProcessor)
+ span = trace._Span(
+ "name",
+ mock.Mock(spec=trace_api.SpanContext),
+ span_processor=span_processor,
+ )
+ context = Context()
+ span.start(parent_context=context)
+ span_processor.on_start.assert_called_once_with(
+ span, parent_context=context
+ )
+
+ def test_span_override_start_and_end_time(self):
+ """Span sending custom start_time and end_time values"""
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ start_time = 123
+ span.start(start_time)
+ self.assertEqual(start_time, span.start_time)
+ end_time = 456
+ span.end(end_time)
+ self.assertEqual(end_time, span.end_time)
+
+ def test_span_set_status(self):
+
+ span1 = self.tracer.start_span("span1")
+ span1.set_status(Status(status_code=StatusCode.ERROR))
+ self.assertEqual(span1.status.status_code, StatusCode.ERROR)
+ self.assertEqual(span1.status.description, None)
+
+ span2 = self.tracer.start_span("span2")
+ span2.set_status(
+ Status(status_code=StatusCode.ERROR, description="desc")
+ )
+ self.assertEqual(span2.status.status_code, StatusCode.ERROR)
+ self.assertEqual(span2.status.description, "desc")
+
+ span3 = self.tracer.start_span("span3")
+ span3.set_status(StatusCode.ERROR)
+ self.assertEqual(span3.status.status_code, StatusCode.ERROR)
+ self.assertEqual(span3.status.description, None)
+
+ span4 = self.tracer.start_span("span4")
+ span4.set_status(StatusCode.ERROR, "span4 desc")
+ self.assertEqual(span4.status.status_code, StatusCode.ERROR)
+ self.assertEqual(span4.status.description, "span4 desc")
+
+ span5 = self.tracer.start_span("span5")
+ with self.assertLogs(level=WARNING):
+ span5.set_status(
+ Status(status_code=StatusCode.ERROR, description="desc"),
+ description="ignored",
+ )
+ self.assertEqual(span5.status.status_code, StatusCode.ERROR)
+ self.assertEqual(span5.status.description, "desc")
+
+ def test_ended_span(self):
+ """Events, attributes are not allowed after span is ended"""
+
+ root = self.tracer.start_span("root")
+
+ # everything should be empty at the beginning
+ self.assertEqual(len(root.attributes), 0)
+ self.assertEqual(len(root.events), 0)
+ self.assertEqual(len(root.links), 0)
+
+ # call end first time
+ root.end()
+ end_time0 = root.end_time
+
+ # call it a second time
+ with self.assertLogs(level=WARNING):
+ root.end()
+ # end time shouldn't be changed
+ self.assertEqual(end_time0, root.end_time)
+
+ with self.assertLogs(level=WARNING):
+ root.set_attribute("http.request.method", "GET")
+ self.assertEqual(len(root.attributes), 0)
+
+ with self.assertLogs(level=WARNING):
+ root.add_event("event1")
+ self.assertEqual(len(root.events), 0)
+
+ with self.assertLogs(level=WARNING):
+ root.update_name("xxx")
+ self.assertEqual(root.name, "root")
+
+ new_status = trace_api.status.Status(
+ trace_api.StatusCode.ERROR, "Test description"
+ )
+
+ with self.assertLogs(level=WARNING):
+ root.set_status(new_status)
+ self.assertEqual(root.status.status_code, trace_api.StatusCode.UNSET)
+
+ def test_error_status(self):
+ def error_status_test(context):
+ with self.assertRaises(AssertionError):
+ with context as root:
+ raise AssertionError("unknown")
+ self.assertIs(root.status.status_code, StatusCode.ERROR)
+ self.assertEqual(
+ root.status.description, "AssertionError: unknown"
+ )
+
+ error_status_test(
+ trace.TracerProvider().get_tracer(__name__).start_span("root")
+ )
+ error_status_test(
+ trace.TracerProvider()
+ .get_tracer(__name__)
+ .start_as_current_span("root")
+ )
+
+ def test_status_cannot_override_ok(self):
+ def error_status_test(context):
+ with self.assertRaises(AssertionError):
+ with context as root:
+ root.set_status(trace_api.status.Status(StatusCode.OK))
+ raise AssertionError("unknown")
+ self.assertIs(root.status.status_code, StatusCode.OK)
+ self.assertIsNone(root.status.description)
+
+ error_status_test(
+ trace.TracerProvider().get_tracer(__name__).start_span("root")
+ )
+ error_status_test(
+ trace.TracerProvider()
+ .get_tracer(__name__)
+ .start_as_current_span("root")
+ )
+
+ def test_status_cannot_set_unset(self):
+ def unset_status_test(context):
+ with self.assertRaises(AssertionError):
+ with context as root:
+ raise AssertionError("unknown")
+ root.set_status(trace_api.status.Status(StatusCode.UNSET))
+ self.assertIs(root.status.status_code, StatusCode.ERROR)
+ self.assertEqual(
+ root.status.description, "AssertionError: unknown"
+ )
+
+ with self.assertLogs(level=WARNING):
+ unset_status_test(
+ trace.TracerProvider().get_tracer(__name__).start_span("root")
+ )
+ with self.assertLogs(level=WARNING):
+ unset_status_test(
+ trace.TracerProvider()
+ .get_tracer(__name__)
+ .start_as_current_span("root")
+ )
+
+ def test_last_status_wins(self):
+ def error_status_test(context):
+ with self.assertRaises(AssertionError):
+ with context as root:
+ raise AssertionError("unknown")
+ root.set_status(trace_api.status.Status(StatusCode.OK))
+ self.assertIs(root.status.status_code, StatusCode.OK)
+ self.assertIsNone(root.status.description)
+
+ error_status_test(
+ trace.TracerProvider().get_tracer(__name__).start_span("root")
+ )
+ error_status_test(
+ trace.TracerProvider()
+ .get_tracer(__name__)
+ .start_as_current_span("root")
+ )
+
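+    # record_exception should add an "exception" event carrying the semantic
+    # convention attributes exception.type/.message/.stacktrace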
+ def test_record_exception(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ try:
+ raise ValueError("invalid")
+ except ValueError as err:
+ span.record_exception(err)
+ exception_event = span.events[0]
+ self.assertEqual("exception", exception_event.name)
+ self.assertEqual(
+ "invalid", exception_event.attributes["exception.message"]
+ )
+ self.assertEqual(
+ "ValueError", exception_event.attributes["exception.type"]
+ )
+ self.assertIn(
+ "ValueError: invalid",
+ exception_event.attributes["exception.stacktrace"],
+ )
+
+ def test_record_exception_with_attributes(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ try:
+ raise RuntimeError("error")
+ except RuntimeError as err:
+ attributes = {"has_additional_attributes": True}
+ span.record_exception(err, attributes)
+ exception_event = span.events[0]
+ self.assertEqual("exception", exception_event.name)
+ self.assertEqual(
+ "error", exception_event.attributes["exception.message"]
+ )
+ self.assertEqual(
+ "RuntimeError", exception_event.attributes["exception.type"]
+ )
+ self.assertEqual(
+ "False", exception_event.attributes["exception.escaped"]
+ )
+ self.assertIn(
+ "RuntimeError: error",
+ exception_event.attributes["exception.stacktrace"],
+ )
+ self.assertIn("has_additional_attributes", exception_event.attributes)
+ self.assertEqual(
+ True, exception_event.attributes["has_additional_attributes"]
+ )
+
+ def test_record_exception_escaped(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ try:
+ raise RuntimeError("error")
+ except RuntimeError as err:
+ span.record_exception(exception=err, escaped=True)
+ exception_event = span.events[0]
+ self.assertEqual("exception", exception_event.name)
+ self.assertEqual(
+ "error", exception_event.attributes["exception.message"]
+ )
+ self.assertEqual(
+ "RuntimeError", exception_event.attributes["exception.type"]
+ )
+ self.assertIn(
+ "RuntimeError: error",
+ exception_event.attributes["exception.stacktrace"],
+ )
+ self.assertEqual(
+ "True", exception_event.attributes["exception.escaped"]
+ )
+
+ def test_record_exception_with_timestamp(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ try:
+ raise RuntimeError("error")
+ except RuntimeError as err:
+ timestamp = 1604238587112021089
+ span.record_exception(err, timestamp=timestamp)
+ exception_event = span.events[0]
+ self.assertEqual("exception", exception_event.name)
+ self.assertEqual(
+ "error", exception_event.attributes["exception.message"]
+ )
+ self.assertEqual(
+ "RuntimeError", exception_event.attributes["exception.type"]
+ )
+ self.assertIn(
+ "RuntimeError: error",
+ exception_event.attributes["exception.stacktrace"],
+ )
+ self.assertEqual(1604238587112021089, exception_event.timestamp)
+
+ def test_record_exception_with_attributes_and_timestamp(self):
+ span = trace._Span("name", mock.Mock(spec=trace_api.SpanContext))
+ try:
+ raise RuntimeError("error")
+ except RuntimeError as err:
+ attributes = {"has_additional_attributes": True}
+ timestamp = 1604238587112021089
+ span.record_exception(err, attributes, timestamp)
+ exception_event = span.events[0]
+ self.assertEqual("exception", exception_event.name)
+ self.assertEqual(
+ "error", exception_event.attributes["exception.message"]
+ )
+ self.assertEqual(
+ "RuntimeError", exception_event.attributes["exception.type"]
+ )
+ self.assertIn(
+ "RuntimeError: error",
+ exception_event.attributes["exception.stacktrace"],
+ )
+ self.assertIn("has_additional_attributes", exception_event.attributes)
+ self.assertEqual(
+ True, exception_event.attributes["has_additional_attributes"]
+ )
+ self.assertEqual(1604238587112021089, exception_event.timestamp)
+
+ def test_record_exception_context_manager(self):
+ span = None
+ try:
+ with self.tracer.start_as_current_span("span") as span:
+ raise RuntimeError("example error")
+ except RuntimeError:
+ pass
+ finally:
+ self.assertEqual(len(span.events), 1)
+ event = span.events[0]
+ self.assertEqual("exception", event.name)
+ self.assertEqual(
+ "RuntimeError", event.attributes["exception.type"]
+ )
+ self.assertEqual(
+ "example error", event.attributes["exception.message"]
+ )
+
+ stacktrace = """in test_record_exception_context_manager
+ raise RuntimeError("example error")
+RuntimeError: example error"""
+ self.assertIn(stacktrace, event.attributes["exception.stacktrace"])
+
+ try:
+ with self.tracer.start_as_current_span(
+ "span", record_exception=False
+ ) as span:
+ raise RuntimeError("example error")
+ except RuntimeError:
+ pass
+ finally:
+ self.assertEqual(len(span.events), 0)
+
+
+def span_event_start_fmt(span_processor_name, span_name):
+ return span_processor_name + ":" + span_name + ":start"
+
+
+def span_event_end_fmt(span_processor_name, span_name):
+ return span_processor_name + ":" + span_name + ":end"
+
+
+class MySpanProcessor(trace.SpanProcessor):
+ def __init__(self, name, span_list):
+ self.name = name
+ self.span_list = span_list
+
+ def on_start(
+ self, span: "trace.Span", parent_context: Optional[Context] = None
+ ) -> None:
+ self.span_list.append(span_event_start_fmt(self.name, span.name))
+
+ def on_end(self, span: "trace.ReadableSpan") -> None:
+ self.span_list.append(span_event_end_fmt(self.name, span.name))
+
+
+class TestSpanProcessor(unittest.TestCase):
+ def test_span_processor(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_calls_list = [] # filled by MySpanProcessor
+ expected_list = [] # filled by hand
+
+        # Span processors are created but not yet added to the tracer provider
+ sp1 = MySpanProcessor("SP1", spans_calls_list)
+ sp2 = MySpanProcessor("SP2", spans_calls_list)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ pass
+
+        # at this point both lists must still be empty
+ self.assertEqual(len(spans_calls_list), 0)
+
+ # add single span processor
+ tracer_provider.add_span_processor(sp1)
+
+ with tracer.start_as_current_span("foo"):
+ expected_list.append(span_event_start_fmt("SP1", "foo"))
+
+ with tracer.start_as_current_span("bar"):
+ expected_list.append(span_event_start_fmt("SP1", "bar"))
+
+ with tracer.start_as_current_span("baz"):
+ expected_list.append(span_event_start_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+
+ self.assertListEqual(spans_calls_list, expected_list)
+
+ spans_calls_list.clear()
+ expected_list.clear()
+
+ # go for multiple span processors
+ tracer_provider.add_span_processor(sp2)
+
+ with tracer.start_as_current_span("foo"):
+ expected_list.append(span_event_start_fmt("SP1", "foo"))
+ expected_list.append(span_event_start_fmt("SP2", "foo"))
+
+ with tracer.start_as_current_span("bar"):
+ expected_list.append(span_event_start_fmt("SP1", "bar"))
+ expected_list.append(span_event_start_fmt("SP2", "bar"))
+
+ with tracer.start_as_current_span("baz"):
+ expected_list.append(span_event_start_fmt("SP1", "baz"))
+ expected_list.append(span_event_start_fmt("SP2", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+ expected_list.append(span_event_end_fmt("SP2", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+ expected_list.append(span_event_end_fmt("SP2", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+ expected_list.append(span_event_end_fmt("SP2", "foo"))
+
+        # verify that the two lists are identical
+ self.assertListEqual(spans_calls_list, expected_list)
+
+ def test_add_span_processor_after_span_creation(self):
+ tracer_provider = trace.TracerProvider()
+ tracer = tracer_provider.get_tracer(__name__)
+
+ spans_calls_list = [] # filled by MySpanProcessor
+ expected_list = [] # filled by hand
+
+        # Span processor is created but not yet added to the tracer provider
+ sp = MySpanProcessor("SP1", spans_calls_list)
+
+ with tracer.start_as_current_span("foo"):
+ with tracer.start_as_current_span("bar"):
+ with tracer.start_as_current_span("baz"):
+ # add span processor after spans have been created
+ tracer_provider.add_span_processor(sp)
+
+ expected_list.append(span_event_end_fmt("SP1", "baz"))
+
+ expected_list.append(span_event_end_fmt("SP1", "bar"))
+
+ expected_list.append(span_event_end_fmt("SP1", "foo"))
+
+ self.assertListEqual(spans_calls_list, expected_list)
+
+ def test_to_json(self):
+ context = trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED),
+ )
+ parent = trace._Span("parent-name", context, resource=Resource({}))
+ span = trace._Span(
+ "span-name", context, resource=Resource({}), parent=parent.context
+ )
+
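+        # parent and child deliberately share the same SpanContext, so the
+        # serialized parent_id below equals the span's own span_id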
+ self.assertEqual(
+ span.to_json(),
+ """{
+ "name": "span-name",
+ "context": {
+ "trace_id": "0x000000000000000000000000deadbeef",
+ "span_id": "0x00000000deadbef0",
+ "trace_state": "[]"
+ },
+ "kind": "SpanKind.INTERNAL",
+ "parent_id": "0x00000000deadbef0",
+ "start_time": null,
+ "end_time": null,
+ "status": {
+ "status_code": "UNSET"
+ },
+ "attributes": {},
+ "events": [],
+ "links": [],
+ "resource": {
+ "attributes": {},
+ "schema_url": ""
+ }
+}""",
+ )
+ self.assertEqual(
+ span.to_json(indent=None),
+ '{"name": "span-name", "context": {"trace_id": "0x000000000000000000000000deadbeef", "span_id": "0x00000000deadbef0", "trace_state": "[]"}, "kind": "SpanKind.INTERNAL", "parent_id": "0x00000000deadbef0", "start_time": null, "end_time": null, "status": {"status_code": "UNSET"}, "attributes": {}, "events": [], "links": [], "resource": {"attributes": {}, "schema_url": ""}}',
+ )
+
+ def test_attributes_to_json(self):
+ context = trace_api.SpanContext(
+ trace_id=0x000000000000000000000000DEADBEEF,
+ span_id=0x00000000DEADBEF0,
+ is_remote=False,
+ trace_flags=trace_api.TraceFlags(trace_api.TraceFlags.SAMPLED),
+ )
+ span = trace._Span("span-name", context, resource=Resource({}))
+ span.set_attribute("key", "value")
+ span.add_event("event", {"key2": "value2"}, 123)
+ date_str = ns_to_iso_str(123)
+ self.assertEqual(
+ span.to_json(indent=None),
+ '{"name": "span-name", "context": {"trace_id": "0x000000000000000000000000deadbeef", "span_id": "0x00000000deadbef0", "trace_state": "[]"}, "kind": "SpanKind.INTERNAL", "parent_id": null, "start_time": null, "end_time": null, "status": {"status_code": "UNSET"}, "attributes": {"key": "value"}, "events": [{"name": "event", "timestamp": "'
+ + date_str
+ + '", "attributes": {"key2": "value2"}}], "links": [], "resource": {"attributes": {}, "schema_url": ""}}',
+ )
+
+
+class TestSpanLimits(unittest.TestCase):
+ # pylint: disable=protected-access
+
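+    # 1000 characters exceeds every length limit exercised below, so
+    # truncation is observable whenever a limit is in effect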
+ long_val = "v" * 1000
+
+ def _assert_attr_length(self, attr_val, max_len):
+ if isinstance(attr_val, str):
+ expected = self.long_val
+ if max_len is not None:
+ expected = expected[:max_len]
+ self.assertEqual(attr_val, expected)
+
+ def test_limits_defaults(self):
+ limits = trace.SpanLimits()
+ self.assertEqual(
+ limits.max_attributes,
+ trace._DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.assertEqual(
+ limits.max_span_attributes,
+ trace._DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.assertEqual(
+ limits.max_event_attributes,
+ trace._DEFAULT_OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.assertEqual(
+ limits.max_link_attributes,
+ trace._DEFAULT_OTEL_LINK_ATTRIBUTE_COUNT_LIMIT,
+ )
+ self.assertEqual(
+ limits.max_events, trace._DEFAULT_OTEL_SPAN_EVENT_COUNT_LIMIT
+ )
+ self.assertEqual(
+ limits.max_links, trace._DEFAULT_OTEL_SPAN_LINK_COUNT_LIMIT
+ )
+ self.assertIsNone(limits.max_attribute_length)
+ self.assertIsNone(limits.max_span_attribute_length)
+
+ def test_limits_attribute_length_limits_code(self):
+ # global limit unset while span limit is set
+ limits = trace.SpanLimits(max_span_attribute_length=22)
+ self.assertIsNone(limits.max_attribute_length)
+ self.assertEqual(limits.max_span_attribute_length, 22)
+
+ # span limit falls back to global limit when no value is provided
+ limits = trace.SpanLimits(max_attribute_length=22)
+ self.assertEqual(limits.max_attribute_length, 22)
+ self.assertEqual(limits.max_span_attribute_length, 22)
+
+ # global and span limits set to different values
+ limits = trace.SpanLimits(
+ max_attribute_length=22, max_span_attribute_length=33
+ )
+ self.assertEqual(limits.max_attribute_length, 22)
+ self.assertEqual(limits.max_span_attribute_length, 33)
+
+ def test_limits_values_code(self):
+ (
+ max_attributes,
+ max_span_attributes,
+ max_link_attributes,
+ max_event_attributes,
+ max_events,
+ max_links,
+ max_attr_length,
+ max_span_attr_length,
+ ) = (
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ )
+ limits = trace.SpanLimits(
+ max_events=max_events,
+ max_links=max_links,
+ max_attributes=max_attributes,
+ max_span_attributes=max_span_attributes,
+ max_event_attributes=max_event_attributes,
+ max_link_attributes=max_link_attributes,
+ max_attribute_length=max_attr_length,
+ max_span_attribute_length=max_span_attr_length,
+ )
+ self.assertEqual(limits.max_events, max_events)
+ self.assertEqual(limits.max_links, max_links)
+ self.assertEqual(limits.max_attributes, max_attributes)
+ self.assertEqual(limits.max_span_attributes, max_span_attributes)
+ self.assertEqual(limits.max_event_attributes, max_event_attributes)
+ self.assertEqual(limits.max_link_attributes, max_link_attributes)
+ self.assertEqual(limits.max_attribute_length, max_attr_length)
+ self.assertEqual(
+ limits.max_span_attribute_length, max_span_attr_length
+ )
+
+ def test_limits_values_env(self):
+ (
+ max_attributes,
+ max_span_attributes,
+ max_link_attributes,
+ max_event_attributes,
+ max_events,
+ max_links,
+ max_attr_length,
+ max_span_attr_length,
+ ) = (
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ randint(0, 10000),
+ )
+ with mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_ATTRIBUTE_COUNT_LIMIT: str(max_attributes),
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT: str(max_span_attributes),
+ OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT: str(max_event_attributes),
+ OTEL_LINK_ATTRIBUTE_COUNT_LIMIT: str(max_link_attributes),
+ OTEL_SPAN_EVENT_COUNT_LIMIT: str(max_events),
+ OTEL_SPAN_LINK_COUNT_LIMIT: str(max_links),
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT: str(max_attr_length),
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT: str(
+ max_span_attr_length
+ ),
+ },
+ ):
+ limits = trace.SpanLimits()
+ self.assertEqual(limits.max_events, max_events)
+ self.assertEqual(limits.max_links, max_links)
+ self.assertEqual(limits.max_attributes, max_attributes)
+ self.assertEqual(limits.max_span_attributes, max_span_attributes)
+ self.assertEqual(limits.max_event_attributes, max_event_attributes)
+ self.assertEqual(limits.max_link_attributes, max_link_attributes)
+ self.assertEqual(limits.max_attribute_length, max_attr_length)
+ self.assertEqual(
+ limits.max_span_attribute_length, max_span_attr_length
+ )
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT: "13",
+ OTEL_SPAN_EVENT_COUNT_LIMIT: "7",
+ OTEL_SPAN_LINK_COUNT_LIMIT: "4",
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT: "11",
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT: "15",
+ },
+ )
+ def test_span_limits_env(self):
+ self._test_span_limits(
+ new_tracer(),
+ max_attrs=13,
+ max_events=7,
+ max_links=4,
+ max_attr_len=11,
+ max_span_attr_len=15,
+ )
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_ATTRIBUTE_COUNT_LIMIT: "13",
+ OTEL_SPAN_EVENT_COUNT_LIMIT: "7",
+ OTEL_SPAN_LINK_COUNT_LIMIT: "4",
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT: "11",
+ },
+ )
+ def test_span_limits_global_env(self):
+ self._test_span_limits(
+ new_tracer(),
+ max_attrs=13,
+ max_events=7,
+ max_links=4,
+ max_attr_len=11,
+ max_span_attr_len=11,
+ )
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT: "10",
+ OTEL_SPAN_EVENT_COUNT_LIMIT: "20",
+ OTEL_SPAN_LINK_COUNT_LIMIT: "30",
+ OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT: "40",
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT: "50",
+ },
+ )
+ def test_span_limits_default_to_env(self):
+ self._test_span_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_attributes=None,
+ max_events=None,
+ max_links=None,
+ max_attribute_length=None,
+ max_span_attribute_length=None,
+ )
+ ),
+ max_attrs=10,
+ max_events=20,
+ max_links=30,
+ max_attr_len=40,
+ max_span_attr_len=50,
+ )
+
+ def test_span_limits_code(self):
+ self._test_span_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_attributes=11,
+ max_events=15,
+ max_links=13,
+ max_attribute_length=9,
+ max_span_attribute_length=25,
+ )
+ ),
+ max_attrs=11,
+ max_events=15,
+ max_links=13,
+ max_attr_len=9,
+ max_span_attr_len=25,
+ )
+
+ @mock.patch.dict(
+ "os.environ",
+ {
+ OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT: "",
+ OTEL_SPAN_EVENT_COUNT_LIMIT: "",
+ OTEL_SPAN_LINK_COUNT_LIMIT: "",
+ OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT: "",
+ },
+ )
+ def test_span_no_limits_env(self):
+ self._test_span_no_limits(new_tracer())
+
+ def test_span_no_limits_code(self):
+ self._test_span_no_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_span_attributes=trace.SpanLimits.UNSET,
+ max_links=trace.SpanLimits.UNSET,
+ max_events=trace.SpanLimits.UNSET,
+ max_attribute_length=trace.SpanLimits.UNSET,
+ )
+ )
+ )
+
+ def test_span_zero_global_limit(self):
+ self._test_span_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_attributes=0,
+ max_events=0,
+ max_links=0,
+ )
+ ),
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ )
+
+ def test_span_zero_global_nonzero_model(self):
+ self._test_span_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_attributes=0,
+ max_events=0,
+ max_links=0,
+ max_span_attributes=15,
+ max_span_attribute_length=25,
+ )
+ ),
+ 15,
+ 0,
+ 0,
+ 0,
+ 25,
+ )
+
+ def test_span_zero_global_unset_model(self):
+ self._test_span_no_limits(
+ new_tracer(
+ span_limits=trace.SpanLimits(
+ max_attributes=0,
+ max_span_attributes=trace.SpanLimits.UNSET,
+ max_links=trace.SpanLimits.UNSET,
+ max_events=trace.SpanLimits.UNSET,
+ max_attribute_length=trace.SpanLimits.UNSET,
+ )
+ )
+ )
+
+ def test_dropped_attributes(self):
+ span = get_span_with_dropped_attributes_events_links()
+ self.assertEqual(1, span.dropped_links)
+ self.assertEqual(2, span.dropped_attributes)
+ self.assertEqual(3, span.dropped_events)
+ self.assertEqual(2, span.events[0].attributes.dropped)
+ self.assertEqual(2, span.links[0].attributes.dropped)
+
+ def _test_span_limits(
+ self,
+ tracer,
+ max_attrs,
+ max_events,
+ max_links,
+ max_attr_len,
+ max_span_attr_len,
+ ):
+ id_generator = RandomIdGenerator()
+ some_links = [
+ trace_api.Link(
+ trace_api.SpanContext(
+ trace_id=id_generator.generate_trace_id(),
+ span_id=id_generator.generate_span_id(),
+ is_remote=False,
+ ),
+ attributes={"k": self.long_val},
+ )
+ for _ in range(100)
+ ]
+
+ some_attrs = {
+ f"init_attribute_{idx}": self.long_val for idx in range(100)
+ }
+ with tracer.start_as_current_span(
+ "root", links=some_links, attributes=some_attrs
+ ) as root:
+ self.assertEqual(len(root.links), max_links)
+ self.assertEqual(len(root.attributes), max_attrs)
+ for idx in range(100):
+ root.set_attribute(f"my_str_attribute_{idx}", self.long_val)
+ root.set_attribute(
+ f"my_byte_attribute_{idx}", self.long_val.encode()
+ )
+            root.set_attribute(f"my_int_attribute_{idx}", idx)
+ root.add_event(
+ f"my_event_{idx}", attributes={"k": self.long_val}
+ )
+
+ self.assertEqual(len(root.attributes), max_attrs)
+ self.assertEqual(len(root.events), max_events)
+
+ for link in root.links:
+ for attr_val in link.attributes.values():
+ self._assert_attr_length(attr_val, max_attr_len)
+
+ for event in root.events:
+ for attr_val in event.attributes.values():
+ self._assert_attr_length(attr_val, max_attr_len)
+
+ for attr_val in root.attributes.values():
+ self._assert_attr_length(attr_val, max_span_attr_len)
+
+ def _test_span_no_limits(self, tracer):
+ num_links = int(trace._DEFAULT_OTEL_SPAN_LINK_COUNT_LIMIT) + randint(
+ 1, 100
+ )
+
+ id_generator = RandomIdGenerator()
+ some_links = [
+ trace_api.Link(
+ trace_api.SpanContext(
+ trace_id=id_generator.generate_trace_id(),
+ span_id=id_generator.generate_span_id(),
+ is_remote=False,
+ )
+ )
+ for _ in range(num_links)
+ ]
+ with tracer.start_as_current_span("root", links=some_links) as root:
+ self.assertEqual(len(root.links), num_links)
+
+ num_events = int(trace._DEFAULT_OTEL_SPAN_EVENT_COUNT_LIMIT) + randint(
+ 1, 100
+ )
+ with tracer.start_as_current_span("root") as root:
+ for idx in range(num_events):
+ root.add_event(
+ f"my_event_{idx}", attributes={"k": self.long_val}
+ )
+
+ self.assertEqual(len(root.events), num_events)
+
+ num_attributes = int(
+ trace._DEFAULT_OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT
+ ) + randint(1, 100)
+ with tracer.start_as_current_span("root") as root:
+ for idx in range(num_attributes):
+ root.set_attribute(f"my_attribute_{idx}", self.long_val)
+
+ self.assertEqual(len(root.attributes), num_attributes)
+ for attr_val in root.attributes.values():
+ self.assertEqual(attr_val, self.long_val)
+
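+# A minimal configuration sketch (for reference only; the leading underscore
+# keeps it out of test collection): the env-var driven tests above are
+# equivalent to constructing the provider with explicit limits in code.
+def _example_configure_limits():  # pragma: no cover
+    limits = trace.SpanLimits(max_attributes=16, max_attribute_length=128)
+    return trace.TracerProvider(span_limits=limits)
+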
+
+class TestTraceFlags(unittest.TestCase):
+ def test_constant_default(self):
+ self.assertEqual(trace_api.TraceFlags.DEFAULT, 0)
+
+ def test_constant_sampled(self):
+ self.assertEqual(trace_api.TraceFlags.SAMPLED, 1)
+
+ def test_get_default(self):
+ self.assertEqual(
+ trace_api.TraceFlags.get_default(), trace_api.TraceFlags.DEFAULT
+ )
+
+ def test_sampled_true(self):
+ self.assertTrue(trace_api.TraceFlags(0xF1).sampled)
+
+ def test_sampled_false(self):
+ self.assertFalse(trace_api.TraceFlags(0xF0).sampled)
+
+ def test_constant_default_trace_options(self):
+ self.assertEqual(
+ trace_api.DEFAULT_TRACE_OPTIONS, trace_api.TraceFlags.DEFAULT
+ )
+
+
+class TestParentChildSpanException(unittest.TestCase):
+ def test_parent_child_span_exception(self):
+ """
+ Tests that a parent span has its status set to ERROR when a child span
+ raises an exception even when the child span has its
+ ``record_exception`` and ``set_status_on_exception`` attributes
+ set to ``False``.
+ """
+
+ set_tracer_provider(TracerProvider())
+ tracer = get_tracer(__name__)
+
+ exception = Exception("exception")
+
+ exception_type = exception.__class__.__name__
+ exception_message = exception.args[0]
+
+ try:
+ with tracer.start_as_current_span(
+ "parent",
+ ) as parent_span:
+ with tracer.start_as_current_span(
+ "child",
+ record_exception=False,
+ set_status_on_exception=False,
+ ) as child_span:
+ raise exception
+
+ except Exception: # pylint: disable=broad-except
+ pass
+
+ self.assertTrue(child_span.status.is_ok)
+ self.assertIsNone(child_span.status.description)
+ self.assertTupleEqual(child_span.events, ())
+
+ self.assertFalse(parent_span.status.is_ok)
+ self.assertEqual(
+ parent_span.status.description,
+ f"{exception_type}: {exception_message}",
+ )
+ self.assertEqual(
+ parent_span.events[0].attributes["exception.type"], exception_type
+ )
+ self.assertEqual(
+ parent_span.events[0].attributes["exception.message"],
+ exception_message,
+ )
+
+ def test_child_parent_span_exception(self):
+ """
+ Tests that a child span does not have its status set to ERROR when a
+ parent span raises an exception and the parent span has its
+ ``record_exception`` and ``set_status_on_exception`` attributes
+ set to ``False``.
+ """
+
+ set_tracer_provider(TracerProvider())
+ tracer = get_tracer(__name__)
+
+ exception = Exception("exception")
+
+ try:
+ with tracer.start_as_current_span(
+ "parent",
+ record_exception=False,
+ set_status_on_exception=False,
+ ) as parent_span:
+ with tracer.start_as_current_span(
+ "child",
+ ) as child_span:
+ pass
+ raise exception
+
+ except Exception: # pylint: disable=broad-except
+ pass
+
+ self.assertTrue(child_span.status.is_ok)
+ self.assertIsNone(child_span.status.description)
+ self.assertTupleEqual(child_span.events, ())
+
+ self.assertTrue(parent_span.status.is_ok)
+ self.assertIsNone(parent_span.status.description)
+ self.assertTupleEqual(parent_span.events, ())
+
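+# A minimal sketch (for reference only, not collected as a test) of the
+# per-span opt-outs exercised above; by default a span records the raised
+# exception as an event and sets its status to ERROR.
+def _example_exception_opt_out():  # pragma: no cover
+    set_tracer_provider(TracerProvider())
+    tracer = get_tracer(__name__)
+    with tracer.start_as_current_span(
+        "quiet", record_exception=False, set_status_on_exception=False
+    ):
+        pass
+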
+
+# pylint: disable=protected-access
+class TestTracerProvider(unittest.TestCase):
+ @patch("opentelemetry.sdk.trace.sampling._get_from_env_or_default")
+ @patch.object(Resource, "create")
+ def test_tracer_provider_init_default(self, resource_patch, sample_patch):
+ tracer_provider = trace.TracerProvider()
+ self.assertTrue(
+ isinstance(tracer_provider.id_generator, RandomIdGenerator)
+ )
+ resource_patch.assert_called_once()
+ self.assertIsNotNone(tracer_provider._resource)
+ sample_patch.assert_called_once()
+ self.assertIsNotNone(tracer_provider._span_limits)
+ self.assertIsNotNone(tracer_provider._atexit_handler)
diff --git a/opentelemetry-semantic-conventions/LICENSE b/opentelemetry-semantic-conventions/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/opentelemetry-semantic-conventions/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/opentelemetry-semantic-conventions/README.rst b/opentelemetry-semantic-conventions/README.rst
new file mode 100644
index 0000000000..e5a40e739c
--- /dev/null
+++ b/opentelemetry-semantic-conventions/README.rst
@@ -0,0 +1,37 @@
+OpenTelemetry Semantic Conventions
+==================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-semantic-conventions.svg
+ :target: https://pypi.org/project/opentelemetry-semantic-conventions/
+
+This library contains generated code for the semantic conventions defined by the OpenTelemetry specification.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-semantic-conventions
+
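+As a quick usage sketch, the generated constants can be imported wherever
+attribute or metric names are needed (both modules shown are added by this
+package)::
+
+    from opentelemetry.semconv.metrics import MetricInstruments
+    from opentelemetry.semconv.resource import ResourceAttributes
+
+    assert MetricInstruments.HTTP_SERVER_DURATION == "http.server.duration"
+    assert ResourceAttributes.SERVICE_NAME == "service.name"
+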
+Code Generation
+---------------
+
+These files were generated automatically from code in semconv_.
+To regenerate the code, run ``../scripts/semconv/generate.sh``.
+
+To build against a new release or specific commit of opentelemetry-specification_,
+update the ``SPEC_VERSION`` variable in
+``../scripts/semconv/generate.sh``. Then run the script and commit the changes.
+
+.. _opentelemetry-specification: https://github.com/open-telemetry/opentelemetry-specification
+.. _semconv: https://github.com/open-telemetry/opentelemetry-python/tree/main/scripts/semconv
+
+
+References
+----------
+
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
+* `OpenTelemetry Semantic Conventions Definitions <https://github.com/open-telemetry/semantic-conventions>`_
+* `generate.sh script <https://github.com/open-telemetry/opentelemetry-python/blob/main/scripts/semconv/generate.sh>`_
diff --git a/opentelemetry-semantic-conventions/pyproject.toml b/opentelemetry-semantic-conventions/pyproject.toml
new file mode 100644
index 0000000000..8ad02bb35c
--- /dev/null
+++ b/opentelemetry-semantic-conventions/pyproject.toml
@@ -0,0 +1,44 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-semantic-conventions"
+dynamic = ["version"]
+description = "OpenTelemetry Semantic Conventions"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/opentelemetry-semantic-conventions"
+
+[tool.hatch.version]
+path = "src/opentelemetry/semconv/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/opentelemetry-semantic-conventions/src/opentelemetry/semconv/__init__.py b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-semantic-conventions/src/opentelemetry/semconv/metrics/__init__.py b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/metrics/__init__.py
new file mode 100644
index 0000000000..9cd7cee94f
--- /dev/null
+++ b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/metrics/__init__.py
@@ -0,0 +1,211 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class MetricInstruments:
+ SCHEMA_URL = "https://opentelemetry.io/schemas/v1.21.0"
+ """
+ The URL of the OpenTelemetry schema for these keys and values.
+ """
+
+ HTTP_SERVER_DURATION = "http.server.duration"
+ """
+ Measures the duration of inbound HTTP requests
+ Instrument: histogram
+ Unit: s
+ """
+
+ HTTP_SERVER_ACTIVE_REQUESTS = "http.server.active_requests"
+ """
+ Measures the number of concurrent HTTP requests that are currently in-flight
+ Instrument: updowncounter
+ Unit: {request}
+ """
+
+ HTTP_SERVER_REQUEST_SIZE = "http.server.request.size"
+ """
+ Measures the size of HTTP request messages (compressed)
+ Instrument: histogram
+ Unit: By
+ """
+
+ HTTP_SERVER_RESPONSE_SIZE = "http.server.response.size"
+ """
+ Measures the size of HTTP response messages (compressed)
+ Instrument: histogram
+ Unit: By
+ """
+
+ HTTP_CLIENT_DURATION = "http.client.duration"
+ """
+ Measures the duration of outbound HTTP requests
+ Instrument: histogram
+ Unit: s
+ """
+
+ HTTP_CLIENT_REQUEST_SIZE = "http.client.request.size"
+ """
+ Measures the size of HTTP request messages (compressed)
+ Instrument: histogram
+ Unit: By
+ """
+
+ HTTP_CLIENT_RESPONSE_SIZE = "http.client.response.size"
+ """
+ Measures the size of HTTP response messages (compressed)
+ Instrument: histogram
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_MEMORY_INIT = "process.runtime.jvm.memory.init"
+ """
+ Measure of initial memory requested
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_SYSTEM_CPU_UTILIZATION = (
+ "process.runtime.jvm.system.cpu.utilization"
+ )
+ """
+ Recent CPU utilization for the whole system as reported by the JVM
+ Instrument: gauge
+ Unit: 1
+ """
+
+ PROCESS_RUNTIME_JVM_SYSTEM_CPU_LOAD_1M = (
+ "process.runtime.jvm.system.cpu.load_1m"
+ )
+ """
+ Average CPU load of the whole system for the last minute as reported by the JVM
+ Instrument: gauge
+ Unit: 1
+ """
+
+ PROCESS_RUNTIME_JVM_BUFFER_USAGE = "process.runtime.jvm.buffer.usage"
+ """
+ Measure of memory used by buffers
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_BUFFER_LIMIT = "process.runtime.jvm.buffer.limit"
+ """
+ Measure of total memory capacity of buffers
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_BUFFER_COUNT = "process.runtime.jvm.buffer.count"
+ """
+ Number of buffers in the pool
+ Instrument: updowncounter
+ Unit: {buffer}
+ """
+
+ PROCESS_RUNTIME_JVM_MEMORY_USAGE = "process.runtime.jvm.memory.usage"
+ """
+ Measure of memory used
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_MEMORY_COMMITTED = (
+ "process.runtime.jvm.memory.committed"
+ )
+ """
+ Measure of memory committed
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_MEMORY_LIMIT = "process.runtime.jvm.memory.limit"
+ """
+ Measure of max obtainable memory
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_MEMORY_USAGE_AFTER_LAST_GC = (
+ "process.runtime.jvm.memory.usage_after_last_gc"
+ )
+ """
+ Measure of memory used, as measured after the most recent garbage collection event on this pool
+ Instrument: updowncounter
+ Unit: By
+ """
+
+ PROCESS_RUNTIME_JVM_GC_DURATION = "process.runtime.jvm.gc.duration"
+ """
+ Duration of JVM garbage collection actions
+ Instrument: histogram
+ Unit: s
+ """
+
+ PROCESS_RUNTIME_JVM_THREADS_COUNT = "process.runtime.jvm.threads.count"
+ """
+ Number of executing platform threads
+ Instrument: updowncounter
+ Unit: {thread}
+ """
+
+ PROCESS_RUNTIME_JVM_CLASSES_LOADED = "process.runtime.jvm.classes.loaded"
+ """
+ Number of classes loaded since JVM start
+ Instrument: counter
+ Unit: {class}
+ """
+
+ PROCESS_RUNTIME_JVM_CLASSES_UNLOADED = (
+ "process.runtime.jvm.classes.unloaded"
+ )
+ """
+ Number of classes unloaded since JVM start
+ Instrument: counter
+ Unit: {class}
+ """
+
+ PROCESS_RUNTIME_JVM_CLASSES_CURRENT_LOADED = (
+ "process.runtime.jvm.classes.current_loaded"
+ )
+ """
+ Number of classes currently loaded
+ Instrument: updowncounter
+ Unit: {class}
+ """
+
+ PROCESS_RUNTIME_JVM_CPU_TIME = "process.runtime.jvm.cpu.time"
+ """
+ CPU time used by the process as reported by the JVM
+ Instrument: counter
+ Unit: s
+ """
+
+ PROCESS_RUNTIME_JVM_CPU_RECENT_UTILIZATION = (
+ "process.runtime.jvm.cpu.recent_utilization"
+ )
+ """
+ Recent CPU utilization for the process as reported by the JVM
+ Instrument: gauge
+ Unit: 1
+ """
+
+ # Manually defined metrics
+
+ DB_CLIENT_CONNECTIONS_USAGE = "db.client.connections.usage"
+ """
+    The number of connections that are currently in the state described by the `state` attribute
+ Instrument: UpDownCounter
+ Unit: {connection}
+ """
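+
+
+# A minimal usage sketch (an illustration, not generated from semconv; it
+# assumes the OpenTelemetry API package is installed): the constants above
+# are intended to be used as instrument names with the metrics API.
+def _example_usage():  # pragma: no cover
+    from opentelemetry import metrics
+
+    meter = metrics.get_meter(__name__)
+    duration = meter.create_histogram(
+        MetricInstruments.HTTP_SERVER_DURATION, unit="s"
+    )
+    duration.record(0.125)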
diff --git a/opentelemetry-semantic-conventions/src/opentelemetry/semconv/resource/__init__.py b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/resource/__init__.py
new file mode 100644
index 0000000000..590135934c
--- /dev/null
+++ b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/resource/__init__.py
@@ -0,0 +1,863 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+
+from enum import Enum
+
+
+class ResourceAttributes:
+ SCHEMA_URL = "https://opentelemetry.io/schemas/v1.21.0"
+ """
+ The URL of the OpenTelemetry schema for these keys and values.
+ """
+ BROWSER_BRANDS = "browser.brands"
+ """
+ Array of brand name and version separated by a space.
+ Note: This value is intended to be taken from the [UA client hints API](https://wicg.github.io/ua-client-hints/#interface) (`navigator.userAgentData.brands`).
+ """
+
+ BROWSER_PLATFORM = "browser.platform"
+ """
+ The platform on which the browser is running.
+ Note: This value is intended to be taken from the [UA client hints API](https://wicg.github.io/ua-client-hints/#interface) (`navigator.userAgentData.platform`). If unavailable, the legacy `navigator.platform` API SHOULD NOT be used instead and this attribute SHOULD be left unset in order for the values to be consistent.
+ The list of possible values is defined in the [W3C User-Agent Client Hints specification](https://wicg.github.io/ua-client-hints/#sec-ch-ua-platform). Note that some (but not all) of these values can overlap with values in the [`os.type` and `os.name` attributes](./os.md). However, for consistency, the values in the `browser.platform` attribute should capture the exact value that the user agent provides.
+ """
+
+ BROWSER_MOBILE = "browser.mobile"
+ """
+ A boolean that is true if the browser is running on a mobile device.
+ Note: This value is intended to be taken from the [UA client hints API](https://wicg.github.io/ua-client-hints/#interface) (`navigator.userAgentData.mobile`). If unavailable, this attribute SHOULD be left unset.
+ """
+
+ BROWSER_LANGUAGE = "browser.language"
+ """
+ Preferred language of the user using the browser.
+ Note: This value is intended to be taken from the Navigator API `navigator.language`.
+ """
+
+ USER_AGENT_ORIGINAL = "user_agent.original"
+ """
+ Full user-agent string provided by the browser.
+ Note: The user-agent value SHOULD be provided only from browsers that do not have a mechanism to retrieve brands and platform individually from the User-Agent Client Hints API. To retrieve the value, the legacy `navigator.userAgent` API can be used.
+ """
+
+ CLOUD_PROVIDER = "cloud.provider"
+ """
+ Name of the cloud provider.
+ """
+
+ CLOUD_ACCOUNT_ID = "cloud.account.id"
+ """
+ The cloud account ID the resource is assigned to.
+ """
+
+ CLOUD_REGION = "cloud.region"
+ """
+    The geographical region the resource is running in.
+ Note: Refer to your provider's docs to see the available regions, for example [Alibaba Cloud regions](https://www.alibabacloud.com/help/doc-detail/40654.htm), [AWS regions](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/), [Azure regions](https://azure.microsoft.com/en-us/global-infrastructure/geographies/), [Google Cloud regions](https://cloud.google.com/about/locations), or [Tencent Cloud regions](https://www.tencentcloud.com/document/product/213/6091).
+ """
+
+ CLOUD_RESOURCE_ID = "cloud.resource_id"
+ """
+ Cloud provider-specific native identifier of the monitored cloud resource (e.g. an [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) on AWS, a [fully qualified resource ID](https://learn.microsoft.com/en-us/rest/api/resources/resources/get-by-id) on Azure, a [full resource name](https://cloud.google.com/apis/design/resource_names#full_resource_name) on GCP).
+ Note: On some cloud providers, it may not be possible to determine the full ID at startup,
+ so it may be necessary to set `cloud.resource_id` as a span attribute instead.
+
+ The exact value to use for `cloud.resource_id` depends on the cloud provider.
+ The following well-known definitions MUST be used if you set this attribute and they apply:
+
+ * **AWS Lambda:** The function [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
+ Take care not to use the "invoked ARN" directly but replace any
+ [alias suffix](https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html)
+ with the resolved function version, as the same runtime instance may be invokable with
+ multiple different aliases.
+ * **GCP:** The [URI of the resource](https://cloud.google.com/iam/docs/full-resource-names)
+ * **Azure:** The [Fully Qualified Resource ID](https://docs.microsoft.com/en-us/rest/api/resources/resources/get-by-id) of the invoked function,
+ *not* the function app, having the form
+      `/subscriptions/<SUBSCRIPTION_GUID>/resourceGroups/<RG>/providers/Microsoft.Web/sites/<FUNCAPP>/functions/<FUNC>`.
+ This means that a span attribute MUST be used, as an Azure function app can host multiple functions that would usually share
+ a TracerProvider.
+ """
+
+ CLOUD_AVAILABILITY_ZONE = "cloud.availability_zone"
+ """
+ Cloud regions often have multiple, isolated locations known as zones to increase availability. Availability zone represents the zone where the resource is running.
+ Note: Availability zones are called "zones" on Alibaba Cloud and Google Cloud.
+ """
+
+ CLOUD_PLATFORM = "cloud.platform"
+ """
+ The cloud platform in use.
+ Note: The prefix of the service SHOULD match the one specified in `cloud.provider`.
+ """
+
+ AWS_ECS_CONTAINER_ARN = "aws.ecs.container.arn"
+ """
+ The Amazon Resource Name (ARN) of an [ECS container instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html).
+ """
+
+ AWS_ECS_CLUSTER_ARN = "aws.ecs.cluster.arn"
+ """
+ The ARN of an [ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html).
+ """
+
+ AWS_ECS_LAUNCHTYPE = "aws.ecs.launchtype"
+ """
+ The [launch type](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html) for an ECS task.
+ """
+
+ AWS_ECS_TASK_ARN = "aws.ecs.task.arn"
+ """
+ The ARN of an [ECS task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html).
+ """
+
+ AWS_ECS_TASK_FAMILY = "aws.ecs.task.family"
+ """
+ The task definition family this task definition is a member of.
+ """
+
+ AWS_ECS_TASK_REVISION = "aws.ecs.task.revision"
+ """
+ The revision for this task definition.
+ """
+
+ AWS_EKS_CLUSTER_ARN = "aws.eks.cluster.arn"
+ """
+ The ARN of an EKS cluster.
+ """
+
+ AWS_LOG_GROUP_NAMES = "aws.log.group.names"
+ """
+ The name(s) of the AWS log group(s) an application is writing to.
+    Note: Multiple log groups must be supported for cases like multi-container applications, where a single application has sidecar containers, each writing to its own log group.
+ """
+
+ AWS_LOG_GROUP_ARNS = "aws.log.group.arns"
+ """
+ The Amazon Resource Name(s) (ARN) of the AWS log group(s).
+ Note: See the [log group ARN format documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html#CWL_ARN_Format).
+ """
+
+ AWS_LOG_STREAM_NAMES = "aws.log.stream.names"
+ """
+ The name(s) of the AWS log stream(s) an application is writing to.
+ """
+
+ AWS_LOG_STREAM_ARNS = "aws.log.stream.arns"
+ """
+ The ARN(s) of the AWS log stream(s).
+ Note: See the [log stream ARN format documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html#CWL_ARN_Format). One log group can contain several log streams, so these ARNs necessarily identify both a log group and a log stream.
+ """
+
+ GCP_CLOUD_RUN_JOB_EXECUTION = "gcp.cloud_run.job.execution"
+ """
+ The name of the Cloud Run [execution](https://cloud.google.com/run/docs/managing/job-executions) being run for the Job, as set by the [`CLOUD_RUN_EXECUTION`](https://cloud.google.com/run/docs/container-contract#jobs-env-vars) environment variable.
+ """
+
+ GCP_CLOUD_RUN_JOB_TASK_INDEX = "gcp.cloud_run.job.task_index"
+ """
+ The index for a task within an execution as provided by the [`CLOUD_RUN_TASK_INDEX`](https://cloud.google.com/run/docs/container-contract#jobs-env-vars) environment variable.
+ """
+
+ GCP_GCE_INSTANCE_NAME = "gcp.gce.instance.name"
+ """
+ The instance name of a GCE instance. This is the value provided by `host.name`, the visible name of the instance in the Cloud Console UI, and the prefix for the default hostname of the instance as defined by the [default internal DNS name](https://cloud.google.com/compute/docs/internal-dns#instance-fully-qualified-domain-names).
+ """
+
+ GCP_GCE_INSTANCE_HOSTNAME = "gcp.gce.instance.hostname"
+ """
+ The hostname of a GCE instance. This is the full value of the default or [custom hostname](https://cloud.google.com/compute/docs/instances/custom-hostname-vm).
+ """
+
+ HEROKU_RELEASE_CREATION_TIMESTAMP = "heroku.release.creation_timestamp"
+ """
+ Time and date the release was created.
+ """
+
+ HEROKU_RELEASE_COMMIT = "heroku.release.commit"
+ """
+ Commit hash for the current release.
+ """
+
+ HEROKU_APP_ID = "heroku.app.id"
+ """
+ Unique identifier for the application.
+ """
+
+ CONTAINER_NAME = "container.name"
+ """
+ Container name used by container runtime.
+ """
+
+ CONTAINER_ID = "container.id"
+ """
+ Container ID. Usually a UUID, as for example used to [identify Docker containers](https://docs.docker.com/engine/reference/run/#container-identification). The UUID might be abbreviated.
+ """
+
+ CONTAINER_RUNTIME = "container.runtime"
+ """
+ The container runtime managing this container.
+ """
+
+ CONTAINER_IMAGE_NAME = "container.image.name"
+ """
+ Name of the image the container was built on.
+ """
+
+ CONTAINER_IMAGE_TAG = "container.image.tag"
+ """
+ Container image tag.
+ """
+
+ CONTAINER_IMAGE_ID = "container.image.id"
+ """
+ Runtime specific image identifier. Usually a hash algorithm followed by a UUID.
+ Note: Docker defines a sha256 of the image id; `container.image.id` corresponds to the `Image` field from the Docker container inspect [API](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerInspect) endpoint.
+    K8s defines a link to the container registry repository with digest `"imageID": "registry.azurecr.io/namespace/service/dockerfile@sha256:bdeabd40c3a8a492eaf9e8e44d0ebbb84bac7ee25ac0cf8a7159d25f62555625"`.
+ OCI defines a digest of manifest.
+ """
+
+ CONTAINER_COMMAND = "container.command"
+ """
+ The command used to run the container (i.e. the command name).
+ Note: If using embedded credentials or sensitive data, it is recommended to remove them to prevent potential leakage.
+ """
+
+ CONTAINER_COMMAND_LINE = "container.command_line"
+ """
+ The full command run by the container as a single string representing the full command. [2].
+ """
+
+ CONTAINER_COMMAND_ARGS = "container.command_args"
+ """
+ All the command arguments (including the command/executable itself) run by the container. [2].
+ """
+
+ DEPLOYMENT_ENVIRONMENT = "deployment.environment"
+ """
+ Name of the [deployment environment](https://en.wikipedia.org/wiki/Deployment_environment) (aka deployment tier).
+ """
+
+ DEVICE_ID = "device.id"
+ """
+ A unique identifier representing the device.
+ Note: The device identifier MUST only be defined using the values outlined below. This value is not an advertising identifier and MUST NOT be used as such. On iOS (Swift or Objective-C), this value MUST be equal to the [vendor identifier](https://developer.apple.com/documentation/uikit/uidevice/1620059-identifierforvendor). On Android (Java or Kotlin), this value MUST be equal to the Firebase Installation ID or a globally unique UUID which is persisted across sessions in your application. More information can be found [here](https://developer.android.com/training/articles/user-data-ids) on best practices and exact implementation details. Caution should be taken when storing personal data or anything which can identify a user. GDPR and data protection laws may apply, ensure you do your own due diligence.
+ """
+
+ DEVICE_MODEL_IDENTIFIER = "device.model.identifier"
+ """
+ The model identifier for the device.
+ Note: It's recommended this value represents a machine readable version of the model identifier rather than the market or consumer-friendly name of the device.
+ """
+
+ DEVICE_MODEL_NAME = "device.model.name"
+ """
+ The marketing name for the device model.
+ Note: It's recommended this value represents a human readable version of the device model rather than a machine readable alternative.
+ """
+
+ DEVICE_MANUFACTURER = "device.manufacturer"
+ """
+ The name of the device manufacturer.
+ Note: The Android OS provides this field via [Build](https://developer.android.com/reference/android/os/Build#MANUFACTURER). iOS apps SHOULD hardcode the value `Apple`.
+ """
+
+ FAAS_NAME = "faas.name"
+ """
+ The name of the single function that this runtime instance executes.
+ Note: This is the name of the function as configured/deployed on the FaaS
+ platform and is usually different from the name of the callback
+ function (which may be stored in the
+ [`code.namespace`/`code.function`](/docs/general/general-attributes.md#source-code-attributes)
+ span attributes).
+
+ For some cloud providers, the above definition is ambiguous. The following
+ definition of function name MUST be used for this attribute
+ (and consequently the span name) for the listed cloud providers/products:
+
+    * **Azure:** The full name `<FUNCAPP>/<FUNC>`, i.e., function app name
+ followed by a forward slash followed by the function name (this form
+ can also be seen in the resource JSON for the function).
+ This means that a span attribute MUST be used, as an Azure function
+ app can host multiple functions that would usually share
+ a TracerProvider (see also the `cloud.resource_id` attribute).
+ """
+
+ FAAS_VERSION = "faas.version"
+ """
+ The immutable version of the function being executed.
+ Note: Depending on the cloud provider and platform, use:
+
+ * **AWS Lambda:** The [function version](https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html)
+ (an integer represented as a decimal string).
+ * **Google Cloud Run (Services):** The [revision](https://cloud.google.com/run/docs/managing/revisions)
+ (i.e., the function name plus the revision suffix).
+ * **Google Cloud Functions:** The value of the
+ [`K_REVISION` environment variable](https://cloud.google.com/functions/docs/env-var#runtime_environment_variables_set_automatically).
+ * **Azure Functions:** Not applicable. Do not set this attribute.
+ """
+
+ FAAS_INSTANCE = "faas.instance"
+ """
+    The execution environment ID as a string, which may be reused for other invocations of the same function/function version.
+ Note: * **AWS Lambda:** Use the (full) log stream name.
+ """
+
+ FAAS_MAX_MEMORY = "faas.max_memory"
+ """
+ The amount of memory available to the serverless function converted to Bytes.
+ Note: It's recommended to set this attribute since e.g. too little memory can easily stop a Java AWS Lambda function from working correctly. On AWS Lambda, the environment variable `AWS_LAMBDA_FUNCTION_MEMORY_SIZE` provides this information (which must be multiplied by 1,048,576).
+ """
+
+ HOST_ID = "host.id"
+ """
+ Unique host ID. For Cloud, this must be the instance_id assigned by the cloud provider. For non-containerized systems, this should be the `machine-id`. See the table below for the sources to use to determine the `machine-id` based on operating system.
+ """
+
+ HOST_NAME = "host.name"
+ """
+ Name of the host. On Unix systems, it may contain what the hostname command returns, or the fully qualified hostname, or another name specified by the user.
+ """
+
+ HOST_TYPE = "host.type"
+ """
+ Type of host. For Cloud, this must be the machine type.
+ """
+
+ HOST_ARCH = "host.arch"
+ """
+ The CPU architecture the host system is running on.
+ """
+
+ HOST_IMAGE_NAME = "host.image.name"
+ """
+ Name of the VM image or OS install the host was instantiated from.
+ """
+
+ HOST_IMAGE_ID = "host.image.id"
+ """
+ VM image ID or host OS image ID. For Cloud, this value is from the provider.
+ """
+
+ HOST_IMAGE_VERSION = "host.image.version"
+ """
+ The version string of the VM image or host OS as defined in [Version Attributes](README.md#version-attributes).
+ """
+
+ K8S_CLUSTER_NAME = "k8s.cluster.name"
+ """
+ The name of the cluster.
+ """
+
+ K8S_CLUSTER_UID = "k8s.cluster.uid"
+ """
+ A pseudo-ID for the cluster, set to the UID of the `kube-system` namespace.
+ Note: K8s does not have support for obtaining a cluster ID. If this is ever
+ added, we will recommend collecting the `k8s.cluster.uid` through the
+ official APIs. In the meantime, we are able to use the `uid` of the
+ `kube-system` namespace as a proxy for cluster ID. Read on for the
+ rationale.
+
+ Every object created in a K8s cluster is assigned a distinct UID. The
+ `kube-system` namespace is used by Kubernetes itself and will exist
+ for the lifetime of the cluster. Using the `uid` of the `kube-system`
+ namespace is a reasonable proxy for the K8s ClusterID as it will only
+ change if the cluster is rebuilt. Furthermore, Kubernetes UIDs are
+ UUIDs as standardized by
+ [ISO/IEC 9834-8 and ITU-T X.667](https://www.itu.int/ITU-T/studygroups/com17/oid.html).
+ Which states:
+
+ > If generated according to one of the mechanisms defined in Rec.
+ ITU-T X.667 | ISO/IEC 9834-8, a UUID is either guaranteed to be
+ different from all other UUIDs generated before 3603 A.D., or is
+ extremely likely to be different (depending on the mechanism chosen).
+
+ Therefore, UIDs between clusters should be extremely unlikely to
+ conflict.
+ """
+
+ K8S_NODE_NAME = "k8s.node.name"
+ """
+ The name of the Node.
+ """
+
+ K8S_NODE_UID = "k8s.node.uid"
+ """
+ The UID of the Node.
+ """
+
+ K8S_NAMESPACE_NAME = "k8s.namespace.name"
+ """
+ The name of the namespace that the pod is running in.
+ """
+
+ K8S_POD_UID = "k8s.pod.uid"
+ """
+ The UID of the Pod.
+ """
+
+ K8S_POD_NAME = "k8s.pod.name"
+ """
+ The name of the Pod.
+ """
+
+ K8S_CONTAINER_NAME = "k8s.container.name"
+ """
+    The name of the Container from Pod specification, must be unique within a Pod. The container runtime usually uses a different, globally unique name (`container.name`).
+ """
+
+ K8S_CONTAINER_RESTART_COUNT = "k8s.container.restart_count"
+ """
+ Number of times the container was restarted. This attribute can be used to identify a particular container (running or stopped) within a container spec.
+ """
+
+ K8S_REPLICASET_UID = "k8s.replicaset.uid"
+ """
+ The UID of the ReplicaSet.
+ """
+
+ K8S_REPLICASET_NAME = "k8s.replicaset.name"
+ """
+ The name of the ReplicaSet.
+ """
+
+ K8S_DEPLOYMENT_UID = "k8s.deployment.uid"
+ """
+ The UID of the Deployment.
+ """
+
+ K8S_DEPLOYMENT_NAME = "k8s.deployment.name"
+ """
+ The name of the Deployment.
+ """
+
+ K8S_STATEFULSET_UID = "k8s.statefulset.uid"
+ """
+ The UID of the StatefulSet.
+ """
+
+ K8S_STATEFULSET_NAME = "k8s.statefulset.name"
+ """
+ The name of the StatefulSet.
+ """
+
+ K8S_DAEMONSET_UID = "k8s.daemonset.uid"
+ """
+ The UID of the DaemonSet.
+ """
+
+ K8S_DAEMONSET_NAME = "k8s.daemonset.name"
+ """
+ The name of the DaemonSet.
+ """
+
+ K8S_JOB_UID = "k8s.job.uid"
+ """
+ The UID of the Job.
+ """
+
+ K8S_JOB_NAME = "k8s.job.name"
+ """
+ The name of the Job.
+ """
+
+ K8S_CRONJOB_UID = "k8s.cronjob.uid"
+ """
+ The UID of the CronJob.
+ """
+
+ K8S_CRONJOB_NAME = "k8s.cronjob.name"
+ """
+ The name of the CronJob.
+ """
+
+ OS_TYPE = "os.type"
+ """
+ The operating system type.
+ """
+
+ OS_DESCRIPTION = "os.description"
+ """
+ Human readable (not intended to be parsed) OS version information, like e.g. reported by `ver` or `lsb_release -a` commands.
+ """
+
+ OS_NAME = "os.name"
+ """
+ Human readable operating system name.
+ """
+
+ OS_VERSION = "os.version"
+ """
+ The version string of the operating system as defined in [Version Attributes](/docs/resource/README.md#version-attributes).
+ """
+
+ PROCESS_PID = "process.pid"
+ """
+ Process identifier (PID).
+ """
+
+ PROCESS_PARENT_PID = "process.parent_pid"
+ """
+ Parent Process identifier (PID).
+ """
+
+ PROCESS_EXECUTABLE_NAME = "process.executable.name"
+ """
+ The name of the process executable. On Linux based systems, can be set to the `Name` in `proc/[pid]/status`. On Windows, can be set to the base name of `GetProcessImageFileNameW`.
+ """
+
+ PROCESS_EXECUTABLE_PATH = "process.executable.path"
+ """
+ The full path to the process executable. On Linux based systems, can be set to the target of `proc/[pid]/exe`. On Windows, can be set to the result of `GetProcessImageFileNameW`.
+ """
+
+ PROCESS_COMMAND = "process.command"
+ """
+ The command used to launch the process (i.e. the command name). On Linux based systems, can be set to the zeroth string in `proc/[pid]/cmdline`. On Windows, can be set to the first parameter extracted from `GetCommandLineW`.
+ """
+
+ PROCESS_COMMAND_LINE = "process.command_line"
+ """
+ The full command used to launch the process as a single string representing the full command. On Windows, can be set to the result of `GetCommandLineW`. Do not set this if you have to assemble it just for monitoring; use `process.command_args` instead.
+ """
+
+ PROCESS_COMMAND_ARGS = "process.command_args"
+ """
+ All the command arguments (including the command/executable itself) as received by the process. On Linux-based systems (and some other Unixoid systems supporting procfs), can be set according to the list of null-delimited strings extracted from `proc/[pid]/cmdline`. For libc-based executables, this would be the full argv vector passed to `main`.
+ """
+
+ PROCESS_OWNER = "process.owner"
+ """
+ The username of the user that owns the process.
+ """
+
+ PROCESS_RUNTIME_NAME = "process.runtime.name"
+ """
+ The name of the runtime of this process. For compiled native binaries, this SHOULD be the name of the compiler.
+ """
+
+ PROCESS_RUNTIME_VERSION = "process.runtime.version"
+ """
+ The version of the runtime of this process, as returned by the runtime without modification.
+ """
+
+ PROCESS_RUNTIME_DESCRIPTION = "process.runtime.description"
+ """
+ An additional description about the runtime of the process, for example a specific vendor customization of the runtime environment.
+ """
+
+ SERVICE_NAME = "service.name"
+ """
+ Logical name of the service.
+    Note: MUST be the same for all instances of horizontally scaled services. If the value was not specified, SDKs MUST fall back to `unknown_service:` concatenated with [`process.executable.name`](process.md#process), e.g. `unknown_service:bash`. If `process.executable.name` is not available, the value MUST be set to `unknown_service`.
+ """
+
+ SERVICE_VERSION = "service.version"
+ """
+ The version string of the service API or implementation. The format is not defined by these conventions.
+ """
+
+ SERVICE_NAMESPACE = "service.namespace"
+ """
+ A namespace for `service.name`.
+ Note: A string value having a meaning that helps to distinguish a group of services, for example the team name that owns a group of services. `service.name` is expected to be unique within the same namespace. If `service.namespace` is not specified in the Resource then `service.name` is expected to be unique for all services that have no explicit namespace defined (so the empty/unspecified namespace is simply one more valid namespace). Zero-length namespace string is assumed equal to unspecified namespace.
+ """
+
+ SERVICE_INSTANCE_ID = "service.instance.id"
+ """
+ The string ID of the service instance.
+ Note: MUST be unique for each instance of the same `service.namespace,service.name` pair (in other words `service.namespace,service.name,service.instance.id` triplet MUST be globally unique). The ID helps to distinguish instances of the same service that exist at the same time (e.g. instances of a horizontally scaled service). It is preferable for the ID to be persistent and stay the same for the lifetime of the service instance, however it is acceptable that the ID is ephemeral and changes during important lifetime events for the service (e.g. service restarts). If the service has no inherent unique ID that can be used as the value of this attribute it is recommended to generate a random Version 1 or Version 4 RFC 4122 UUID (services aiming for reproducible UUIDs may also use Version 5, see RFC 4122 for more recommendations).
+ """
+
+ TELEMETRY_SDK_NAME = "telemetry.sdk.name"
+ """
+ The name of the telemetry SDK as defined above.
+ Note: The OpenTelemetry SDK MUST set the `telemetry.sdk.name` attribute to `opentelemetry`.
+ If another SDK, like a fork or a vendor-provided implementation, is used, this SDK MUST set the
+ `telemetry.sdk.name` attribute to the fully-qualified class or module name of this SDK's main entry point
+ or another suitable identifier depending on the language.
+ The identifier `opentelemetry` is reserved and MUST NOT be used in this case.
+ All custom identifiers SHOULD be stable across different versions of an implementation.
+ """
+
+ TELEMETRY_SDK_LANGUAGE = "telemetry.sdk.language"
+ """
+ The language of the telemetry SDK.
+ """
+
+ TELEMETRY_SDK_VERSION = "telemetry.sdk.version"
+ """
+ The version string of the telemetry SDK.
+ """
+
+ TELEMETRY_AUTO_VERSION = "telemetry.auto.version"
+ """
+ The version string of the auto instrumentation agent, if used.
+ """
+
+ WEBENGINE_NAME = "webengine.name"
+ """
+ The name of the web engine.
+ """
+
+ WEBENGINE_VERSION = "webengine.version"
+ """
+ The version of the web engine.
+ """
+
+ WEBENGINE_DESCRIPTION = "webengine.description"
+ """
+ Additional description of the web engine (e.g. detailed version and edition information).
+ """
+
+ OTEL_SCOPE_NAME = "otel.scope.name"
+ """
+ The name of the instrumentation scope - (`InstrumentationScope.Name` in OTLP).
+ """
+
+ OTEL_SCOPE_VERSION = "otel.scope.version"
+ """
+ The version of the instrumentation scope - (`InstrumentationScope.Version` in OTLP).
+ """
+
+ OTEL_LIBRARY_NAME = "otel.library.name"
+ """
+ Deprecated, use the `otel.scope.name` attribute.
+ """
+
+ OTEL_LIBRARY_VERSION = "otel.library.version"
+ """
+ Deprecated, use the `otel.scope.version` attribute.
+ """
+
+ # Manually defined deprecated attributes
+
+ FAAS_ID = "faas.id"
+ """
+    Deprecated, use the `cloud.resource_id` attribute.
+ """
+
+
+class CloudProviderValues(Enum):
+ ALIBABA_CLOUD = "alibaba_cloud"
+ """Alibaba Cloud."""
+
+ AWS = "aws"
+ """Amazon Web Services."""
+
+ AZURE = "azure"
+ """Microsoft Azure."""
+
+ GCP = "gcp"
+ """Google Cloud Platform."""
+
+ HEROKU = "heroku"
+ """Heroku Platform as a Service."""
+
+ IBM_CLOUD = "ibm_cloud"
+ """IBM Cloud."""
+
+ TENCENT_CLOUD = "tencent_cloud"
+ """Tencent Cloud."""
+
+
+class CloudPlatformValues(Enum):
+ ALIBABA_CLOUD_ECS = "alibaba_cloud_ecs"
+ """Alibaba Cloud Elastic Compute Service."""
+
+ ALIBABA_CLOUD_FC = "alibaba_cloud_fc"
+ """Alibaba Cloud Function Compute."""
+
+ ALIBABA_CLOUD_OPENSHIFT = "alibaba_cloud_openshift"
+ """Red Hat OpenShift on Alibaba Cloud."""
+
+ AWS_EC2 = "aws_ec2"
+ """AWS Elastic Compute Cloud."""
+
+ AWS_ECS = "aws_ecs"
+ """AWS Elastic Container Service."""
+
+ AWS_EKS = "aws_eks"
+ """AWS Elastic Kubernetes Service."""
+
+ AWS_LAMBDA = "aws_lambda"
+ """AWS Lambda."""
+
+ AWS_ELASTIC_BEANSTALK = "aws_elastic_beanstalk"
+ """AWS Elastic Beanstalk."""
+
+ AWS_APP_RUNNER = "aws_app_runner"
+ """AWS App Runner."""
+
+ AWS_OPENSHIFT = "aws_openshift"
+ """Red Hat OpenShift on AWS (ROSA)."""
+
+ AZURE_VM = "azure_vm"
+ """Azure Virtual Machines."""
+
+ AZURE_CONTAINER_INSTANCES = "azure_container_instances"
+ """Azure Container Instances."""
+
+ AZURE_AKS = "azure_aks"
+ """Azure Kubernetes Service."""
+
+ AZURE_FUNCTIONS = "azure_functions"
+ """Azure Functions."""
+
+ AZURE_APP_SERVICE = "azure_app_service"
+ """Azure App Service."""
+
+ AZURE_OPENSHIFT = "azure_openshift"
+ """Azure Red Hat OpenShift."""
+
+ GCP_BARE_METAL_SOLUTION = "gcp_bare_metal_solution"
+ """Google Bare Metal Solution (BMS)."""
+
+ GCP_COMPUTE_ENGINE = "gcp_compute_engine"
+ """Google Cloud Compute Engine (GCE)."""
+
+ GCP_CLOUD_RUN = "gcp_cloud_run"
+ """Google Cloud Run."""
+
+ GCP_KUBERNETES_ENGINE = "gcp_kubernetes_engine"
+ """Google Cloud Kubernetes Engine (GKE)."""
+
+ GCP_CLOUD_FUNCTIONS = "gcp_cloud_functions"
+ """Google Cloud Functions (GCF)."""
+
+ GCP_APP_ENGINE = "gcp_app_engine"
+ """Google Cloud App Engine (GAE)."""
+
+ GCP_OPENSHIFT = "gcp_openshift"
+ """Red Hat OpenShift on Google Cloud."""
+
+ IBM_CLOUD_OPENSHIFT = "ibm_cloud_openshift"
+ """Red Hat OpenShift on IBM Cloud."""
+
+ TENCENT_CLOUD_CVM = "tencent_cloud_cvm"
+ """Tencent Cloud Cloud Virtual Machine (CVM)."""
+
+ TENCENT_CLOUD_EKS = "tencent_cloud_eks"
+ """Tencent Cloud Elastic Kubernetes Service (EKS)."""
+
+ TENCENT_CLOUD_SCF = "tencent_cloud_scf"
+ """Tencent Cloud Serverless Cloud Function (SCF)."""
+
+
+class AwsEcsLaunchtypeValues(Enum):
+ EC2 = "ec2"
+ """ec2."""
+
+ FARGATE = "fargate"
+ """fargate."""
+
+
+class HostArchValues(Enum):
+ AMD64 = "amd64"
+ """AMD64."""
+
+ ARM32 = "arm32"
+ """ARM32."""
+
+ ARM64 = "arm64"
+ """ARM64."""
+
+ IA64 = "ia64"
+ """Itanium."""
+
+ PPC32 = "ppc32"
+ """32-bit PowerPC."""
+
+ PPC64 = "ppc64"
+ """64-bit PowerPC."""
+
+ S390X = "s390x"
+ """IBM z/Architecture."""
+
+ X86 = "x86"
+ """32-bit x86."""
+
+
+class OsTypeValues(Enum):
+ WINDOWS = "windows"
+ """Microsoft Windows."""
+
+ LINUX = "linux"
+ """Linux."""
+
+ DARWIN = "darwin"
+ """Apple Darwin."""
+
+ FREEBSD = "freebsd"
+ """FreeBSD."""
+
+ NETBSD = "netbsd"
+ """NetBSD."""
+
+ OPENBSD = "openbsd"
+ """OpenBSD."""
+
+ DRAGONFLYBSD = "dragonflybsd"
+ """DragonFly BSD."""
+
+ HPUX = "hpux"
+ """HP-UX (Hewlett Packard Unix)."""
+
+ AIX = "aix"
+ """AIX (Advanced Interactive eXecutive)."""
+
+ SOLARIS = "solaris"
+ """SunOS, Oracle Solaris."""
+
+ Z_OS = "z_os"
+ """IBM z/OS."""
+
+
+class TelemetrySdkLanguageValues(Enum):
+ CPP = "cpp"
+ """cpp."""
+
+ DOTNET = "dotnet"
+ """dotnet."""
+
+ ERLANG = "erlang"
+ """erlang."""
+
+ GO = "go"
+ """go."""
+
+ JAVA = "java"
+ """java."""
+
+ NODEJS = "nodejs"
+ """nodejs."""
+
+ PHP = "php"
+ """php."""
+
+ PYTHON = "python"
+ """python."""
+
+ RUBY = "ruby"
+ """ruby."""
+
+ RUST = "rust"
+ """rust."""
+
+ SWIFT = "swift"
+ """swift."""
+
+ WEBJS = "webjs"
+ """webjs."""
diff --git a/opentelemetry-semantic-conventions/src/opentelemetry/semconv/trace/__init__.py b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/trace/__init__.py
new file mode 100644
index 0000000000..48df586dc1
--- /dev/null
+++ b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/trace/__init__.py
@@ -0,0 +1,2191 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+
+from enum import Enum
+
+from deprecated import deprecated
+
+
+class SpanAttributes:
+ SCHEMA_URL = "https://opentelemetry.io/schemas/v1.21.0"
+ """
+ The URL of the OpenTelemetry schema for these keys and values.
+ """
+ CLIENT_ADDRESS = "client.address"
+ """
+ Client address - unix domain socket name, IPv4 or IPv6 address.
+ Note: When observed from the server side, and when communicating through an intermediary, `client.address` SHOULD represent client address behind any intermediaries (e.g. proxies) if it's available.
+ """
+
+ CLIENT_PORT = "client.port"
+ """
+ Client port number.
+ Note: When observed from the server side, and when communicating through an intermediary, `client.port` SHOULD represent client port behind any intermediaries (e.g. proxies) if it's available.
+ """
+
+ CLIENT_SOCKET_ADDRESS = "client.socket.address"
+ """
+ Immediate client peer address - unix domain socket name, IPv4 or IPv6 address.
+ """
+
+ CLIENT_SOCKET_PORT = "client.socket.port"
+ """
+ Immediate client peer port number.
+ """
+
+ HTTP_METHOD = "http.method"
+ """
+ Deprecated, use `http.request.method` instead.
+ """
+
+ HTTP_STATUS_CODE = "http.status_code"
+ """
+ Deprecated, use `http.response.status_code` instead.
+ """
+
+ HTTP_SCHEME = "http.scheme"
+ """
+ Deprecated, use `url.scheme` instead.
+ """
+
+ HTTP_URL = "http.url"
+ """
+ Deprecated, use `url.full` instead.
+ """
+
+ HTTP_TARGET = "http.target"
+ """
+ Deprecated, use `url.path` and `url.query` instead.
+ """
+
+ HTTP_REQUEST_CONTENT_LENGTH = "http.request_content_length"
+ """
+ Deprecated, use `http.request.body.size` instead.
+ """
+
+ HTTP_RESPONSE_CONTENT_LENGTH = "http.response_content_length"
+ """
+ Deprecated, use `http.response.body.size` instead.
+ """
+
+ NET_SOCK_PEER_NAME = "net.sock.peer.name"
+ """
+ Deprecated, use `server.socket.domain` on client spans.
+ """
+
+ NET_SOCK_PEER_ADDR = "net.sock.peer.addr"
+ """
+ Deprecated, use `server.socket.address` on client spans and `client.socket.address` on server spans.
+ """
+
+ NET_SOCK_PEER_PORT = "net.sock.peer.port"
+ """
+ Deprecated, use `server.socket.port` on client spans and `client.socket.port` on server spans.
+ """
+
+ NET_PEER_NAME = "net.peer.name"
+ """
+ Deprecated, use `server.address` on client spans and `client.address` on server spans.
+ """
+
+ NET_PEER_PORT = "net.peer.port"
+ """
+ Deprecated, use `server.port` on client spans and `client.port` on server spans.
+ """
+
+ NET_HOST_NAME = "net.host.name"
+ """
+ Deprecated, use `server.address`.
+ """
+
+ NET_HOST_PORT = "net.host.port"
+ """
+ Deprecated, use `server.port`.
+ """
+
+ NET_SOCK_HOST_ADDR = "net.sock.host.addr"
+ """
+ Deprecated, use `server.socket.address`.
+ """
+
+ NET_SOCK_HOST_PORT = "net.sock.host.port"
+ """
+ Deprecated, use `server.socket.port`.
+ """
+
+ NET_TRANSPORT = "net.transport"
+ """
+ Deprecated, use `network.transport`.
+ """
+
+ NET_PROTOCOL_NAME = "net.protocol.name"
+ """
+ Deprecated, use `network.protocol.name`.
+ """
+
+ NET_PROTOCOL_VERSION = "net.protocol.version"
+ """
+ Deprecated, use `network.protocol.version`.
+ """
+
+ NET_SOCK_FAMILY = "net.sock.family"
+ """
+ Deprecated, use `network.transport` and `network.type`.
+ """
+
+ DESTINATION_DOMAIN = "destination.domain"
+ """
+ The domain name of the destination system.
+ Note: This value may be a host name, a fully qualified domain name, or another host naming format.
+ """
+
+ DESTINATION_ADDRESS = "destination.address"
+ """
+ Peer address, for example IP address or UNIX socket name.
+ """
+
+ DESTINATION_PORT = "destination.port"
+ """
+ Peer port number.
+ """
+
+ EXCEPTION_TYPE = "exception.type"
+ """
+ The type of the exception (its fully-qualified class name, if applicable). The dynamic type of the exception should be preferred over the static type in languages that support it.
+ """
+
+ EXCEPTION_MESSAGE = "exception.message"
+ """
+ The exception message.
+ """
+
+ EXCEPTION_STACKTRACE = "exception.stacktrace"
+ """
+ A stacktrace as a string in the natural representation for the language runtime. The representation is to be determined and documented by each language SIG.
+ """
+
+ HTTP_REQUEST_METHOD = "http.request.method"
+ """
+ HTTP request method.
+ Note: HTTP request method value SHOULD be "known" to the instrumentation.
+ By default, this convention defines "known" methods as the ones listed in [RFC9110](https://www.rfc-editor.org/rfc/rfc9110.html#name-methods)
+ and the PATCH method defined in [RFC5789](https://www.rfc-editor.org/rfc/rfc5789.html).
+
+ If the HTTP request method is not known to instrumentation, it MUST set the `http.request.method` attribute to `_OTHER` and, except if reporting a metric, MUST
+ set the exact method received in the request line as the value of the `http.request.method_original` attribute.
+
+ If the HTTP instrumentation could end up converting valid HTTP request methods to `_OTHER`, then it MUST provide a way to override
+ the list of known HTTP methods. If this override is done via environment variable, then the environment variable MUST be named
+ OTEL_INSTRUMENTATION_HTTP_KNOWN_METHODS and support a comma-separated list of case-sensitive known HTTP methods
+ (this list MUST be a full override of the default known methods; it is not a list of known methods in addition to the defaults).
+
+ HTTP method names are case-sensitive and the `http.request.method` attribute value MUST match a known HTTP method name exactly.
+ Instrumentations for specific web frameworks that consider HTTP methods to be case-insensitive SHOULD populate a canonical equivalent.
+ Tracing instrumentations that do so MUST also set `http.request.method_original` to the original value.
+ """
+
+ HTTP_RESPONSE_STATUS_CODE = "http.response.status_code"
+ """
+ [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6).
+ """
+
+ NETWORK_PROTOCOL_NAME = "network.protocol.name"
+ """
+ [OSI Application Layer](https://osi-model.com/application-layer/) or non-OSI equivalent. The value SHOULD be normalized to lowercase.
+ """
+
+ NETWORK_PROTOCOL_VERSION = "network.protocol.version"
+ """
+ Version of the application layer protocol used. See note below.
+ Note: `network.protocol.version` refers to the version of the protocol used and might be different from the protocol client's version. If the HTTP client used has a version of `0.27.2`, but sends HTTP version `1.1`, this attribute should be set to `1.1`.
+ """
+
+ SERVER_ADDRESS = "server.address"
+ """
+ Host identifier of the ["URI origin"](https://www.rfc-editor.org/rfc/rfc9110.html#name-uri-origin) HTTP request is sent to.
+ Note: Determined by using the first of the following that applies
+
+ - Host identifier of the [request target](https://www.rfc-editor.org/rfc/rfc9110.html#target.resource)
+ if it's sent in absolute-form
+ - Host identifier of the `Host` header
+
+ SHOULD NOT be set if capturing it would require an extra DNS lookup.
+ """
+
+ SERVER_PORT = "server.port"
+ """
+ Port identifier of the ["URI origin"](https://www.rfc-editor.org/rfc/rfc9110.html#name-uri-origin) HTTP request is sent to.
+ Note: When [request target](https://www.rfc-editor.org/rfc/rfc9110.html#target.resource) is absolute URI, `server.port` MUST match URI port identifier, otherwise it MUST match `Host` header port identifier.
+ """
+
+ HTTP_ROUTE = "http.route"
+ """
+ The matched route (path template in the format used by the respective server framework). See note below.
+ Note: MUST NOT be populated when this is not supported by the HTTP server framework as the route attribute should have low-cardinality and the URI path can NOT substitute it.
+ SHOULD include the [application root](/docs/http/http-spans.md#http-server-definitions) if there is one.
+ """
+
+ URL_SCHEME = "url.scheme"
+ """
+ The [URI scheme](https://www.rfc-editor.org/rfc/rfc3986#section-3.1) component identifying the used protocol.
+ """
+
+ EVENT_NAME = "event.name"
+ """
+ The name identifies the event.
+ """
+
+ EVENT_DOMAIN = "event.domain"
+ """
+ The domain identifies the business context for the events.
+ Note: Events across different domains may have the same `event.name`, yet be
+ unrelated events.
+ """
+
+ LOG_RECORD_UID = "log.record.uid"
+ """
+ A unique identifier for the Log Record.
+ Note: If an id is provided, other log records with the same id will be considered duplicates and can be removed safely. This means that two distinguishable log records MUST have different values.
+ The id MAY be a [Universally Unique Lexicographically Sortable Identifier (ULID)](https://github.com/ulid/spec), but other identifiers (e.g. UUID) may be used as needed.
+ """
+
+ FEATURE_FLAG_KEY = "feature_flag.key"
+ """
+ The unique identifier of the feature flag.
+ """
+
+ FEATURE_FLAG_PROVIDER_NAME = "feature_flag.provider_name"
+ """
+ The name of the service provider that performs the flag evaluation.
+ """
+
+ FEATURE_FLAG_VARIANT = "feature_flag.variant"
+ """
+ SHOULD be a semantic identifier for a value. If one is unavailable, a stringified version of the value can be used.
+ Note: A semantic identifier, commonly referred to as a variant, provides a means
+ for referring to a value without including the value itself. This can
+ provide additional context for understanding the meaning behind a value.
+ For example, the variant `red` may be used for the value `#c05543`.
+
+ A stringified version of the value can be used in situations where a
+ semantic identifier is unavailable. String representation of the value
+ should be determined by the implementer.
+ """
+
+ LOG_IOSTREAM = "log.iostream"
+ """
+ The stream associated with the log. See below for a list of well-known values.
+ """
+
+ LOG_FILE_NAME = "log.file.name"
+ """
+ The basename of the file.
+ """
+
+ LOG_FILE_PATH = "log.file.path"
+ """
+ The full path to the file.
+ """
+
+ LOG_FILE_NAME_RESOLVED = "log.file.name_resolved"
+ """
+ The basename of the file, with symlinks resolved.
+ """
+
+ LOG_FILE_PATH_RESOLVED = "log.file.path_resolved"
+ """
+ The full path to the file, with symlinks resolved.
+ """
+
+ SERVER_SOCKET_ADDRESS = "server.socket.address"
+ """
+ Physical server IP address or Unix socket address. If set from the client, should simply use the socket's peer address, and not attempt to find any actual server IP (i.e., if set from client, this may represent some proxy server instead of the logical server).
+ """
+
+ POOL = "pool"
+ """
+ Name of the buffer pool.
+ Note: Pool names are generally obtained via [BufferPoolMXBean#getName()](https://docs.oracle.com/en/java/javase/11/docs/api/java.management/java/lang/management/BufferPoolMXBean.html#getName()).
+ """
+
+ TYPE = "type"
+ """
+ The type of memory.
+ """
+
+ SERVER_SOCKET_DOMAIN = "server.socket.domain"
+ """
+ The domain name of an immediate peer.
+ Note: Typically observed from the client side, and represents a proxy or other intermediary domain name.
+ """
+
+ SERVER_SOCKET_PORT = "server.socket.port"
+ """
+ Physical server port.
+ """
+
+ SOURCE_DOMAIN = "source.domain"
+ """
+ The domain name of the source system.
+ Note: This value may be a host name, a fully qualified domain name, or another host naming format.
+ """
+
+ SOURCE_ADDRESS = "source.address"
+ """
+ Source address, for example IP address or Unix socket name.
+ """
+
+ SOURCE_PORT = "source.port"
+ """
+ Source port number.
+ """
+
+ AWS_LAMBDA_INVOKED_ARN = "aws.lambda.invoked_arn"
+ """
+ The full invoked ARN as provided on the `Context` passed to the function (the `Lambda-Runtime-Invoked-Function-Arn` header of the `/runtime/invocation/next` response, where applicable).
+ Note: This may be different from `cloud.resource_id` if an alias is involved.
+ """
+
+ CLOUDEVENTS_EVENT_ID = "cloudevents.event_id"
+ """
+ The [event_id](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#id) uniquely identifies the event.
+ """
+
+ CLOUDEVENTS_EVENT_SOURCE = "cloudevents.event_source"
+ """
+ The [source](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#source-1) identifies the context in which an event happened.
+ """
+
+ CLOUDEVENTS_EVENT_SPEC_VERSION = "cloudevents.event_spec_version"
+ """
+ The [version of the CloudEvents specification](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#specversion) which the event uses.
+ """
+
+ CLOUDEVENTS_EVENT_TYPE = "cloudevents.event_type"
+ """
+ The [event_type](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#type) contains a value describing the type of event related to the originating occurrence.
+ """
+
+ CLOUDEVENTS_EVENT_SUBJECT = "cloudevents.event_subject"
+ """
+ The [subject](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#subject) of the event in the context of the event producer (identified by source).
+ """
+
+ OPENTRACING_REF_TYPE = "opentracing.ref_type"
+ """
+ Parent-child Reference type.
+ Note: The causal relationship between a child Span and a parent Span.
+ """
+
+ DB_SYSTEM = "db.system"
+ """
+ An identifier for the database management system (DBMS) product being used. See below for a list of well-known identifiers.
+ """
+
+ DB_CONNECTION_STRING = "db.connection_string"
+ """
+ The connection string used to connect to the database. It is recommended to remove embedded credentials.
+ """
+
+ DB_USER = "db.user"
+ """
+ Username for accessing the database.
+ """
+
+ DB_JDBC_DRIVER_CLASSNAME = "db.jdbc.driver_classname"
+ """
+ The fully-qualified class name of the [Java Database Connectivity (JDBC)](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) driver used to connect.
+ """
+
+ DB_NAME = "db.name"
+ """
+ This attribute is used to report the name of the database being accessed. For commands that switch the database, this should be set to the target database (even if the command fails).
+ Note: In some SQL databases, the database name to be used is called "schema name". In case there are multiple layers that could be considered for database name (e.g. Oracle instance name and schema name), the database name to be used is the more specific layer (e.g. Oracle schema name).
+ """
+
+ DB_STATEMENT = "db.statement"
+ """
+ The database statement being executed.
+ """
+
+ DB_OPERATION = "db.operation"
+ """
+ The name of the operation being executed, e.g. the [MongoDB command name](https://docs.mongodb.com/manual/reference/command/#database-operations) such as `findAndModify`, or the SQL keyword.
+ Note: When setting this to an SQL keyword, it is not recommended to attempt any client-side parsing of `db.statement` just to get this property, but it should be set if the operation name is provided by the library being instrumented. If the SQL statement has an ambiguous operation, or performs more than one operation, this value may be omitted.
+ """
+
+ NETWORK_TRANSPORT = "network.transport"
+ """
+ [OSI Transport Layer](https://osi-model.com/transport-layer/) or [Inter-process Communication method](https://en.wikipedia.org/wiki/Inter-process_communication). The value SHOULD be normalized to lowercase.
+ """
+
+ NETWORK_TYPE = "network.type"
+ """
+ [OSI Network Layer](https://osi-model.com/network-layer/) or non-OSI equivalent. The value SHOULD be normalized to lowercase.
+ """
+
+ DB_MSSQL_INSTANCE_NAME = "db.mssql.instance_name"
+ """
+ The Microsoft SQL Server [instance name](https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=sql-server-ver15) connecting to. This name is used to determine the port of a named instance.
+ Note: If setting a `db.mssql.instance_name`, `server.port` is no longer required (but still recommended if non-standard).
+ """
+
+ DB_CASSANDRA_PAGE_SIZE = "db.cassandra.page_size"
+ """
+ The fetch size used for paging, i.e. how many rows will be returned at once.
+ """
+
+ DB_CASSANDRA_CONSISTENCY_LEVEL = "db.cassandra.consistency_level"
+ """
+ The consistency level of the query. Based on consistency values from [CQL](https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/dml/dmlConfigConsistency.html).
+ """
+
+ DB_CASSANDRA_TABLE = "db.cassandra.table"
+ """
+ The name of the primary table that the operation is acting upon, including the keyspace name (if applicable).
+ Note: This mirrors the db.sql.table attribute but references Cassandra rather than SQL. It is not recommended to attempt any client-side parsing of `db.statement` just to get this property, but it should be set if it is provided by the library being instrumented. If the operation is acting upon an anonymous table, or more than one table, this value MUST NOT be set.
+ """
+
+ DB_CASSANDRA_IDEMPOTENCE = "db.cassandra.idempotence"
+ """
+ Whether or not the query is idempotent.
+ """
+
+ DB_CASSANDRA_SPECULATIVE_EXECUTION_COUNT = (
+ "db.cassandra.speculative_execution_count"
+ )
+ """
+ The number of times a query was speculatively executed. Not set or `0` if the query was not executed speculatively.
+ """
+
+ DB_CASSANDRA_COORDINATOR_ID = "db.cassandra.coordinator.id"
+ """
+ The ID of the coordinating node for a query.
+ """
+
+ DB_CASSANDRA_COORDINATOR_DC = "db.cassandra.coordinator.dc"
+ """
+ The data center of the coordinating node for a query.
+ """
+
+ DB_REDIS_DATABASE_INDEX = "db.redis.database_index"
+ """
+ The index of the database being accessed as used in the [`SELECT` command](https://redis.io/commands/select), provided as an integer. To be used instead of the generic `db.name` attribute.
+ """
+
+ DB_MONGODB_COLLECTION = "db.mongodb.collection"
+ """
+ The collection being accessed within the database stated in `db.name`.
+ """
+
+ URL_FULL = "url.full"
+ """
+ Absolute URL describing a network resource according to [RFC3986](https://www.rfc-editor.org/rfc/rfc3986).
+ Note: For network calls, URL usually has `scheme://host[:port][path][?query][#fragment]` format, where the fragment is not transmitted over HTTP, but if it is known, it should be included nevertheless.
+ `url.full` MUST NOT contain credentials passed via URL in form of `https://username:password@www.example.com/`. In such case username and password should be redacted and attribute's value should be `https://REDACTED:REDACTED@www.example.com/`.
+ `url.full` SHOULD capture the absolute URL when it is available (or can be reconstructed) and SHOULD NOT be validated or modified except for sanitizing purposes.
+ """
+
+ DB_SQL_TABLE = "db.sql.table"
+ """
+ The name of the primary table that the operation is acting upon, including the database name (if applicable).
+ Note: It is not recommended to attempt any client-side parsing of `db.statement` just to get this property, but it should be set if it is provided by the library being instrumented. If the operation is acting upon an anonymous table, or more than one table, this value MUST NOT be set.
+ """
+
+ DB_COSMOSDB_CLIENT_ID = "db.cosmosdb.client_id"
+ """
+ Unique Cosmos client instance id.
+ """
+
+ DB_COSMOSDB_OPERATION_TYPE = "db.cosmosdb.operation_type"
+ """
+ CosmosDB Operation Type.
+ """
+
+ USER_AGENT_ORIGINAL = "user_agent.original"
+ """
+ The full user-agent string generated by the Cosmos DB SDK.
+ Note: The user-agent value is generated by the SDK and is a combination of
+ `sdk_version` : Current version of the SDK. e.g. 'cosmos-netstandard-sdk/3.23.0'
+ `direct_pkg_version` : Direct package version used by the Cosmos DB SDK. e.g. '3.23.1'
+ `number_of_client_instances` : Number of Cosmos client instances created by the application. e.g. '1'
+ `type_of_machine_architecture` : Machine architecture. e.g. 'X64'
+ `operating_system` : Operating system. e.g. 'Linux 5.4.0-1098-azure 104 18'
+ `runtime_framework` : Runtime framework. e.g. '.NET Core 3.1.32'
+ `failover_information` : Generated key to determine if region failover is enabled.
+ Format: Reg-{D (Disabled discovery)}-S(application region)|L(List of preferred regions)|N(None, user did not configure it).
+ Default value is "NS".
+ """
+
+ DB_COSMOSDB_CONNECTION_MODE = "db.cosmosdb.connection_mode"
+ """
+ Cosmos client connection mode.
+ """
+
+ DB_COSMOSDB_CONTAINER = "db.cosmosdb.container"
+ """
+ Cosmos DB container name.
+ """
+
+ DB_COSMOSDB_REQUEST_CONTENT_LENGTH = "db.cosmosdb.request_content_length"
+ """
+ Request payload size in bytes.
+ """
+
+ DB_COSMOSDB_STATUS_CODE = "db.cosmosdb.status_code"
+ """
+ Cosmos DB status code.
+ """
+
+ DB_COSMOSDB_SUB_STATUS_CODE = "db.cosmosdb.sub_status_code"
+ """
+ Cosmos DB sub status code.
+ """
+
+ DB_COSMOSDB_REQUEST_CHARGE = "db.cosmosdb.request_charge"
+ """
+ RU consumed for that operation.
+ """
+
+ OTEL_STATUS_CODE = "otel.status_code"
+ """
+ Name of the code, either "OK" or "ERROR". MUST NOT be set if the status code is UNSET.
+ """
+
+ OTEL_STATUS_DESCRIPTION = "otel.status_description"
+ """
+ Description of the Status if it has a value, otherwise not set.
+ """
+
+ FAAS_TRIGGER = "faas.trigger"
+ """
+ Type of the trigger which caused this function invocation.
+ Note: For the server/consumer span on the incoming side,
+ `faas.trigger` MUST be set.
+
+ Clients invoking FaaS instances usually cannot set `faas.trigger`,
+ since they would typically need to look in the payload to determine
+ the event type. If clients set it, it should be the same as the
+ trigger that the corresponding incoming invocation would have (i.e., this has
+ nothing to do with the underlying transport used to make the API
+ call to invoke the lambda, which is often HTTP).
+ """
+
+ FAAS_INVOCATION_ID = "faas.invocation_id"
+ """
+ The invocation ID of the current function invocation.
+ """
+
+ CLOUD_RESOURCE_ID = "cloud.resource_id"
+ """
+ Cloud provider-specific native identifier of the monitored cloud resource (e.g. an [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) on AWS, a [fully qualified resource ID](https://learn.microsoft.com/en-us/rest/api/resources/resources/get-by-id) on Azure, a [full resource name](https://cloud.google.com/apis/design/resource_names#full_resource_name) on GCP).
+ Note: On some cloud providers, it may not be possible to determine the full ID at startup,
+ so it may be necessary to set `cloud.resource_id` as a span attribute instead.
+
+ The exact value to use for `cloud.resource_id` depends on the cloud provider.
+ The following well-known definitions MUST be used if you set this attribute and they apply:
+
+ * **AWS Lambda:** The function [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
+ Take care not to use the "invoked ARN" directly but replace any
+ [alias suffix](https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html)
+ with the resolved function version, as the same runtime instance may be invokable with
+ multiple different aliases.
+ * **GCP:** The [URI of the resource](https://cloud.google.com/iam/docs/full-resource-names)
+ * **Azure:** The [Fully Qualified Resource ID](https://docs.microsoft.com/en-us/rest/api/resources/resources/get-by-id) of the invoked function,
+ *not* the function app, having the form
+ `/subscriptions/<SUBSCRIPTION_GUID>/resourceGroups/<RG>/providers/Microsoft.Web/sites/<FUNCAPP>/functions/<FUNC>`.
+ This means that a span attribute MUST be used, as an Azure function app can host multiple functions that would usually share
+ a TracerProvider.
+ """
+
+ FAAS_DOCUMENT_COLLECTION = "faas.document.collection"
+ """
+ The name of the source on which the triggering operation was performed. For example, in Cloud Storage or S3 this corresponds to the bucket name, and in Cosmos DB to the database name.
+ """
+
+ FAAS_DOCUMENT_OPERATION = "faas.document.operation"
+ """
+ Describes the type of the operation that was performed on the data.
+ """
+
+ FAAS_DOCUMENT_TIME = "faas.document.time"
+ """
+ A string containing the time when the data was accessed in the [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+ """
+
+ FAAS_DOCUMENT_NAME = "faas.document.name"
+ """
+ The document name/table subjected to the operation. For example, in Cloud Storage or S3 this is the name of the file, and in Cosmos DB the table name.
+ """
+
+ URL_PATH = "url.path"
+ """
+ The [URI path](https://www.rfc-editor.org/rfc/rfc3986#section-3.3) component.
+ Note: When missing, the value is assumed to be `/`.
+ """
+
+ URL_QUERY = "url.query"
+ """
+ The [URI query](https://www.rfc-editor.org/rfc/rfc3986#section-3.4) component.
+ Note: Sensitive content provided in query string SHOULD be scrubbed when instrumentations can identify it.
+ """
+
+ MESSAGING_SYSTEM = "messaging.system"
+ """
+ A string identifying the messaging system.
+ """
+
+ MESSAGING_OPERATION = "messaging.operation"
+ """
+ A string identifying the kind of messaging operation as defined in the [Operation names](#operation-names) section above.
+ Note: If a custom value is used, it MUST be of low cardinality.
+ """
+
+ MESSAGING_BATCH_MESSAGE_COUNT = "messaging.batch.message_count"
+ """
+ The number of messages sent, received, or processed in the scope of the batching operation.
+ Note: Instrumentations SHOULD NOT set `messaging.batch.message_count` on spans that operate with a single message. When a messaging client library supports both batch and single-message API for the same operation, instrumentations SHOULD use `messaging.batch.message_count` for batching APIs and SHOULD NOT use it for single-message APIs.
+ """
+
+ MESSAGING_CLIENT_ID = "messaging.client_id"
+ """
+ A unique identifier for the client that consumes or produces a message.
+ """
+
+ MESSAGING_DESTINATION_NAME = "messaging.destination.name"
+ """
+ The message destination name.
+ Note: Destination name SHOULD uniquely identify a specific queue, topic or other entity within the broker. If
+ the broker does not have such a notion, the destination name SHOULD uniquely identify the broker.
+ """
+
+ MESSAGING_DESTINATION_TEMPLATE = "messaging.destination.template"
+ """
+ Low cardinality representation of the messaging destination name.
+ Note: Destination names could be constructed from templates. An example would be a destination name involving a user name or product id. Although the destination name in this case is of high cardinality, the underlying template is of low cardinality and can be effectively used for grouping and aggregation.
+ """
+
+ MESSAGING_DESTINATION_TEMPORARY = "messaging.destination.temporary"
+ """
+ A boolean that is true if the message destination is temporary and might not exist anymore after messages are processed.
+ """
+
+ MESSAGING_DESTINATION_ANONYMOUS = "messaging.destination.anonymous"
+ """
+ A boolean that is true if the message destination is anonymous (could be unnamed or have auto-generated name).
+ """
+
+ MESSAGING_MESSAGE_ID = "messaging.message.id"
+ """
+ A value used by the messaging system as an identifier for the message, represented as a string.
+ """
+
+ MESSAGING_MESSAGE_CONVERSATION_ID = "messaging.message.conversation_id"
+ """
+ The [conversation ID](#conversations) identifying the conversation to which the message belongs, represented as a string. Sometimes called "Correlation ID".
+ """
+
+ MESSAGING_MESSAGE_PAYLOAD_SIZE_BYTES = (
+ "messaging.message.payload_size_bytes"
+ )
+ """
+ The (uncompressed) size of the message payload in bytes. Also use this attribute if it is unknown whether the compressed or uncompressed payload size is reported.
+ """
+
+ MESSAGING_MESSAGE_PAYLOAD_COMPRESSED_SIZE_BYTES = (
+ "messaging.message.payload_compressed_size_bytes"
+ )
+ """
+ The compressed size of the message payload in bytes.
+ """
+
+ FAAS_TIME = "faas.time"
+ """
+ A string containing the function invocation time in the [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+ """
+
+ FAAS_CRON = "faas.cron"
+ """
+ A string containing the schedule period as [Cron Expression](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm).
+ """
+
+ FAAS_COLDSTART = "faas.coldstart"
+ """
+ A boolean that is true if the serverless function is executed for the first time (aka cold-start).
+ """
+
+ FAAS_INVOKED_NAME = "faas.invoked_name"
+ """
+ The name of the invoked function.
+ Note: SHOULD be equal to the `faas.name` resource attribute of the invoked function.
+ """
+
+ FAAS_INVOKED_PROVIDER = "faas.invoked_provider"
+ """
+ The cloud provider of the invoked function.
+ Note: SHOULD be equal to the `cloud.provider` resource attribute of the invoked function.
+ """
+
+ FAAS_INVOKED_REGION = "faas.invoked_region"
+ """
+ The cloud region of the invoked function.
+ Note: SHOULD be equal to the `cloud.region` resource attribute of the invoked function.
+ """
+
+ NETWORK_CONNECTION_TYPE = "network.connection.type"
+ """
+ The internet connection type.
+ """
+
+ NETWORK_CONNECTION_SUBTYPE = "network.connection.subtype"
+ """
+ This describes more details regarding the `network.connection.type`. It may be the type of cell technology connection, but it could also be used to describe details about a wifi connection.
+ """
+
+ NETWORK_CARRIER_NAME = "network.carrier.name"
+ """
+ The name of the mobile carrier.
+ """
+
+ NETWORK_CARRIER_MCC = "network.carrier.mcc"
+ """
+ The mobile carrier country code.
+ """
+
+ NETWORK_CARRIER_MNC = "network.carrier.mnc"
+ """
+ The mobile carrier network code.
+ """
+
+ NETWORK_CARRIER_ICC = "network.carrier.icc"
+ """
+ The ISO 3166-1 alpha-2 2-character country code associated with the mobile carrier network.
+ """
+
+ PEER_SERVICE = "peer.service"
+ """
+ The [`service.name`](/docs/resource/README.md#service) of the remote service. SHOULD be equal to the actual `service.name` resource attribute of the remote service if any.
+ """
+
+ ENDUSER_ID = "enduser.id"
+ """
+ Username or client_id extracted from the access token or [Authorization](https://tools.ietf.org/html/rfc7235#section-4.2) header in the inbound request from outside the system.
+ """
+
+ ENDUSER_ROLE = "enduser.role"
+ """
+ Actual/assumed role the client is making the request under, extracted from the token or application security context.
+ """
+
+ ENDUSER_SCOPE = "enduser.scope"
+ """
+ Scopes or granted authorities the client currently possesses, extracted from the token or application security context. The value would come from the scope associated with an [OAuth 2.0 Access Token](https://tools.ietf.org/html/rfc6749#section-3.3) or an attribute value in a [SAML 2.0 Assertion](http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html).
+ """
+
+ THREAD_ID = "thread.id"
+ """
+ Current "managed" thread ID (as opposed to OS thread ID).
+ """
+
+ THREAD_NAME = "thread.name"
+ """
+ Current thread name.
+ """
+
+ CODE_FUNCTION = "code.function"
+ """
+ The method or function name, or equivalent (usually rightmost part of the code unit's name).
+ """
+
+ CODE_NAMESPACE = "code.namespace"
+ """
+ The "namespace" within which `code.function` is defined. Usually the qualified class or module name, such that `code.namespace` + some separator + `code.function` form a unique identifier for the code unit.
+ """
+
+ CODE_FILEPATH = "code.filepath"
+ """
+ The source code file name that identifies the code unit as uniquely as possible (preferably an absolute file path).
+ """
+
+ CODE_LINENO = "code.lineno"
+ """
+ The line number in `code.filepath` best representing the operation. It SHOULD point within the code unit named in `code.function`.
+ """
+
+ CODE_COLUMN = "code.column"
+ """
+ The column number in `code.filepath` best representing the operation. It SHOULD point within the code unit named in `code.function`.
+ """
+
+ HTTP_REQUEST_METHOD_ORIGINAL = "http.request.method_original"
+ """
+ Original HTTP method sent by the client in the request line.
+ """
+
+ HTTP_REQUEST_BODY_SIZE = "http.request.body.size"
+ """
+ The size of the request payload body in bytes. This is the number of bytes transferred excluding headers and is often, but not always, present as the [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length) header. For requests using transport encoding, this should be the compressed size.
+ """
+
+ HTTP_RESPONSE_BODY_SIZE = "http.response.body.size"
+ """
+ The size of the response payload body in bytes. This is the number of bytes transferred excluding headers and is often, but not always, present as the [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length) header. For requests using transport encoding, this should be the compressed size.
+ """
+
+ HTTP_RESEND_COUNT = "http.resend_count"
+ """
+ The ordinal number of request resending attempt (for any reason, including redirects).
+ Note: The resend count SHOULD be updated each time an HTTP request gets resent by the client, regardless of the cause of the resending (e.g. redirection, authorization failure, 503 Server Unavailable, network issues, or any other reason).
+ """
+
+ RPC_SYSTEM = "rpc.system"
+ """
+ A string identifying the remoting system.
+ """
+
+ RPC_SERVICE = "rpc.service"
+ """
+ The full (logical) name of the service being called, including its package name, if applicable.
+ Note: This is the logical name of the service from the RPC interface perspective, which can be different from the name of any implementing class. The `code.namespace` attribute may be used to store the latter (despite the attribute name, it may include a class name; e.g., class with method actually executing the call on the server side, RPC client stub class on the client side).
+ """
+
+ RPC_METHOD = "rpc.method"
+ """
+ The name of the (logical) method being called, which must be equal to the `$method` part in the span name.
+ Note: This is the logical name of the method from the RPC interface perspective, which can be different from the name of any implementing method/function. The `code.function` attribute may be used to store the latter (e.g., method actually executing the call on the server side, RPC client stub method on the client side).
+ """
+
+ AWS_REQUEST_ID = "aws.request_id"
+ """
+ The AWS request ID as returned in the response headers `x-amz-request-id` or `x-amz-requestid`.
+ """
+
+ AWS_DYNAMODB_TABLE_NAMES = "aws.dynamodb.table_names"
+ """
+ The keys in the `RequestItems` object field.
+ """
+
+ AWS_DYNAMODB_CONSUMED_CAPACITY = "aws.dynamodb.consumed_capacity"
+ """
+ The JSON-serialized value of each item in the `ConsumedCapacity` response field.
+ """
+
+ AWS_DYNAMODB_ITEM_COLLECTION_METRICS = (
+ "aws.dynamodb.item_collection_metrics"
+ )
+ """
+ The JSON-serialized value of the `ItemCollectionMetrics` response field.
+ """
+
+ AWS_DYNAMODB_PROVISIONED_READ_CAPACITY = (
+ "aws.dynamodb.provisioned_read_capacity"
+ )
+ """
+ The value of the `ProvisionedThroughput.ReadCapacityUnits` request parameter.
+ """
+
+ AWS_DYNAMODB_PROVISIONED_WRITE_CAPACITY = (
+ "aws.dynamodb.provisioned_write_capacity"
+ )
+ """
+ The value of the `ProvisionedThroughput.WriteCapacityUnits` request parameter.
+ """
+
+ AWS_DYNAMODB_CONSISTENT_READ = "aws.dynamodb.consistent_read"
+ """
+ The value of the `ConsistentRead` request parameter.
+ """
+
+ AWS_DYNAMODB_PROJECTION = "aws.dynamodb.projection"
+ """
+ The value of the `ProjectionExpression` request parameter.
+ """
+
+ AWS_DYNAMODB_LIMIT = "aws.dynamodb.limit"
+ """
+ The value of the `Limit` request parameter.
+ """
+
+ AWS_DYNAMODB_ATTRIBUTES_TO_GET = "aws.dynamodb.attributes_to_get"
+ """
+ The value of the `AttributesToGet` request parameter.
+ """
+
+ AWS_DYNAMODB_INDEX_NAME = "aws.dynamodb.index_name"
+ """
+ The value of the `IndexName` request parameter.
+ """
+
+ AWS_DYNAMODB_SELECT = "aws.dynamodb.select"
+ """
+ The value of the `Select` request parameter.
+ """
+
+ AWS_DYNAMODB_GLOBAL_SECONDARY_INDEXES = (
+ "aws.dynamodb.global_secondary_indexes"
+ )
+ """
+ The JSON-serialized value of each item of the `GlobalSecondaryIndexes` request field.
+ """
+
+ AWS_DYNAMODB_LOCAL_SECONDARY_INDEXES = (
+ "aws.dynamodb.local_secondary_indexes"
+ )
+ """
+ The JSON-serialized value of each item of the `LocalSecondaryIndexes` request field.
+ """
+
+ AWS_DYNAMODB_EXCLUSIVE_START_TABLE = "aws.dynamodb.exclusive_start_table"
+ """
+ The value of the `ExclusiveStartTableName` request parameter.
+ """
+
+ AWS_DYNAMODB_TABLE_COUNT = "aws.dynamodb.table_count"
+ """
+ The number of items in the `TableNames` response parameter.
+ """
+
+ AWS_DYNAMODB_SCAN_FORWARD = "aws.dynamodb.scan_forward"
+ """
+ The value of the `ScanIndexForward` request parameter.
+ """
+
+ AWS_DYNAMODB_SEGMENT = "aws.dynamodb.segment"
+ """
+ The value of the `Segment` request parameter.
+ """
+
+ AWS_DYNAMODB_TOTAL_SEGMENTS = "aws.dynamodb.total_segments"
+ """
+ The value of the `TotalSegments` request parameter.
+ """
+
+ AWS_DYNAMODB_COUNT = "aws.dynamodb.count"
+ """
+ The value of the `Count` response parameter.
+ """
+
+ AWS_DYNAMODB_SCANNED_COUNT = "aws.dynamodb.scanned_count"
+ """
+ The value of the `ScannedCount` response parameter.
+ """
+
+ AWS_DYNAMODB_ATTRIBUTE_DEFINITIONS = "aws.dynamodb.attribute_definitions"
+ """
+ The JSON-serialized value of each item in the `AttributeDefinitions` request field.
+ """
+
+ AWS_DYNAMODB_GLOBAL_SECONDARY_INDEX_UPDATES = (
+ "aws.dynamodb.global_secondary_index_updates"
+ )
+ """
+ The JSON-serialized value of each item in the `GlobalSecondaryIndexUpdates` request field.
+ """
+
+ AWS_S3_BUCKET = "aws.s3.bucket"
+ """
+ The S3 bucket name the request refers to. Corresponds to the `--bucket` parameter of the [S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html) operations.
+ Note: The `bucket` attribute is applicable to all S3 operations that reference a bucket, i.e. that require the bucket name as a mandatory parameter.
+ This applies to almost all S3 operations except `list-buckets`.
+ """
+
+ AWS_S3_KEY = "aws.s3.key"
+ """
+ The S3 object key the request refers to. Corresponds to the `--key` parameter of the [S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html) operations.
+ Note: The `key` attribute is applicable to all object-related S3 operations, i.e. that require the object key as a mandatory parameter.
+ This applies in particular to the following operations:
+
+ - [copy-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html)
+ - [delete-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-object.html)
+ - [get-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object.html)
+ - [head-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/head-object.html)
+ - [put-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html)
+ - [restore-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/restore-object.html)
+ - [select-object-content](https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html)
+ - [abort-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/s3api/abort-multipart-upload.html)
+ - [complete-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/s3api/complete-multipart-upload.html)
+ - [create-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-multipart-upload.html)
+ - [list-parts](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-parts.html)
+ - [upload-part](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part.html)
+ - [upload-part-copy](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html).
+ """
+
+ AWS_S3_COPY_SOURCE = "aws.s3.copy_source"
+ """
+ The source object (in the form `bucket`/`key`) for the copy operation.
+ Note: The `copy_source` attribute applies to S3 copy operations and corresponds to the `--copy-source` parameter
+ of the [copy-object operation within the S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html).
+ This applies in particular to the following operations:
+
+ - [copy-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/copy-object.html)
+ - [upload-part-copy](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html).
+ """
+
+ AWS_S3_UPLOAD_ID = "aws.s3.upload_id"
+ """
+ Upload ID that identifies the multipart upload.
+ Note: The `upload_id` attribute applies to S3 multipart-upload operations and corresponds to the `--upload-id` parameter
+ of the [S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html) multipart operations.
+ This applies in particular to the following operations:
+
+ - [abort-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/s3api/abort-multipart-upload.html)
+ - [complete-multipart-upload](https://docs.aws.amazon.com/cli/latest/reference/s3api/complete-multipart-upload.html)
+ - [list-parts](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-parts.html)
+ - [upload-part](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part.html)
+ - [upload-part-copy](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html).
+ """
+
+ AWS_S3_DELETE = "aws.s3.delete"
+ """
+ The delete request container that specifies the objects to be deleted.
+ Note: The `delete` attribute is only applicable to the [delete-object](https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-object.html) operation.
+ The `delete` attribute corresponds to the `--delete` parameter of the
+ [delete-objects operation within the S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-objects.html).
+ """
+
+ AWS_S3_PART_NUMBER = "aws.s3.part_number"
+ """
+ The part number of the part being uploaded in a multipart-upload operation. This is a positive integer between 1 and 10,000.
+ Note: The `part_number` attribute is only applicable to the [upload-part](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part.html)
+ and [upload-part-copy](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html) operations.
+ The `part_number` attribute corresponds to the `--part-number` parameter of the
+ [upload-part operation within the S3 API](https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part.html).
+ """
+
+ GRAPHQL_OPERATION_NAME = "graphql.operation.name"
+ """
+ The name of the operation being executed.
+ """
+
+ GRAPHQL_OPERATION_TYPE = "graphql.operation.type"
+ """
+ The type of the operation being executed.
+ """
+
+ GRAPHQL_DOCUMENT = "graphql.document"
+ """
+ The GraphQL document being executed.
+ Note: The value may be sanitized to exclude sensitive information.
+ """
+
+ MESSAGING_RABBITMQ_DESTINATION_ROUTING_KEY = (
+ "messaging.rabbitmq.destination.routing_key"
+ )
+ """
+ RabbitMQ message routing key.
+ """
+
+ MESSAGING_KAFKA_MESSAGE_KEY = "messaging.kafka.message.key"
+ """
+ Message keys in Kafka are used for grouping alike messages to ensure they're processed on the same partition. They differ from `messaging.message.id` in that they're not unique. If the key is `null`, the attribute MUST NOT be set.
+ Note: If the key type is not string, its string representation has to be supplied for the attribute. If the key has no unambiguous, canonical string form, don't include its value.
+ """
+
+ MESSAGING_KAFKA_CONSUMER_GROUP = "messaging.kafka.consumer.group"
+ """
+ Name of the Kafka Consumer Group that is handling the message. Only applies to consumers, not producers.
+ """
+
+ MESSAGING_KAFKA_DESTINATION_PARTITION = (
+ "messaging.kafka.destination.partition"
+ )
+ """
+ Partition the message is sent to.
+ """
+
+ MESSAGING_KAFKA_MESSAGE_OFFSET = "messaging.kafka.message.offset"
+ """
+ The offset of a record in the corresponding Kafka partition.
+ """
+
+ MESSAGING_KAFKA_MESSAGE_TOMBSTONE = "messaging.kafka.message.tombstone"
+ """
+ A boolean that is true if the message is a tombstone.
+ """
+
+ MESSAGING_ROCKETMQ_NAMESPACE = "messaging.rocketmq.namespace"
+ """
+ Namespace of RocketMQ resources; resources in different namespaces are individual.
+ """
+
+ MESSAGING_ROCKETMQ_CLIENT_GROUP = "messaging.rocketmq.client_group"
+ """
+ Name of the RocketMQ producer/consumer group that is handling the message. The client type is identified by the SpanKind.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_DELIVERY_TIMESTAMP = (
+ "messaging.rocketmq.message.delivery_timestamp"
+ )
+ """
+ The timestamp in milliseconds that the delay message is expected to be delivered to the consumer.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_DELAY_TIME_LEVEL = (
+ "messaging.rocketmq.message.delay_time_level"
+ )
+ """
+ The delay time level for a delay message, which determines the message delay time.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_GROUP = "messaging.rocketmq.message.group"
+ """
+ It is essential for FIFO messages. Messages that belong to the same message group are always processed one by one within the same consumer group.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_TYPE = "messaging.rocketmq.message.type"
+ """
+ Type of message.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_TAG = "messaging.rocketmq.message.tag"
+ """
+ The secondary classifier of message besides topic.
+ """
+
+ MESSAGING_ROCKETMQ_MESSAGE_KEYS = "messaging.rocketmq.message.keys"
+ """
+ Key(s) of the message, another way to mark the message besides the message id.
+ """
+
+ MESSAGING_ROCKETMQ_CONSUMPTION_MODEL = (
+ "messaging.rocketmq.consumption_model"
+ )
+ """
+ Model of message consumption. This only applies to consumer spans.
+ """
+
+ RPC_GRPC_STATUS_CODE = "rpc.grpc.status_code"
+ """
+ The [numeric status code](https://github.com/grpc/grpc/blob/v1.33.2/doc/statuscodes.md) of the gRPC request.
+ """
+
+ RPC_JSONRPC_VERSION = "rpc.jsonrpc.version"
+ """
+ Protocol version as in `jsonrpc` property of request/response. Since JSON-RPC 1.0 does not specify this, the value can be omitted.
+ """
+
+ RPC_JSONRPC_REQUEST_ID = "rpc.jsonrpc.request_id"
+ """
+ `id` property of request or response. Since the protocol allows id to be an int, string, `null`, or missing (for notifications), the value is expected to be cast to a string for simplicity. Use an empty string in case of a `null` value. Omit entirely if this is a notification.
+ """
+
+ RPC_JSONRPC_ERROR_CODE = "rpc.jsonrpc.error_code"
+ """
+ `error.code` property of response if it is an error response.
+ """
+
+ RPC_JSONRPC_ERROR_MESSAGE = "rpc.jsonrpc.error_message"
+ """
+ `error.message` property of response if it is an error response.
+ """
+
+ MESSAGE_TYPE = "message.type"
+ """
+ Whether this is a received or sent message.
+ """
+
+ MESSAGE_ID = "message.id"
+ """
+ MUST be calculated as two different counters starting from `1`: one for sent messages and one for received messages.
+ Note: This way we guarantee that the values will be consistent between different implementations.
+ """
+
+ MESSAGE_COMPRESSED_SIZE = "message.compressed_size"
+ """
+ Compressed size of the message in bytes.
+ """
+
+ MESSAGE_UNCOMPRESSED_SIZE = "message.uncompressed_size"
+ """
+ Uncompressed size of the message in bytes.
+ """
+
+ RPC_CONNECT_RPC_ERROR_CODE = "rpc.connect_rpc.error_code"
+ """
+ The [error codes](https://connect.build/docs/protocol/#error-codes) of the Connect request. Error codes are always string values.
+ """
+
+ EXCEPTION_ESCAPED = "exception.escaped"
+ """
+ SHOULD be set to true if the exception event is recorded at a point where it is known that the exception is escaping the scope of the span.
+ Note: An exception is considered to have escaped (or left) the scope of a span,
+ if that span is ended while the exception is still logically "in flight".
+ This may be actually "in flight" in some languages (e.g. if the exception
+ is passed to a Context manager's `__exit__` method in Python) but will
+ usually be caught at the point of recording the exception in most languages.
+
+ It is usually not possible to determine at the point where an exception is thrown
+ whether it will escape the scope of a span.
+ However, it is trivial to know that an exception
+ will escape, if one checks for an active exception just before ending the span,
+ as done in the [example above](#recording-an-exception).
+
+ It follows that an exception may still escape the scope of the span
+ even if the `exception.escaped` attribute was not set or set to false,
+ since the event might have been recorded at a time where it was not
+ clear whether the exception will escape.
+ """
+
+ URL_FRAGMENT = "url.fragment"
+ """
+ The [URI fragment](https://www.rfc-editor.org/rfc/rfc3986#section-3.5) component.
+ """
+
+ # Manually defined deprecated attributes
+
+ NET_PEER_IP = "net.peer.ip"
+ """
+ Deprecated, use the `client.socket.address` attribute.
+ """
+
+ NET_HOST_IP = "net.host.ip"
+ """
+ Deprecated, use the `server.socket.address` attribute.
+ """
+
+ HTTP_SERVER_NAME = "http.server_name"
+ """
+ Deprecated, use the `server.address` attribute.
+ """
+
+ HTTP_HOST = "http.host"
+ """
+ Deprecated, use the `server.address` and `server.port` attributes.
+ """
+
+ HTTP_RETRY_COUNT = "http.retry_count"
+ """
+ Deprecated, use the `http.resend_count` attribute.
+ """
+
+ HTTP_REQUEST_CONTENT_LENGTH_UNCOMPRESSED = (
+ "http.request_content_length_uncompressed"
+ )
+ """
+ Deprecated, use the `http.request.body.size` attribute.
+ """
+
+ HTTP_RESPONSE_CONTENT_LENGTH_UNCOMPRESSED = (
+ "http.response_content_length_uncompressed"
+ )
+ """
+ Deprecated, use the `http.response.body.size` attribute.
+ """
+
+ MESSAGING_DESTINATION = "messaging.destination"
+ """
+ Deprecated, use the `messaging.destination.name` attribute.
+ """
+
+ MESSAGING_DESTINATION_KIND = "messaging.destination_kind"
+ """
+ Deprecated.
+ """
+
+ MESSAGING_TEMP_DESTINATION = "messaging.temp_destination"
+ """
+ Deprecated. Use `messaging.destination.temporary` attribute.
+ """
+
+ MESSAGING_PROTOCOL = "messaging.protocol"
+ """
+ Deprecated. Use `network.protocol.name` attribute.
+ """
+
+ MESSAGING_PROTOCOL_VERSION = "messaging.protocol_version"
+ """
+ Deprecated. Use `network.protocol.version` attribute.
+ """
+
+ MESSAGING_URL = "messaging.url"
+ """
+ Deprecated. Use `server.address` and `server.port` attributes.
+ """
+
+ MESSAGING_CONVERSATION_ID = "messaging.conversation_id"
+ """
+ Deprecated. Use `messaging.message.conversation_id` attribute.
+ """
+
+ MESSAGING_KAFKA_PARTITION = "messaging.kafka.partition"
+ """
+ Deprecated. Use `messaging.kafka.destination.partition` attribute.
+ """
+
+ FAAS_EXECUTION = "faas.execution"
+ """
+ Deprecated. Use `faas.invocation_id` attribute.
+ """
+
+ HTTP_USER_AGENT = "http.user_agent"
+ """
+ Deprecated. Use `user_agent.original` attribute.
+ """
+
+ MESSAGING_RABBITMQ_ROUTING_KEY = "messaging.rabbitmq.routing_key"
+ """
+ Deprecated. Use `messaging.rabbitmq.destination.routing_key` attribute.
+ """
+
+ MESSAGING_KAFKA_TOMBSTONE = "messaging.kafka.tombstone"
+ """
+ Deprecated. Use `messaging.kafka.message.tombstone` attribute.
+ """
+
+ NET_APP_PROTOCOL_NAME = "net.app.protocol.name"
+ """
+ Deprecated. Use `network.protocol.name` attribute.
+ """
+
+ NET_APP_PROTOCOL_VERSION = "net.app.protocol.version"
+ """
+ Deprecated. Use `network.protocol.version` attribute.
+ """
+
+ HTTP_CLIENT_IP = "http.client_ip"
+ """
+ Deprecated. Use `client.address` attribute.
+ """
+
+ HTTP_FLAVOR = "http.flavor"
+ """
+ Deprecated. Use `network.protocol.name` and `network.protocol.version` attributes.
+ """
+
+ NET_HOST_CONNECTION_TYPE = "net.host.connection.type"
+ """
+ Deprecated. Use `network.connection.type` attribute.
+ """
+
+ NET_HOST_CONNECTION_SUBTYPE = "net.host.connection.subtype"
+ """
+ Deprecated. Use `network.connection.subtype` attribute.
+ """
+
+ NET_HOST_CARRIER_NAME = "net.host.carrier.name"
+ """
+ Deprecated. Use `network.carrier.name` attribute.
+ """
+
+ NET_HOST_CARRIER_MCC = "net.host.carrier.mcc"
+ """
+ Deprecated. Use `network.carrier.mcc` attribute.
+ """
+
+ NET_HOST_CARRIER_MNC = "net.host.carrier.mnc"
+ """
+ Deprecated. Use `network.carrier.mnc` attribute.
+ """
+
+ MESSAGING_CONSUMER_ID = "messaging.consumer_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+ MESSAGING_KAFKA_CLIENT_ID = "messaging.kafka.client_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+ MESSAGING_ROCKETMQ_CLIENT_ID = "messaging.rocketmq.client_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+
+@deprecated(
+ version="1.18.0",
+ reason="Removed from the specification in favor of `network.protocol.name` and `network.protocol.version` attributes",
+)
+class HttpFlavorValues(Enum):
+ HTTP_1_0 = "1.0"
+
+ HTTP_1_1 = "1.1"
+
+ HTTP_2_0 = "2.0"
+
+ HTTP_3_0 = "3.0"
+
+ SPDY = "SPDY"
+
+ QUIC = "QUIC"
+
+
+@deprecated(
+ version="1.18.0",
+ reason="Removed from the specification",
+)
+class MessagingDestinationKindValues(Enum):
+ QUEUE = "queue"
+ """A message sent to a queue."""
+
+ TOPIC = "topic"
+ """A message sent to a topic."""
+
+
+@deprecated(
+ version="1.21.0",
+ reason="Renamed to NetworkConnectionTypeValues",
+)
+class NetHostConnectionTypeValues(Enum):
+ WIFI = "wifi"
+ """wifi."""
+
+ WIRED = "wired"
+ """wired."""
+
+ CELL = "cell"
+ """cell."""
+
+ UNAVAILABLE = "unavailable"
+ """unavailable."""
+
+ UNKNOWN = "unknown"
+ """unknown."""
+
+
+@deprecated(
+ version="1.21.0",
+ reason="Renamed to NetworkConnectionSubtypeValues",
+)
+class NetHostConnectionSubtypeValues(Enum):
+ GPRS = "gprs"
+ """GPRS."""
+
+ EDGE = "edge"
+ """EDGE."""
+
+ UMTS = "umts"
+ """UMTS."""
+
+ CDMA = "cdma"
+ """CDMA."""
+
+ EVDO_0 = "evdo_0"
+ """EVDO Rel. 0."""
+
+ EVDO_A = "evdo_a"
+ """EVDO Rev. A."""
+
+ CDMA2000_1XRTT = "cdma2000_1xrtt"
+ """CDMA2000 1XRTT."""
+
+ HSDPA = "hsdpa"
+ """HSDPA."""
+
+ HSUPA = "hsupa"
+ """HSUPA."""
+
+ HSPA = "hspa"
+ """HSPA."""
+
+ IDEN = "iden"
+ """IDEN."""
+
+ EVDO_B = "evdo_b"
+ """EVDO Rev. B."""
+
+ LTE = "lte"
+ """LTE."""
+
+ EHRPD = "ehrpd"
+ """EHRPD."""
+
+ HSPAP = "hspap"
+ """HSPAP."""
+
+ GSM = "gsm"
+ """GSM."""
+
+ TD_SCDMA = "td_scdma"
+ """TD-SCDMA."""
+
+ IWLAN = "iwlan"
+ """IWLAN."""
+
+ NR = "nr"
+ """5G NR (New Radio)."""
+
+ NRNSA = "nrnsa"
+ """5G NRNSA (New Radio Non-Standalone)."""
+
+ LTE_CA = "lte_ca"
+ """LTE CA."""
+
+
+class NetTransportValues(Enum):
+ IP_TCP = "ip_tcp"
+ """ip_tcp."""
+
+ IP_UDP = "ip_udp"
+ """ip_udp."""
+
+ PIPE = "pipe"
+ """Named or anonymous pipe."""
+
+ INPROC = "inproc"
+ """In-process communication."""
+
+ OTHER = "other"
+ """Something else (non IP-based)."""
+
+
+class NetSockFamilyValues(Enum):
+ INET = "inet"
+ """IPv4 address."""
+
+ INET6 = "inet6"
+ """IPv6 address."""
+
+ UNIX = "unix"
+ """Unix domain socket path."""
+
+
+class HttpRequestMethodValues(Enum):
+ CONNECT = "CONNECT"
+ """CONNECT method."""
+
+ DELETE = "DELETE"
+ """DELETE method."""
+
+ GET = "GET"
+ """GET method."""
+
+ HEAD = "HEAD"
+ """HEAD method."""
+
+ OPTIONS = "OPTIONS"
+ """OPTIONS method."""
+
+ PATCH = "PATCH"
+ """PATCH method."""
+
+ POST = "POST"
+ """POST method."""
+
+ PUT = "PUT"
+ """PUT method."""
+
+ TRACE = "TRACE"
+ """TRACE method."""
+
+ OTHER = "_OTHER"
+ """Any HTTP method that the instrumentation has no prior knowledge of."""
+
+
+class EventDomainValues(Enum):
+ BROWSER = "browser"
+ """Events from browser apps."""
+
+ DEVICE = "device"
+ """Events from mobile apps."""
+
+ K8S = "k8s"
+ """Events from Kubernetes."""
+
+
+class LogIostreamValues(Enum):
+ STDOUT = "stdout"
+ """Logs from stdout stream."""
+
+ STDERR = "stderr"
+ """Events from stderr stream."""
+
+
+class TypeValues(Enum):
+ HEAP = "heap"
+ """Heap memory."""
+
+ NON_HEAP = "non_heap"
+ """Non-heap memory."""
+
+
+class OpentracingRefTypeValues(Enum):
+ CHILD_OF = "child_of"
+ """The parent Span depends on the child Span in some capacity."""
+
+ FOLLOWS_FROM = "follows_from"
+ """The parent Span does not depend in any way on the result of the child Span."""
+
+
+class DbSystemValues(Enum):
+ OTHER_SQL = "other_sql"
+ """Some other SQL database. Fallback only. See notes."""
+
+ MSSQL = "mssql"
+ """Microsoft SQL Server."""
+
+ MSSQLCOMPACT = "mssqlcompact"
+ """Microsoft SQL Server Compact."""
+
+ MYSQL = "mysql"
+ """MySQL."""
+
+ ORACLE = "oracle"
+ """Oracle Database."""
+
+ DB2 = "db2"
+ """IBM Db2."""
+
+ POSTGRESQL = "postgresql"
+ """PostgreSQL."""
+
+ REDSHIFT = "redshift"
+ """Amazon Redshift."""
+
+ HIVE = "hive"
+ """Apache Hive."""
+
+ CLOUDSCAPE = "cloudscape"
+ """Cloudscape."""
+
+ HSQLDB = "hsqldb"
+ """HyperSQL DataBase."""
+
+ PROGRESS = "progress"
+ """Progress Database."""
+
+ MAXDB = "maxdb"
+ """SAP MaxDB."""
+
+ HANADB = "hanadb"
+ """SAP HANA."""
+
+ INGRES = "ingres"
+ """Ingres."""
+
+ FIRSTSQL = "firstsql"
+ """FirstSQL."""
+
+ EDB = "edb"
+ """EnterpriseDB."""
+
+ CACHE = "cache"
+ """InterSystems Caché."""
+
+ ADABAS = "adabas"
+ """Adabas (Adaptable Database System)."""
+
+ FIREBIRD = "firebird"
+ """Firebird."""
+
+ DERBY = "derby"
+ """Apache Derby."""
+
+ FILEMAKER = "filemaker"
+ """FileMaker."""
+
+ INFORMIX = "informix"
+ """Informix."""
+
+ INSTANTDB = "instantdb"
+ """InstantDB."""
+
+ INTERBASE = "interbase"
+ """InterBase."""
+
+ MARIADB = "mariadb"
+ """MariaDB."""
+
+ NETEZZA = "netezza"
+ """Netezza."""
+
+ PERVASIVE = "pervasive"
+ """Pervasive PSQL."""
+
+ POINTBASE = "pointbase"
+ """PointBase."""
+
+ SQLITE = "sqlite"
+ """SQLite."""
+
+ SYBASE = "sybase"
+ """Sybase."""
+
+ TERADATA = "teradata"
+ """Teradata."""
+
+ VERTICA = "vertica"
+ """Vertica."""
+
+ H2 = "h2"
+ """H2."""
+
+ COLDFUSION = "coldfusion"
+ """ColdFusion IMQ."""
+
+ CASSANDRA = "cassandra"
+ """Apache Cassandra."""
+
+ HBASE = "hbase"
+ """Apache HBase."""
+
+ MONGODB = "mongodb"
+ """MongoDB."""
+
+ REDIS = "redis"
+ """Redis."""
+
+ COUCHBASE = "couchbase"
+ """Couchbase."""
+
+ COUCHDB = "couchdb"
+ """CouchDB."""
+
+ COSMOSDB = "cosmosdb"
+ """Microsoft Azure Cosmos DB."""
+
+ DYNAMODB = "dynamodb"
+ """Amazon DynamoDB."""
+
+ NEO4J = "neo4j"
+ """Neo4j."""
+
+ GEODE = "geode"
+ """Apache Geode."""
+
+ ELASTICSEARCH = "elasticsearch"
+ """Elasticsearch."""
+
+ MEMCACHED = "memcached"
+ """Memcached."""
+
+ COCKROACHDB = "cockroachdb"
+ """CockroachDB."""
+
+ OPENSEARCH = "opensearch"
+ """OpenSearch."""
+
+ CLICKHOUSE = "clickhouse"
+ """ClickHouse."""
+
+ SPANNER = "spanner"
+ """Cloud Spanner."""
+
+ TRINO = "trino"
+ """Trino."""
+
+
+class NetworkTransportValues(Enum):
+ TCP = "tcp"
+ """TCP."""
+
+ UDP = "udp"
+ """UDP."""
+
+ PIPE = "pipe"
+ """Named or anonymous pipe. See note below."""
+
+ UNIX = "unix"
+ """Unix domain socket."""
+
+
+class NetworkTypeValues(Enum):
+ IPV4 = "ipv4"
+ """IPv4."""
+
+ IPV6 = "ipv6"
+ """IPv6."""
+
+
+class DbCassandraConsistencyLevelValues(Enum):
+ ALL = "all"
+ """all."""
+
+ EACH_QUORUM = "each_quorum"
+ """each_quorum."""
+
+ QUORUM = "quorum"
+ """quorum."""
+
+ LOCAL_QUORUM = "local_quorum"
+ """local_quorum."""
+
+ ONE = "one"
+ """one."""
+
+ TWO = "two"
+ """two."""
+
+ THREE = "three"
+ """three."""
+
+ LOCAL_ONE = "local_one"
+ """local_one."""
+
+ ANY = "any"
+ """any."""
+
+ SERIAL = "serial"
+ """serial."""
+
+ LOCAL_SERIAL = "local_serial"
+ """local_serial."""
+
+
+class DbCosmosdbOperationTypeValues(Enum):
+ INVALID = "Invalid"
+ """invalid."""
+
+ CREATE = "Create"
+ """create."""
+
+ PATCH = "Patch"
+ """patch."""
+
+ READ = "Read"
+ """read."""
+
+ READ_FEED = "ReadFeed"
+ """read_feed."""
+
+ DELETE = "Delete"
+ """delete."""
+
+ REPLACE = "Replace"
+ """replace."""
+
+ EXECUTE = "Execute"
+ """execute."""
+
+ QUERY = "Query"
+ """query."""
+
+ HEAD = "Head"
+ """head."""
+
+ HEAD_FEED = "HeadFeed"
+ """head_feed."""
+
+ UPSERT = "Upsert"
+ """upsert."""
+
+ BATCH = "Batch"
+ """batch."""
+
+ QUERY_PLAN = "QueryPlan"
+ """query_plan."""
+
+ EXECUTE_JAVASCRIPT = "ExecuteJavaScript"
+ """execute_javascript."""
+
+
+class DbCosmosdbConnectionModeValues(Enum):
+ GATEWAY = "gateway"
+ """Gateway (HTTP) connections mode."""
+
+ DIRECT = "direct"
+ """Direct connection."""
+
+
+class OtelStatusCodeValues(Enum):
+ OK = "OK"
+ """The operation has been validated by an Application developer or Operator to have completed successfully."""
+
+ ERROR = "ERROR"
+ """The operation contains an error."""
+
+
+class FaasTriggerValues(Enum):
+ DATASOURCE = "datasource"
+ """A response to some data source operation such as a database or filesystem read/write."""
+
+ HTTP = "http"
+ """To provide an answer to an inbound HTTP request."""
+
+ PUBSUB = "pubsub"
+ """A function is set to be executed when messages are sent to a messaging system."""
+
+ TIMER = "timer"
+ """A function is scheduled to be executed regularly."""
+
+ OTHER = "other"
+ """If none of the others apply."""
+
+
+class FaasDocumentOperationValues(Enum):
+ INSERT = "insert"
+ """When a new object is created."""
+
+ EDIT = "edit"
+ """When an object is modified."""
+
+ DELETE = "delete"
+ """When an object is deleted."""
+
+
+class MessagingOperationValues(Enum):
+ PUBLISH = "publish"
+ """publish."""
+
+ RECEIVE = "receive"
+ """receive."""
+
+ PROCESS = "process"
+ """process."""
+
+
+class FaasInvokedProviderValues(Enum):
+ ALIBABA_CLOUD = "alibaba_cloud"
+ """Alibaba Cloud."""
+
+ AWS = "aws"
+ """Amazon Web Services."""
+
+ AZURE = "azure"
+ """Microsoft Azure."""
+
+ GCP = "gcp"
+ """Google Cloud Platform."""
+
+ TENCENT_CLOUD = "tencent_cloud"
+ """Tencent Cloud."""
+
+
+class NetworkConnectionTypeValues(Enum):
+ WIFI = "wifi"
+ """wifi."""
+
+ WIRED = "wired"
+ """wired."""
+
+ CELL = "cell"
+ """cell."""
+
+ UNAVAILABLE = "unavailable"
+ """unavailable."""
+
+ UNKNOWN = "unknown"
+ """unknown."""
+
+
+class NetworkConnectionSubtypeValues(Enum):
+ GPRS = "gprs"
+ """GPRS."""
+
+ EDGE = "edge"
+ """EDGE."""
+
+ UMTS = "umts"
+ """UMTS."""
+
+ CDMA = "cdma"
+ """CDMA."""
+
+ EVDO_0 = "evdo_0"
+ """EVDO Rel. 0."""
+
+ EVDO_A = "evdo_a"
+ """EVDO Rev. A."""
+
+ CDMA2000_1XRTT = "cdma2000_1xrtt"
+ """CDMA2000 1XRTT."""
+
+ HSDPA = "hsdpa"
+ """HSDPA."""
+
+ HSUPA = "hsupa"
+ """HSUPA."""
+
+ HSPA = "hspa"
+ """HSPA."""
+
+ IDEN = "iden"
+ """IDEN."""
+
+ EVDO_B = "evdo_b"
+ """EVDO Rev. B."""
+
+ LTE = "lte"
+ """LTE."""
+
+ EHRPD = "ehrpd"
+ """EHRPD."""
+
+ HSPAP = "hspap"
+ """HSPAP."""
+
+ GSM = "gsm"
+ """GSM."""
+
+ TD_SCDMA = "td_scdma"
+ """TD-SCDMA."""
+
+ IWLAN = "iwlan"
+ """IWLAN."""
+
+ NR = "nr"
+ """5G NR (New Radio)."""
+
+ NRNSA = "nrnsa"
+ """5G NRNSA (New Radio Non-Standalone)."""
+
+ LTE_CA = "lte_ca"
+ """LTE CA."""
+
+
+class RpcSystemValues(Enum):
+ GRPC = "grpc"
+ """gRPC."""
+
+ JAVA_RMI = "java_rmi"
+ """Java RMI."""
+
+ DOTNET_WCF = "dotnet_wcf"
+ """.NET WCF."""
+
+ APACHE_DUBBO = "apache_dubbo"
+ """Apache Dubbo."""
+
+ CONNECT_RPC = "connect_rpc"
+ """Connect RPC."""
+
+
+class GraphqlOperationTypeValues(Enum):
+ QUERY = "query"
+ """GraphQL query."""
+
+ MUTATION = "mutation"
+ """GraphQL mutation."""
+
+ SUBSCRIPTION = "subscription"
+ """GraphQL subscription."""
+
+
+class MessagingRocketmqMessageTypeValues(Enum):
+ NORMAL = "normal"
+ """Normal message."""
+
+ FIFO = "fifo"
+ """FIFO message."""
+
+ DELAY = "delay"
+ """Delay message."""
+
+ TRANSACTION = "transaction"
+ """Transaction message."""
+
+
+class MessagingRocketmqConsumptionModelValues(Enum):
+ CLUSTERING = "clustering"
+ """Clustering consumption model."""
+
+ BROADCASTING = "broadcasting"
+ """Broadcasting consumption model."""
+
+
+class RpcGrpcStatusCodeValues(Enum):
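+ # Unlike most enums in this module, member values here are ints, since
+ # gRPC status codes are numeric on the wire.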
+ OK = 0
+ """OK."""
+
+ CANCELLED = 1
+ """CANCELLED."""
+
+ UNKNOWN = 2
+ """UNKNOWN."""
+
+ INVALID_ARGUMENT = 3
+ """INVALID_ARGUMENT."""
+
+ DEADLINE_EXCEEDED = 4
+ """DEADLINE_EXCEEDED."""
+
+ NOT_FOUND = 5
+ """NOT_FOUND."""
+
+ ALREADY_EXISTS = 6
+ """ALREADY_EXISTS."""
+
+ PERMISSION_DENIED = 7
+ """PERMISSION_DENIED."""
+
+ RESOURCE_EXHAUSTED = 8
+ """RESOURCE_EXHAUSTED."""
+
+ FAILED_PRECONDITION = 9
+ """FAILED_PRECONDITION."""
+
+ ABORTED = 10
+ """ABORTED."""
+
+ OUT_OF_RANGE = 11
+ """OUT_OF_RANGE."""
+
+ UNIMPLEMENTED = 12
+ """UNIMPLEMENTED."""
+
+ INTERNAL = 13
+ """INTERNAL."""
+
+ UNAVAILABLE = 14
+ """UNAVAILABLE."""
+
+ DATA_LOSS = 15
+ """DATA_LOSS."""
+
+ UNAUTHENTICATED = 16
+ """UNAUTHENTICATED."""
+
+
+class MessageTypeValues(Enum):
+ SENT = "SENT"
+ """sent."""
+
+ RECEIVED = "RECEIVED"
+ """received."""
+
+
+class RpcConnectRpcErrorCodeValues(Enum):
+ CANCELLED = "cancelled"
+ """cancelled."""
+
+ UNKNOWN = "unknown"
+ """unknown."""
+
+ INVALID_ARGUMENT = "invalid_argument"
+ """invalid_argument."""
+
+ DEADLINE_EXCEEDED = "deadline_exceeded"
+ """deadline_exceeded."""
+
+ NOT_FOUND = "not_found"
+ """not_found."""
+
+ ALREADY_EXISTS = "already_exists"
+ """already_exists."""
+
+ PERMISSION_DENIED = "permission_denied"
+ """permission_denied."""
+
+ RESOURCE_EXHAUSTED = "resource_exhausted"
+ """resource_exhausted."""
+
+ FAILED_PRECONDITION = "failed_precondition"
+ """failed_precondition."""
+
+ ABORTED = "aborted"
+ """aborted."""
+
+ OUT_OF_RANGE = "out_of_range"
+ """out_of_range."""
+
+ UNIMPLEMENTED = "unimplemented"
+ """unimplemented."""
+
+ INTERNAL = "internal"
+ """internal."""
+
+ UNAVAILABLE = "unavailable"
+ """unavailable."""
+
+ DATA_LOSS = "data_loss"
+ """data_loss."""
+
+ UNAUTHENTICATED = "unauthenticated"
+ """unauthenticated."""
diff --git a/opentelemetry-semantic-conventions/src/opentelemetry/semconv/version.py b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/version.py
new file mode 100644
index 0000000000..ff896307c3
--- /dev/null
+++ b/opentelemetry-semantic-conventions/src/opentelemetry/semconv/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.44b0.dev"
diff --git a/opentelemetry-semantic-conventions/tests/__init__.py b/opentelemetry-semantic-conventions/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/opentelemetry-semantic-conventions/tests/test_semconv.py b/opentelemetry-semantic-conventions/tests/test_semconv.py
new file mode 100644
index 0000000000..a7362a8af7
--- /dev/null
+++ b/opentelemetry-semantic-conventions/tests/test_semconv.py
@@ -0,0 +1,24 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# type: ignore
+
+from importlib.util import find_spec
+from unittest import TestCase
+
+
+class TestSemanticConventions(TestCase):
+ def test_semantic_conventions(self):
+
+ if find_spec("opentelemetry.semconv") is None:
+ self.fail("opentelemetry-semantic-conventions not installed")
diff --git a/pip b/pip
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/propagator/opentelemetry-propagator-b3/LICENSE b/propagator/opentelemetry-propagator-b3/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/propagator/opentelemetry-propagator-b3/README.rst b/propagator/opentelemetry-propagator-b3/README.rst
new file mode 100644
index 0000000000..2ff3f9df11
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/README.rst
@@ -0,0 +1,23 @@
+OpenTelemetry B3 Propagator
+===========================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-propagator-b3.svg
+ :target: https://pypi.org/project/opentelemetry-propagator-b3/
+
+This library provides a propagator for the B3 format.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-propagator-b3
+
+
+References
+----------
+
+* `OpenTelemetry <https://opentelemetry.io/>`_
+* `B3 format <https://github.com/openzipkin/b3-propagation>`_
diff --git a/propagator/opentelemetry-propagator-b3/pyproject.toml b/propagator/opentelemetry-propagator-b3/pyproject.toml
new file mode 100644
index 0000000000..4a2bc08001
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-propagator-b3"
+dynamic = ["version"]
+description = "OpenTelemetry B3 Propagator"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "deprecated >= 1.2.6",
+ "opentelemetry-api ~= 1.3",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_propagator]
+b3 = "opentelemetry.propagators.b3:B3SingleFormat"
+b3multi = "opentelemetry.propagators.b3:B3MultiFormat"
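+# Assumption based on standard OpenTelemetry entry-point loading: these
+# names can be selected at runtime via the OTEL_PROPAGATORS environment
+# variable, e.g. OTEL_PROPAGATORS=b3multi.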
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/propagator/opentelemetry-propagator-b3"
+
+[tool.hatch.version]
+path = "src/opentelemetry/propagators/b3/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py
new file mode 100644
index 0000000000..1bbc3614f9
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py
@@ -0,0 +1,210 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import typing
+from re import compile as re_compile
+
+from deprecated import deprecated
+
+from opentelemetry import trace
+from opentelemetry.context import Context
+from opentelemetry.propagators.textmap import (
+ CarrierT,
+ Getter,
+ Setter,
+ TextMapPropagator,
+ default_getter,
+ default_setter,
+)
+from opentelemetry.trace import format_span_id, format_trace_id
+
+
+class B3MultiFormat(TextMapPropagator):
+ """Propagator for the B3 HTTP multi-header format.
+
+ See: https://github.com/openzipkin/b3-propagation
+ https://github.com/openzipkin/b3-propagation#multiple-headers
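+
+ A minimal usage sketch (assumption: any dict-like carrier works with the
+ default getter/setter; the dict stands in for HTTP headers):
+
+ propagator = B3MultiFormat()
+ carrier = {}
+ propagator.inject(carrier) # writes x-b3-* headers for the current span
+ ctx = propagator.extract(carrier) # rebuilds the remote span context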
+ """
+
+ SINGLE_HEADER_KEY = "b3"
+ TRACE_ID_KEY = "x-b3-traceid"
+ SPAN_ID_KEY = "x-b3-spanid"
+ SAMPLED_KEY = "x-b3-sampled"
+ FLAGS_KEY = "x-b3-flags"
+ _SAMPLE_PROPAGATE_VALUES = {"1", "True", "true", "d"}
+ _trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
+ _span_id_regex = re_compile(r"[\da-fA-F]{16}")
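+ # Trace ids may be 64-bit (16 hex chars) or 128-bit (32 hex chars);
+ # span ids are always 64-bit. extract() rejects anything else.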
+
+ def extract(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: Getter = default_getter,
+ ) -> Context:
+ if context is None:
+ context = Context()
+ trace_id = trace.INVALID_TRACE_ID
+ span_id = trace.INVALID_SPAN_ID
+ sampled = "0"
+ flags = None
+
+ single_header = _extract_first_element(
+ getter.get(carrier, self.SINGLE_HEADER_KEY)
+ )
+ if single_header:
+ # The b3 spec calls for the sampling state to be
+ # "deferred", which is unspecified. This concept does not
+ # translate to SpanContext, so we set it as recorded.
+ sampled = "1"
+ fields = single_header.split("-", 4)
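+ # Accepted single-header layouts, per the b3 spec:
+ # {sampled} | {trace_id}-{span_id}[-{sampled}[-{parent_span_id}]]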
+
+ if len(fields) == 1:
+ sampled = fields[0]
+ elif len(fields) == 2:
+ trace_id, span_id = fields
+ elif len(fields) == 3:
+ trace_id, span_id, sampled = fields
+ elif len(fields) == 4:
+ trace_id, span_id, sampled, _ = fields
+ else:
+ trace_id = (
+ _extract_first_element(getter.get(carrier, self.TRACE_ID_KEY))
+ or trace_id
+ )
+ span_id = (
+ _extract_first_element(getter.get(carrier, self.SPAN_ID_KEY))
+ or span_id
+ )
+ sampled = (
+ _extract_first_element(getter.get(carrier, self.SAMPLED_KEY))
+ or sampled
+ )
+ flags = (
+ _extract_first_element(getter.get(carrier, self.FLAGS_KEY))
+ or flags
+ )
+
+ if (
+ trace_id == trace.INVALID_TRACE_ID
+ or span_id == trace.INVALID_SPAN_ID
+ or self._trace_id_regex.fullmatch(trace_id) is None
+ or self._span_id_regex.fullmatch(span_id) is None
+ ):
+ return context
+
+ trace_id = int(trace_id, 16)
+ span_id = int(span_id, 16)
+ options = 0
+ # The b3 spec provides no defined behavior for both sample and
+ # flag values set. Since the setting of at least one implies
+ # the desire for some form of sampling, propagate if either
+ # header is set to allow.
+ if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
+ options |= trace.TraceFlags.SAMPLED
+
+ return trace.set_span_in_context(
+ trace.NonRecordingSpan(
+ trace.SpanContext(
+ # trace and span ids are encoded in hex, so must be converted
+ trace_id=trace_id,
+ span_id=span_id,
+ is_remote=True,
+ trace_flags=trace.TraceFlags(options),
+ trace_state=trace.TraceState(),
+ )
+ ),
+ context,
+ )
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ span = trace.get_current_span(context=context)
+
+ span_context = span.get_span_context()
+ if span_context == trace.INVALID_SPAN_CONTEXT:
+ return
+
+ sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0
+ setter.set(
+ carrier,
+ self.TRACE_ID_KEY,
+ format_trace_id(span_context.trace_id),
+ )
+ setter.set(
+ carrier, self.SPAN_ID_KEY, format_span_id(span_context.span_id)
+ )
+ setter.set(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ return {
+ self.TRACE_ID_KEY,
+ self.SPAN_ID_KEY,
+ self.SAMPLED_KEY,
+ }
+
+
+class B3SingleFormat(B3MultiFormat):
+ """Propagator for the B3 HTTP single-header format.
+
+ See: https://github.com/openzipkin/b3-propagation
+ https://github.com/openzipkin/b3-propagation#single-header
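+
+ Injects a single "b3" header whose value is
+ "{trace_id}-{span_id}-{sampled}", built from the current span
+ context by inject() below.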
+ """
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ span = trace.get_current_span(context=context)
+
+ span_context = span.get_span_context()
+ if span_context == trace.INVALID_SPAN_CONTEXT:
+ return
+
+ sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0
+
+ fields = [
+ format_trace_id(span_context.trace_id),
+ format_span_id(span_context.span_id),
+ "1" if sampled else "0",
+ ]
+
+ setter.set(carrier, self.SINGLE_HEADER_KEY, "-".join(fields))
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ return {self.SINGLE_HEADER_KEY}
+
+
+class B3Format(B3MultiFormat):
+ @deprecated(
+ version="1.2.0",
+ reason="B3Format is deprecated in favor of B3MultiFormat",
+ )
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+
+
+def _extract_first_element(
+ items: typing.Iterable[CarrierT],
+) -> typing.Optional[CarrierT]:
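+ """Return the first element of items, or None if items is None or empty."""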
+ if items is None:
+ return None
+ return next(iter(items), None)
diff --git a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/py.typed b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/version.py b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/propagator/opentelemetry-propagator-b3/tests/__init__.py b/propagator/opentelemetry-propagator-b3/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/propagator/opentelemetry-propagator-b3/tests/performance/benchmarks/trace/propagation/test_benchmark_b3_format.py b/propagator/opentelemetry-propagator-b3/tests/performance/benchmarks/trace/propagation/test_benchmark_b3_format.py
new file mode 100644
index 0000000000..23cbf773ed
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/tests/performance/benchmarks/trace/propagation/test_benchmark_b3_format.py
@@ -0,0 +1,41 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import opentelemetry.propagators.b3 as b3_format
+import opentelemetry.sdk.trace as trace
+
+FORMAT = b3_format.B3MultiFormat()
+
+
+def test_extract_single_header(benchmark):
+ benchmark(
+ FORMAT.extract,
+ {
+ FORMAT.SINGLE_HEADER_KEY: "bdb5b63237ed38aea578af665aa5aa60-c32d953d73ad2251-1"
+ },
+ )
+
+
+def test_inject_empty_context(benchmark):
+ tracer = trace.TracerProvider().get_tracer("sdk_tracer_provider")
+ with tracer.start_as_current_span("Root Span"):
+ with tracer.start_as_current_span("Child Span"):
+ benchmark(
+ FORMAT.inject,
+ {
+ FORMAT.TRACE_ID_KEY: "bdb5b63237ed38aea578af665aa5aa60",
+ FORMAT.SPAN_ID_KEY: "00000000000000000c32d953d73ad225",
+ FORMAT.SAMPLED_KEY: "1",
+ },
+ )
diff --git a/propagator/opentelemetry-propagator-b3/tests/test_b3_format.py b/propagator/opentelemetry-propagator-b3/tests/test_b3_format.py
new file mode 100644
index 0000000000..a4c51b90c1
--- /dev/null
+++ b/propagator/opentelemetry-propagator-b3/tests/test_b3_format.py
@@ -0,0 +1,478 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+from abc import abstractmethod
+from unittest.mock import Mock
+
+import opentelemetry.trace as trace_api
+from opentelemetry.context import Context, get_current
+from opentelemetry.propagators.b3 import ( # pylint: disable=no-name-in-module,import-error
+ B3MultiFormat,
+ B3SingleFormat,
+)
+from opentelemetry.propagators.textmap import DefaultGetter
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.trace import id_generator
+from opentelemetry.trace.propagation import _SPAN_KEY
+
+
+def get_child_parent_new_carrier(old_carrier, propagator):
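+ """Extract a parent context from old_carrier, build a child span under
+ it, and inject the child's context into a fresh carrier dict."""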
+
+ ctx = propagator.extract(old_carrier)
+ parent_span_context = trace_api.get_current_span(ctx).get_span_context()
+
+ parent = trace._Span("parent", parent_span_context)
+ child = trace._Span(
+ "child",
+ trace_api.SpanContext(
+ parent_span_context.trace_id,
+ id_generator.RandomIdGenerator().generate_span_id(),
+ is_remote=False,
+ trace_flags=parent_span_context.trace_flags,
+ trace_state=parent_span_context.trace_state,
+ ),
+ parent=parent.get_span_context(),
+ )
+
+ new_carrier = {}
+ ctx = trace_api.set_span_in_context(child)
+ propagator.inject(new_carrier, context=ctx)
+
+ return child, parent, new_carrier
+
+
+class AbstractB3FormatTestCase:
+ # pylint: disable=too-many-public-methods,no-member,invalid-name
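+ # Template pattern: concrete subclasses supply the propagator under test
+ # (get_propagator) plus format-specific trace-id lookup and sampling
+ # assertions; every test below then runs against both B3 formats.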
+
+ @classmethod
+ def setUpClass(cls):
+ generator = id_generator.RandomIdGenerator()
+ cls.serialized_trace_id = trace_api.format_trace_id(
+ generator.generate_trace_id()
+ )
+ cls.serialized_span_id = trace_api.format_span_id(
+ generator.generate_span_id()
+ )
+
+ def setUp(self) -> None:
+ tracer_provider = trace.TracerProvider()
+ patcher = unittest.mock.patch.object(
+ trace_api, "get_tracer_provider", return_value=tracer_provider
+ )
+ patcher.start()
+ self.addCleanup(patcher.stop)
+
+ @classmethod
+ def get_child_parent_new_carrier(cls, old_carrier):
+ return get_child_parent_new_carrier(old_carrier, cls.get_propagator())
+
+ @classmethod
+ @abstractmethod
+ def get_propagator(cls):
+ pass
+
+ @classmethod
+ @abstractmethod
+ def get_trace_id(cls, carrier):
+ pass
+
+ def assertSampled(self, carrier):
+ pass
+
+ def assertNotSampled(self, carrier):
+ pass
+
+ def test_extract_multi_header(self):
+ """Test the extraction of B3 headers."""
+ propagator = self.get_propagator()
+ context = {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.SAMPLED_KEY: "1",
+ }
+ child, parent, _ = self.get_child_parent_new_carrier(context)
+
+ self.assertEqual(
+ context[propagator.TRACE_ID_KEY],
+ trace_api.format_trace_id(child.context.trace_id),
+ )
+
+ self.assertEqual(
+ context[propagator.SPAN_ID_KEY],
+ trace_api.format_span_id(child.parent.span_id),
+ )
+ self.assertTrue(parent.context.is_remote)
+ self.assertTrue(parent.context.trace_flags.sampled)
+
+ def test_extract_single_header(self):
+ """Test the extraction from a single b3 header."""
+ propagator = self.get_propagator()
+ child, parent, _ = self.get_child_parent_new_carrier(
+ {
+ propagator.SINGLE_HEADER_KEY: f"{self.serialized_trace_id}-{self.serialized_span_id}"
+ }
+ )
+
+ self.assertEqual(
+ self.serialized_trace_id,
+ trace_api.format_trace_id(child.context.trace_id),
+ )
+ self.assertEqual(
+ self.serialized_span_id,
+ trace_api.format_span_id(child.parent.span_id),
+ )
+ self.assertTrue(parent.context.is_remote)
+ self.assertTrue(parent.context.trace_flags.sampled)
+
+ child, parent, _ = self.get_child_parent_new_carrier(
+ {
+ propagator.SINGLE_HEADER_KEY: f"{self.serialized_trace_id}-{self.serialized_span_id}-1"
+ }
+ )
+
+ self.assertEqual(
+ self.serialized_trace_id,
+ trace_api.format_trace_id(child.context.trace_id),
+ )
+ self.assertEqual(
+ self.serialized_span_id,
+ trace_api.format_span_id(child.parent.span_id),
+ )
+
+ self.assertTrue(parent.context.is_remote)
+ self.assertTrue(parent.context.trace_flags.sampled)
+
+ def test_extract_header_precedence(self):
+ """A single b3 header should take precedence over multiple
+ headers.
+ """
+ propagator = self.get_propagator()
+ single_header_trace_id = self.serialized_trace_id[:-3] + "123"
+
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.SINGLE_HEADER_KEY: f"{single_header_trace_id}-{self.serialized_span_id}",
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.SAMPLED_KEY: "1",
+ }
+ )
+
+ self.assertEqual(
+ self.get_trace_id(new_carrier), single_header_trace_id
+ )
+
+ def test_enabled_sampling(self):
+ """Test b3 sample key variants that turn on sampling."""
+ propagator = self.get_propagator()
+ for variant in ["1", "True", "true", "d"]:
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.SAMPLED_KEY: variant,
+ }
+ )
+ self.assertSampled(new_carrier)
+
+ def test_disabled_sampling(self):
+ """Test b3 sample key variants that turn off sampling."""
+ propagator = self.get_propagator()
+ for variant in ["0", "False", "false", None]:
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.SAMPLED_KEY: variant,
+ }
+ )
+ self.assertNotSampled(new_carrier)
+
+ def test_flags(self):
+ """x-b3-flags set to "1" should result in propagation."""
+ propagator = self.get_propagator()
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ )
+
+ self.assertSampled(new_carrier)
+
+ def test_flags_and_sampling(self):
+ """Propagate if b3 flags and sampling are set."""
+ propagator = self.get_propagator()
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.SAMPLED_KEY: "1",
+ propagator.FLAGS_KEY: "1",
+ }
+ )
+
+ self.assertSampled(new_carrier)
+
+ def test_derived_ctx_is_returned_for_success(self):
+ """Ensure returned context is derived from the given context."""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+ new_ctx = propagator.extract(
+ {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ },
+ old_ctx,
+ )
+ self.assertIn(_SPAN_KEY, new_ctx)
+ for key, value in old_ctx.items(): # pylint:disable=no-member
+ self.assertIn(key, new_ctx)
+ # pylint:disable=unsubscriptable-object
+ self.assertEqual(new_ctx[key], value)
+
+ def test_derived_ctx_is_returned_for_failure(self):
+ """Ensure returned context is derived from the given context."""
+ old_ctx = Context({"k2": "v2"})
+ new_ctx = self.get_propagator().extract({}, old_ctx)
+ self.assertNotIn(_SPAN_KEY, new_ctx)
+ for key, value in old_ctx.items(): # pylint:disable=no-member
+ self.assertIn(key, new_ctx)
+ # pylint:disable=unsubscriptable-object
+ self.assertEqual(new_ctx[key], value)
+
+ def test_64bit_trace_id(self):
+ """64 bit trace ids should be padded to 128 bit trace ids."""
+ propagator = self.get_propagator()
+ trace_id_64_bit = self.serialized_trace_id[:16]
+
+ _, _, new_carrier = self.get_child_parent_new_carrier(
+ {
+ propagator.TRACE_ID_KEY: trace_id_64_bit,
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ },
+ )
+
+ self.assertEqual(
+ self.get_trace_id(new_carrier), "0" * 16 + trace_id_64_bit
+ )
+
+ def test_extract_invalid_single_header_to_explicit_ctx(self):
+ """Given unparsable header, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+
+ carrier = {propagator.SINGLE_HEADER_KEY: "0-1-2-3-4-5-6-7"}
+ new_ctx = propagator.extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_invalid_single_header_to_implicit_ctx(self):
+ propagator = self.get_propagator()
+ carrier = {propagator.SINGLE_HEADER_KEY: "0-1-2-3-4-5-6-7"}
+ new_ctx = propagator.extract(carrier)
+
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_extract_missing_trace_id_to_explicit_ctx(self):
+ """Given no trace ID, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+
+ carrier = {
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_missing_trace_id_to_implicit_ctx(self):
+ propagator = self.get_propagator()
+ carrier = {
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier)
+
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_extract_invalid_trace_id_to_explicit_ctx(self):
+ """Given invalid trace ID, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+
+ carrier = {
+ propagator.TRACE_ID_KEY: "abc123",
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_invalid_trace_id_to_implicit_ctx(self):
+ propagator = self.get_propagator()
+ carrier = {
+ propagator.TRACE_ID_KEY: "abc123",
+ propagator.SPAN_ID_KEY: self.serialized_span_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier)
+
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_extract_invalid_span_id_to_explicit_ctx(self):
+ """Given invalid span ID, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+
+ carrier = {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: "abc123",
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_invalid_span_id_to_implicit_ctx(self):
+ propagator = self.get_propagator()
+ carrier = {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.SPAN_ID_KEY: "abc123",
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier)
+
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_extract_missing_span_id_to_explicit_ctx(self):
+ """Given no span ID, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+ propagator = self.get_propagator()
+
+ carrier = {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_missing_span_id_to_implicit_ctx(self):
+ propagator = self.get_propagator()
+ carrier = {
+ propagator.TRACE_ID_KEY: self.serialized_trace_id,
+ propagator.FLAGS_KEY: "1",
+ }
+ new_ctx = propagator.extract(carrier)
+
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_extract_empty_carrier_to_explicit_ctx(self):
+ """Given no headers at all, do not modify context"""
+ old_ctx = Context({"k1": "v1"})
+
+ carrier = {}
+ new_ctx = self.get_propagator().extract(carrier, old_ctx)
+
+ self.assertDictEqual(new_ctx, old_ctx)
+
+ def test_extract_empty_carrier_to_implicit_ctx(self):
+ new_ctx = self.get_propagator().extract({})
+ self.assertDictEqual(Context(), new_ctx)
+
+ def test_inject_empty_context(self):
+ """If the current context has no span, don't add headers"""
+ new_carrier = {}
+ self.get_propagator().inject(new_carrier, get_current())
+ assert len(new_carrier) == 0
+
+ def test_default_span(self):
+ """Make sure propagator does not crash when working with NonRecordingSpan"""
+
+ class CarrierGetter(DefaultGetter):
+ def get(self, carrier, key):
+ return carrier.get(key, None)
+
+ propagator = self.get_propagator()
+ ctx = propagator.extract({}, getter=CarrierGetter())
+ propagator.inject({}, context=ctx)
+
+ def test_fields(self):
+ """Make sure the fields attribute returns the fields used in inject"""
+
+ propagator = self.get_propagator()
+ tracer = trace.TracerProvider().get_tracer("sdk_tracer_provider")
+
+ mock_setter = Mock()
+
+ with tracer.start_as_current_span("parent"):
+ with tracer.start_as_current_span("child"):
+ propagator.inject({}, setter=mock_setter)
+
+ inject_fields = set()
+
+ for call in mock_setter.mock_calls:
+ inject_fields.add(call[1][1])
+
+ self.assertEqual(propagator.fields, inject_fields)
+
+ def test_extract_none_context(self):
+ """Given no trace ID, do not modify context"""
+ old_ctx = None
+
+ carrier = {}
+ new_ctx = self.get_propagator().extract(carrier, old_ctx)
+ self.assertDictEqual(Context(), new_ctx)
+
+
+class TestB3MultiFormat(AbstractB3FormatTestCase, unittest.TestCase):
+ @classmethod
+ def get_propagator(cls):
+ return B3MultiFormat()
+
+ @classmethod
+ def get_trace_id(cls, carrier):
+ return carrier[cls.get_propagator().TRACE_ID_KEY]
+
+ def assertSampled(self, carrier):
+ self.assertEqual(carrier[self.get_propagator().SAMPLED_KEY], "1")
+
+ def assertNotSampled(self, carrier):
+ self.assertEqual(carrier[self.get_propagator().SAMPLED_KEY], "0")
+
+
+class TestB3SingleFormat(AbstractB3FormatTestCase, unittest.TestCase):
+ @classmethod
+ def get_propagator(cls):
+ return B3SingleFormat()
+
+ @classmethod
+ def get_trace_id(cls, carrier):
+ return carrier[cls.get_propagator().SINGLE_HEADER_KEY].split("-")[0]
+
+ def assertSampled(self, carrier):
+ self.assertEqual(
+ carrier[self.get_propagator().SINGLE_HEADER_KEY].split("-")[2], "1"
+ )
+
+ def assertNotSampled(self, carrier):
+ self.assertEqual(
+ carrier[self.get_propagator().SINGLE_HEADER_KEY].split("-")[2], "0"
+ )
diff --git a/propagator/opentelemetry-propagator-jaeger/LICENSE b/propagator/opentelemetry-propagator-jaeger/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/propagator/opentelemetry-propagator-jaeger/README.rst b/propagator/opentelemetry-propagator-jaeger/README.rst
new file mode 100644
index 0000000000..970cb189f3
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/README.rst
@@ -0,0 +1,23 @@
+OpenTelemetry Jaeger Propagator
+===============================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-propagator-jaeger.svg
+ :target: https://pypi.org/project/opentelemetry-propagator-jaeger/
+
+This library provides a propagator for the Jaeger format.
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-propagator-jaeger
+
+
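+Usage
+-----
+
+A minimal usage sketch (illustrative only), registering this propagator as the
+global propagator via the public ``opentelemetry.propagate`` API::
+
+    from opentelemetry.propagate import set_global_textmap
+    from opentelemetry.propagators.jaeger import JaegerPropagator
+
+    set_global_textmap(JaegerPropagator())
+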
+References
+----------
+
+* `OpenTelemetry <https://opentelemetry.io/>`_
+* `Jaeger format <https://www.jaegertracing.io/docs/1.19/client-libraries/#propagation-format>`_
diff --git a/propagator/opentelemetry-propagator-jaeger/pyproject.toml b/propagator/opentelemetry-propagator-jaeger/pyproject.toml
new file mode 100644
index 0000000000..519fc6e48d
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/pyproject.toml
@@ -0,0 +1,51 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-propagator-jaeger"
+dynamic = ["version"]
+description = "OpenTelemetry Jaeger Propagator"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 5 - Production/Stable",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-api ~= 1.3",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.entry-points.opentelemetry_propagator]
+jaeger = "opentelemetry.propagators.jaeger:JaegerPropagator"
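+# Note: this entry point lets the OpenTelemetry API discover the propagator by
+# name, e.g. when the OTEL_PROPAGATORS=jaeger environment variable is set.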
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/propagator/opentelemetry-propagator-jaeger"
+
+[tool.hatch.version]
+path = "src/opentelemetry/propagators/jaeger/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py
new file mode 100644
index 0000000000..201d8bf3d3
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py
@@ -0,0 +1,173 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import typing
+import urllib.parse
+
+from opentelemetry import baggage, trace
+from opentelemetry.context import Context
+from opentelemetry.propagators.textmap import (
+ CarrierT,
+ Getter,
+ Setter,
+ TextMapPropagator,
+ default_getter,
+ default_setter,
+)
+from opentelemetry.trace import format_span_id, format_trace_id
+
+
+class JaegerPropagator(TextMapPropagator):
+ """Propagator for the Jaeger format.
+
+ See: https://www.jaegertracing.io/docs/1.19/client-libraries/#propagation-format
+ """
+
+ TRACE_ID_KEY = "uber-trace-id"
+ BAGGAGE_PREFIX = "uberctx-"
+ DEBUG_FLAG = 0x02
+
+ def extract(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: Getter = default_getter,
+ ) -> Context:
+
+ if context is None:
+ context = Context()
+ header = getter.get(carrier, self.TRACE_ID_KEY)
+ if not header:
+ return context
+
+ context = self._extract_baggage(getter, carrier, context)
+
+ trace_id, span_id, flags = _parse_trace_id_header(header)
+ if (
+ trace_id == trace.INVALID_TRACE_ID
+ or span_id == trace.INVALID_SPAN_ID
+ ):
+ return context
+
+ span = trace.NonRecordingSpan(
+ trace.SpanContext(
+ trace_id=trace_id,
+ span_id=span_id,
+ is_remote=True,
+ trace_flags=trace.TraceFlags(flags & trace.TraceFlags.SAMPLED),
+ )
+ )
+ return trace.set_span_in_context(span, context)
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ span = trace.get_current_span(context=context)
+ span_context = span.get_span_context()
+ if span_context == trace.INVALID_SPAN_CONTEXT:
+ return
+
+ # Non-recording spans do not have a parent
+ span_parent_id = (
+ span.parent.span_id if span.is_recording() and span.parent else 0
+ )
+ trace_flags = span_context.trace_flags
+ if trace_flags.sampled:
+ trace_flags |= self.DEBUG_FLAG
+
+ # set span identity
+ setter.set(
+ carrier,
+ self.TRACE_ID_KEY,
+ _format_uber_trace_id(
+ span_context.trace_id,
+ span_context.span_id,
+ span_parent_id,
+ trace_flags,
+ ),
+ )
+
+ # set span baggage, if any
+ baggage_entries = baggage.get_all(context=context)
+ if not baggage_entries:
+ return
+ for key, value in baggage_entries.items():
+ baggage_key = self.BAGGAGE_PREFIX + key
+ setter.set(carrier, baggage_key, urllib.parse.quote(str(value)))
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ return {self.TRACE_ID_KEY}
+
+ def _extract_baggage(self, getter, carrier, context):
+ baggage_keys = [
+ key
+ for key in getter.keys(carrier)
+ if key.startswith(self.BAGGAGE_PREFIX)
+ ]
+ for key in baggage_keys:
+ value = _extract_first_element(getter.get(carrier, key))
+ context = baggage.set_baggage(
+ key.replace(self.BAGGAGE_PREFIX, ""),
+ urllib.parse.unquote(value).strip(),
+ context=context,
+ )
+ return context
+
+
+def _format_uber_trace_id(trace_id, span_id, parent_span_id, flags):
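+    # Produces the Jaeger "uber-trace-id" value. For illustration:
+    # _format_uber_trace_id(0xdeadbeef, 0xbeef, 0x0, 0x1) returns
+    # "000000000000000000000000deadbeef:000000000000beef:0000000000000000:01"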
+ return f"{format_trace_id(trace_id)}:{format_span_id(span_id)}:{format_span_id(parent_span_id)}:{flags:02x}"
+
+
+def _extract_first_element(
+ items: typing.Iterable[CarrierT],
+) -> typing.Optional[CarrierT]:
+ if items is None:
+ return None
+ return next(iter(items), None)
+
+
+def _parse_trace_id_header(
+ items: typing.Iterable[CarrierT],
+) -> typing.Tuple[int, int, int]:
+ invalid_header_result = (trace.INVALID_TRACE_ID, trace.INVALID_SPAN_ID, 0)
+
+ header = _extract_first_element(items)
+ if header is None:
+ return invalid_header_result
+
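+    # A well-formed header has exactly four colon-separated fields:
+    # {trace-id}:{span-id}:{parent-span-id}:{flags}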
+ fields = header.split(":")
+ if len(fields) != 4:
+ return invalid_header_result
+
+ trace_id_str, span_id_str, _parent_id_str, flags_str = fields
+ flags = _int_from_hex_str(flags_str, None)
+ if flags is None:
+ return invalid_header_result
+
+ trace_id = _int_from_hex_str(trace_id_str, trace.INVALID_TRACE_ID)
+ span_id = _int_from_hex_str(span_id_str, trace.INVALID_SPAN_ID)
+ return trace_id, span_id, flags
+
+
+def _int_from_hex_str(
+ identifier: str, default: typing.Optional[int]
+) -> typing.Optional[int]:
+ try:
+ return int(identifier, 16)
+ except ValueError:
+ return default
diff --git a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/py.typed b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/version.py b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/version.py
new file mode 100644
index 0000000000..60f04a0743
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "1.23.0.dev"
diff --git a/propagator/opentelemetry-propagator-jaeger/tests/__init__.py b/propagator/opentelemetry-propagator-jaeger/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/propagator/opentelemetry-propagator-jaeger/tests/test_jaeger_propagator.py b/propagator/opentelemetry-propagator-jaeger/tests/test_jaeger_propagator.py
new file mode 100644
index 0000000000..a836cdf403
--- /dev/null
+++ b/propagator/opentelemetry-propagator-jaeger/tests/test_jaeger_propagator.py
@@ -0,0 +1,240 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from unittest.mock import Mock
+
+import opentelemetry.trace as trace_api
+from opentelemetry import baggage
+from opentelemetry.baggage import _BAGGAGE_KEY
+from opentelemetry.context import Context
+from opentelemetry.propagators import ( # pylint: disable=no-name-in-module
+ jaeger,
+)
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.trace import id_generator
+from opentelemetry.test import TestCase
+
+FORMAT = jaeger.JaegerPropagator()
+
+
+def get_context_new_carrier(old_carrier, carrier_baggage=None):
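+    # Round-trip helper: extract a context from old_carrier, optionally attach
+    # baggage, create a child span under the extracted parent, and inject the
+    # result into a fresh carrier.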
+
+ ctx = FORMAT.extract(old_carrier)
+ if carrier_baggage:
+ for key, value in carrier_baggage.items():
+ ctx = baggage.set_baggage(key, value, ctx)
+ parent_span_context = trace_api.get_current_span(ctx).get_span_context()
+
+ parent = trace._Span("parent", parent_span_context)
+ child = trace._Span(
+ "child",
+ trace_api.SpanContext(
+ parent_span_context.trace_id,
+ id_generator.RandomIdGenerator().generate_span_id(),
+ is_remote=False,
+ trace_flags=parent_span_context.trace_flags,
+ trace_state=parent_span_context.trace_state,
+ ),
+ parent=parent.get_span_context(),
+ )
+
+ new_carrier = {}
+ ctx = trace_api.set_span_in_context(child, ctx)
+
+ FORMAT.inject(new_carrier, context=ctx)
+
+ return ctx, new_carrier
+
+
+class TestJaegerPropagator(TestCase):
+ @classmethod
+ def setUpClass(cls):
+ generator = id_generator.RandomIdGenerator()
+ cls.trace_id = generator.generate_trace_id()
+ cls.span_id = generator.generate_span_id()
+ cls.parent_span_id = generator.generate_span_id()
+ cls.serialized_uber_trace_id = (
+ jaeger._format_uber_trace_id( # pylint: disable=protected-access
+ cls.trace_id, cls.span_id, cls.parent_span_id, 11
+ )
+ )
+
+ def test_extract_valid_span(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ ctx = FORMAT.extract(old_carrier)
+ span_context = trace_api.get_current_span(ctx).get_span_context()
+ self.assertEqual(span_context.trace_id, self.trace_id)
+ self.assertEqual(span_context.span_id, self.span_id)
+
+ def test_missing_carrier(self):
+ old_carrier = {}
+ ctx = FORMAT.extract(old_carrier)
+ span_context = trace_api.get_current_span(ctx).get_span_context()
+ self.assertEqual(span_context.trace_id, trace_api.INVALID_TRACE_ID)
+ self.assertEqual(span_context.span_id, trace_api.INVALID_SPAN_ID)
+
+ def test_trace_id(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ _, new_carrier = get_context_new_carrier(old_carrier)
+ self.assertEqual(
+ self.serialized_uber_trace_id.split(":")[0],
+ new_carrier[FORMAT.TRACE_ID_KEY].split(":")[0],
+ )
+
+ def test_parent_span_id(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ _, new_carrier = get_context_new_carrier(old_carrier)
+ span_id = self.serialized_uber_trace_id.split(":")[1]
+ parent_span_id = new_carrier[FORMAT.TRACE_ID_KEY].split(":")[2]
+ self.assertEqual(span_id, parent_span_id)
+
+ def test_sampled_flag_set(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ _, new_carrier = get_context_new_carrier(old_carrier)
+ sample_flag_value = (
+ int(new_carrier[FORMAT.TRACE_ID_KEY].split(":")[3]) & 0x01
+ )
+ self.assertEqual(1, sample_flag_value)
+
+ def test_debug_flag_set(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ _, new_carrier = get_context_new_carrier(old_carrier)
+ debug_flag_value = (
+ int(new_carrier[FORMAT.TRACE_ID_KEY].split(":")[3])
+ & FORMAT.DEBUG_FLAG
+ )
+ self.assertEqual(FORMAT.DEBUG_FLAG, debug_flag_value)
+
+ def test_sample_debug_flags_unset(self):
+ uber_trace_id = (
+ jaeger._format_uber_trace_id( # pylint: disable=protected-access
+ self.trace_id, self.span_id, self.parent_span_id, 0
+ )
+ )
+ old_carrier = {FORMAT.TRACE_ID_KEY: uber_trace_id}
+ _, new_carrier = get_context_new_carrier(old_carrier)
+ flags = int(new_carrier[FORMAT.TRACE_ID_KEY].split(":")[3])
+ sample_flag_value = flags & 0x01
+ debug_flag_value = flags & FORMAT.DEBUG_FLAG
+ self.assertEqual(0, sample_flag_value)
+ self.assertEqual(0, debug_flag_value)
+
+ def test_baggage(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ input_baggage = {"key1": "value1"}
+ _, new_carrier = get_context_new_carrier(old_carrier, input_baggage)
+ ctx = FORMAT.extract(new_carrier)
+ self.assertDictEqual(input_baggage, ctx[_BAGGAGE_KEY])
+
+ def test_non_string_baggage(self):
+ old_carrier = {FORMAT.TRACE_ID_KEY: self.serialized_uber_trace_id}
+ input_baggage = {"key1": 1, "key2": True}
+ formatted_baggage = {"key1": "1", "key2": "True"}
+ _, new_carrier = get_context_new_carrier(old_carrier, input_baggage)
+ ctx = FORMAT.extract(new_carrier)
+ self.assertDictEqual(formatted_baggage, ctx[_BAGGAGE_KEY])
+
+ def test_extract_invalid_uber_trace_id(self):
+ old_carrier = {
+ "uber-trace-id": "000000000000000000000000deadbeef:00000000deadbef0:00",
+ "uberctx-key1": "value1",
+ }
+ formatted_baggage = {"key1": "value1"}
+ context = FORMAT.extract(old_carrier)
+ span_context = trace_api.get_current_span(context).get_span_context()
+ self.assertEqual(span_context.span_id, trace_api.INVALID_SPAN_ID)
+ self.assertDictEqual(formatted_baggage, context[_BAGGAGE_KEY])
+
+ def test_extract_invalid_trace_id(self):
+ old_carrier = {
+ "uber-trace-id": "00000000000000000000000000000000:00000000deadbef0:00:00",
+ "uberctx-key1": "value1",
+ }
+ formatted_baggage = {"key1": "value1"}
+ context = FORMAT.extract(old_carrier)
+ span_context = trace_api.get_current_span(context).get_span_context()
+ self.assertEqual(span_context.trace_id, trace_api.INVALID_TRACE_ID)
+ self.assertDictEqual(formatted_baggage, context[_BAGGAGE_KEY])
+
+ def test_extract_invalid_span_id(self):
+ old_carrier = {
+ "uber-trace-id": "000000000000000000000000deadbeef:0000000000000000:00:00",
+ "uberctx-key1": "value1",
+ }
+ formatted_baggage = {"key1": "value1"}
+ context = FORMAT.extract(old_carrier)
+ span_context = trace_api.get_current_span(context).get_span_context()
+ self.assertEqual(span_context.span_id, trace_api.INVALID_SPAN_ID)
+ self.assertDictEqual(formatted_baggage, context[_BAGGAGE_KEY])
+
+ def test_fields(self):
+ tracer = trace.TracerProvider().get_tracer("sdk_tracer_provider")
+ mock_setter = Mock()
+ with tracer.start_as_current_span("parent"):
+ with tracer.start_as_current_span("child"):
+ FORMAT.inject({}, setter=mock_setter)
+ inject_fields = set()
+ for call in mock_setter.mock_calls:
+ inject_fields.add(call[1][1])
+ self.assertEqual(FORMAT.fields, inject_fields)
+
+ def test_extract_no_trace_id_to_explicit_ctx(self):
+ carrier = {}
+ orig_ctx = Context({"k1": "v1"})
+
+ ctx = FORMAT.extract(carrier, orig_ctx)
+ self.assertDictEqual(orig_ctx, ctx)
+
+ def test_extract_no_trace_id_to_implicit_ctx(self):
+ carrier = {}
+
+ ctx = FORMAT.extract(carrier)
+ self.assertDictEqual(Context(), ctx)
+
+ def test_extract_invalid_uber_trace_id_header_to_explicit_ctx(self):
+ trace_id_headers = [
+ "000000000000000000000000deadbeef:00000000deadbef0:00",
+ "00000000000000000000000000000000:00000000deadbef0:00:00",
+ "000000000000000000000000deadbeef:0000000000000000:00:00",
+ "000000000000000000000000deadbeef:0000000000000000:00:xyz",
+ ]
+ for trace_id_header in trace_id_headers:
+ with self.subTest(trace_id_header=trace_id_header):
+ carrier = {"uber-trace-id": trace_id_header}
+ orig_ctx = Context({"k1": "v1"})
+
+ ctx = FORMAT.extract(carrier, orig_ctx)
+ self.assertDictEqual(orig_ctx, ctx)
+
+ def test_extract_invalid_uber_trace_id_header_to_implicit_ctx(self):
+ trace_id_headers = [
+ "000000000000000000000000deadbeef:00000000deadbef0:00",
+ "00000000000000000000000000000000:00000000deadbef0:00:00",
+ "000000000000000000000000deadbeef:0000000000000000:00:00",
+ "000000000000000000000000deadbeef:0000000000000000:00:xyz",
+ ]
+ for trace_id_header in trace_id_headers:
+ with self.subTest(trace_id_header=trace_id_header):
+ carrier = {"uber-trace-id": trace_id_header}
+
+ ctx = FORMAT.extract(carrier)
+ self.assertDictEqual(Context(), ctx)
+
+ def test_non_recording_span_does_not_crash(self):
+ """Make sure propagator does not crash when working with NonRecordingSpan"""
+ mock_setter = Mock()
+ span = trace_api.NonRecordingSpan(trace_api.SpanContext(1, 1, True))
+ with trace_api.use_span(span, end_on_exit=True):
+ with self.assertNotRaises(Exception):
+ FORMAT.inject({}, setter=mock_setter)
diff --git a/pyproject.toml b/pyproject.toml
index c1a64c5240..02437cb595 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,6 +2,7 @@
line-length = 79
exclude = '''
(
+<<<<<<< HEAD
\.git
| \.tox
| venv
@@ -9,3 +10,21 @@ exclude = '''
| dist
)
'''
+=======
+ /( # generated files
+ .tox|
+ venv|
+ venv.*|
+ .venv.*|
+ target.*|
+ .*/build/lib/.*|
+ exporter/opentelemetry-exporter-zipkin-proto-http/src/opentelemetry/exporter/zipkin/proto/http/v2/gen|
+ opentelemetry-proto/src/opentelemetry/proto/.*/.*|
+ scripts
+ )/
+)
+'''
+[tool.pytest.ini_options]
+addopts = "-rs -v"
+log_cli = true
+>>>>>>> upstream/main
diff --git a/python b/python
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/rationale.md b/rationale.md
new file mode 100644
index 0000000000..9c10727fd5
--- /dev/null
+++ b/rationale.md
@@ -0,0 +1,68 @@
+# OpenTelemetry Rationale
+
+When creating a library, designs and decisions are often made that get lost over time. This document collects information on those design decisions to answer common questions that may come up when you explore the SDK.
+
+## Versioning and Releasing
+
+This document describes the versioning and stability policy of components shipped from this repository, as per the [OpenTelemetry versioning and stability
+specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/versioning-and-stability.md).
+
+The OpenTelemetry implementations, the OpenTelemetry Spec itself, and this repo follow [SemVer V2](https://semver.org/spec/v2.0.0.html) guidelines.
+This means that, for any stable packages released from this repo, all public APIs will remain [backward
+compatible](https://www.python.org/dev/peps/pep-0387/),
+unless a major version bump occurs. This applies to the API and SDK, as well as the exporters and instrumentation shipped from this repo.
+
+For example, users can take a dependency on the 1.0.0 version of any package, with the assurance that all future releases until 2.0.0 will be backward compatible.
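+
+As an illustrative sketch (hypothetical project, not a real release), an application could express that dependency in its `pyproject.toml` the same way the packages in this repo do:
+
+```toml
+[project]
+dependencies = [
+    "opentelemetry-api ~= 1.0",  # any backward-compatible 1.x release
+    "opentelemetry-sdk ~= 1.0",
+]
+```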
+
+## Goals
+
+### API Stability
+
+Once the API for a given signal (spans, logs, metrics, baggage) has been officially released, that API module will function with any SDK that has the same major version, and equal or greater minor or patch version.
+
+For example, libraries that are instrumented with `opentelemetry-api 1.0.1` will function with SDK library `opentelemetry-sdk 1.11.33` or `opentelemetry-sdk 1.3.4`.
+
+### SDK Stability
+
+Public portions of the SDK (constructors, configuration, end-user interfaces) must remain backward compatible. Internal interfaces are allowed to break.
+
+## Core components
+
+Core components are the set of components required by the specification. This includes the API, the SDK, the required propagators (B3 and Jaeger), and the required exporters: OTLP and Zipkin.
+
+## Mature or stable Signals
+
+Modules for mature (i.e. released) signals ship in the latest versions of the corresponding core component packages. Their version numbers carry no suffix, indicating that they are stable. For example, the package `opentelemetry-api` v1.x.y will be considered stable.
+
+## Pre-releases
+
+Pre-release packages are denoted by appending identifiers such as -Alpha, -Beta, or -RC. There are no API guarantees in pre-releases: each release can contain breaking changes, and functionality may be removed. In general, an RC pre-release is more stable than a Beta release, which is more stable than an Alpha release.
+
+### Immature or experimental signals
+
+Modules for experimental signals will be released in the same packages as the core components, but prefixed with `_` to indicate that they are unstable and subject to change. NO STABILITY GUARANTEES ARE MADE.
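+For instance, the experimental logging signal is exposed under underscore-prefixed modules such as `opentelemetry._logs`.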
+
+## Examples
+
+Purely for illustration purposes, not intended to represent actual releases:
+
+#### V1.0.0 Release (tracing, baggage, propagators, context)
+
+- `opentelemetry-api` 1.0.0
+ - Contains APIs for tracing, baggage, propagators, context
+- `opentelemetry-sdk` 1.0.0
+ - Contains SDK components for tracing, baggage, propagators, and context
+
+#### V1.15.0 Release (with metrics)
+
+- `opentelemetry-api` 1.15.0
+ - Contains APIs for tracing, baggage, propagators, context, and metrics
+- `opentelemetry-sdk` 1.15.0
+ - Contains SDK components for tracing, baggage, propagators, context and metrics
+
+##### Contains the following pre-release packages
+
+- `opentelemetry-api` 1.x.yrc1
+ - Contains the experimental public API for logging plus other unstable features. There are no stability guarantees.
+- `opentelemetry-sdk` 1.x.yrc1
+ - Contains the experimental public SDK for logging plus other unstable features. There are no stability guarantees.
diff --git a/scripts/build.sh b/scripts/build.sh
index dc3e237946..ef592ee546 100755
--- a/scripts/build.sh
+++ b/scripts/build.sh
@@ -16,7 +16,11 @@ DISTDIR=dist
mkdir -p $DISTDIR
rm -rf $DISTDIR/*
+<<<<<<< HEAD
for d in exporter/*/ opentelemetry-instrumentation/ opentelemetry-contrib-instrumentations/ opentelemetry-distro/ instrumentation/*/ propagator/*/ resource/*/ sdk-extension/*/ util/*/ ; do
+=======
+ for d in opentelemetry-api/ opentelemetry-sdk/ opentelemetry-proto/ opentelemetry-semantic-conventions/ exporter/*/ shim/opentelemetry-opentracing-shim/ propagator/*/ tests/opentelemetry-test-utils/; do
+>>>>>>> upstream/main
(
echo "building $d"
cd "$d"
@@ -27,6 +31,7 @@ DISTDIR=dist
fi
)
done
+<<<<<<< HEAD
(
cd $DISTDIR
for x in *.tar.gz ; do
@@ -40,4 +45,6 @@ DISTDIR=dist
fi
done
)
+=======
+>>>>>>> upstream/main
)
diff --git a/scripts/check_for_valid_readme.py b/scripts/check_for_valid_readme.py
index 42446dd741..1138a4a924 100644
--- a/scripts/check_for_valid_readme.py
+++ b/scripts/check_for_valid_readme.py
@@ -29,6 +29,10 @@ def main():
error = False
for path in map(Path, args.paths):
+<<<<<<< HEAD
+=======
+
+>>>>>>> upstream/main
readme = path / "README.rst"
try:
if not is_valid_rst(readme):
@@ -36,6 +40,10 @@ def main():
print("FAILED: RST syntax errors in", readme)
continue
except FileNotFoundError:
+<<<<<<< HEAD
+=======
+ error = True
+>>>>>>> upstream/main
print("FAILED: README.rst not found in", path)
continue
if args.verbose:
diff --git a/scripts/coverage.sh b/scripts/coverage.sh
index 4015c6884a..97c29e6c68 100755
--- a/scripts/coverage.sh
+++ b/scripts/coverage.sh
@@ -3,12 +3,32 @@
set -e
function cov {
+<<<<<<< HEAD
pytest \
--cov ${1} \
--cov-append \
--cov-branch \
--cov-report='' \
${1}
+=======
+ if [ ${TOX_ENV_NAME:0:4} == "py34" ]
+ then
+ pytest \
+ --ignore-glob=instrumentation/opentelemetry-instrumentation-opentracing-shim/tests/testbed/* \
+ --cov ${1} \
+ --cov-append \
+ --cov-branch \
+ --cov-report='' \
+ ${1}
+ else
+ pytest \
+ --cov ${1} \
+ --cov-append \
+ --cov-branch \
+ --cov-report='' \
+ ${1}
+ fi
+>>>>>>> upstream/main
}
PYTHON_VERSION=$(python -c 'import sys; print(".".join(map(str, sys.version_info[:3])))')
@@ -16,6 +36,7 @@ PYTHON_VERSION_INFO=(${PYTHON_VERSION//./ })
coverage erase
+<<<<<<< HEAD
cov instrumentation/opentelemetry-instrumentation-flask
cov instrumentation/opentelemetry-instrumentation-requests
cov instrumentation/opentelemetry-instrumentation-wsgi
@@ -23,5 +44,20 @@ cov instrumentation/opentelemetry-instrumentation-aiohttp-client
cov instrumentation/opentelemetry-instrumentation-asgi
+=======
+cov opentelemetry-api
+cov opentelemetry-sdk
+cov exporter/opentelemetry-exporter-datadog
+cov instrumentation/opentelemetry-instrumentation-flask
+cov instrumentation/opentelemetry-instrumentation-requests
+cov instrumentation/opentelemetry-instrumentation-opentracing-shim
+cov util/opentelemetry-util-http
+cov exporter/opentelemetry-exporter-zipkin
+
+
+cov instrumentation/opentelemetry-instrumentation-aiohttp-client
+cov instrumentation/opentelemetry-instrumentation-asgi
+
+>>>>>>> upstream/main
coverage report --show-missing
coverage xml
diff --git a/scripts/eachdist.py b/scripts/eachdist.py
index 570a0cd0e5..8deae2a732 100755
--- a/scripts/eachdist.py
+++ b/scripts/eachdist.py
@@ -8,7 +8,10 @@
import subprocess
import sys
from configparser import ConfigParser
+<<<<<<< HEAD
from datetime import datetime
+=======
+>>>>>>> upstream/main
from inspect import cleandoc
from itertools import chain
from os.path import basename
@@ -17,8 +20,11 @@
DEFAULT_ALLSEP = " "
DEFAULT_ALLFMT = "{rel}"
+<<<<<<< HEAD
NON_SRC_DIRS = ["build", "dist", "__pycache__", "lib", "venv", ".tox"]
+=======
+>>>>>>> upstream/main
def unique(elems):
seen = set()
@@ -240,7 +246,12 @@ def setup_instparser(instparser):
)
fmtparser = subparsers.add_parser(
+<<<<<<< HEAD
"format", help="Formats all source code with black and isort.",
+=======
+ "format",
+ help="Formats all source code with black and isort.",
+>>>>>>> upstream/main
)
fmtparser.set_defaults(func=format_args)
fmtparser.add_argument(
@@ -250,7 +261,12 @@ def setup_instparser(instparser):
)
versionparser = subparsers.add_parser(
+<<<<<<< HEAD
"version", help="Get the version for a release",
+=======
+ "version",
+ help="Get the version for a release",
+>>>>>>> upstream/main
)
versionparser.set_defaults(func=version_args)
versionparser.add_argument(
@@ -282,7 +298,11 @@ def find_targets_unordered(rootpath):
continue
if any(
(subdir / marker).exists()
+<<<<<<< HEAD
for marker in ("pyproject.toml",)
+=======
+ for marker in ("setup.py", "pyproject.toml")
+>>>>>>> upstream/main
):
yield subdir
else:
@@ -518,19 +538,31 @@ def lint_args(args):
runsubprocess(
args.dry_run,
+<<<<<<< HEAD
("black", "--config", f"{rootdir}/pyproject.toml", ".")
+ (("--diff", "--check") if args.check_only else ()),
+=======
+ ("black", "--config", "pyproject.toml", ".") + (("--diff", "--check") if args.check_only else ()),
+>>>>>>> upstream/main
cwd=rootdir,
check=True,
)
runsubprocess(
args.dry_run,
+<<<<<<< HEAD
("isort", "--settings-path", f"{rootdir}/.isort.cfg", ".")
+=======
+ ("isort", "--settings-path", ".isort.cfg", ".")
+>>>>>>> upstream/main
+ (("--diff", "--check-only") if args.check_only else ()),
cwd=rootdir,
check=True,
)
+<<<<<<< HEAD
runsubprocess(args.dry_run, ("flake8", "--config", f"{rootdir}/.flake8", rootdir), check=True)
+=======
+ runsubprocess(args.dry_run, ("flake8", "--config", ".flake8", rootdir), check=True)
+>>>>>>> upstream/main
execute_args(
parse_subargs(
args, ("exec", "pylint {}", "--all", "--mode", "lintroots")
@@ -539,11 +571,20 @@ def lint_args(args):
execute_args(
parse_subargs(
args,
+<<<<<<< HEAD
("exec", "python scripts/check_for_valid_readme.py {}", "--all",),
+=======
+ (
+ "exec",
+ "python scripts/check_for_valid_readme.py {}",
+ "--all",
+ ),
+>>>>>>> upstream/main
)
)
+<<<<<<< HEAD
def update_changelog(path, version, new_entry):
unreleased_changes = False
try:
@@ -605,12 +646,17 @@ def _is_non_src_dir(root) -> bool:
for root, _, files in os.walk(path):
if _is_non_src_dir(root):
continue
+=======
+def find(name, path):
+ for root, _, files in os.walk(path):
+>>>>>>> upstream/main
if name in files:
return os.path.join(root, name)
return None
def filter_packages(targets, packages):
+<<<<<<< HEAD
if not packages:
return targets
filtered_packages = []
@@ -619,6 +665,12 @@ def filter_packages(targets, packages):
if str(pkg) == "all":
continue
if str(pkg) in str(target):
+=======
+ filtered_packages = []
+ for target in targets:
+ for pkg in packages:
+ if pkg in str(target):
+>>>>>>> upstream/main
filtered_packages.append(target)
break
return filtered_packages
@@ -628,12 +680,20 @@ def update_version_files(targets, version, packages):
print("updating version.py files")
targets = filter_packages(targets, packages)
update_files(
+<<<<<<< HEAD
targets, "version.py", "__version__ .*", f'__version__ = "{version}"',
+=======
+ targets,
+ "version.py",
+ "__version__ .*",
+ f'__version__ = "{version}"',
+>>>>>>> upstream/main
)
def update_dependencies(targets, version, packages):
print("updating dependencies")
+<<<<<<< HEAD
if "all" in packages:
packages.extend(targets)
for pkg in packages:
@@ -648,6 +708,20 @@ def update_dependencies(targets, version, packages):
"pyproject.toml",
fr"({package_name}.*)==(.*)",
r"\1== " + version + '",',
+=======
+ # PEP 508 allowed specifier operators
+ operators = ['==', '!=', '<=', '>=', '<', '>', '===', '~=', '=']
+ operators_pattern = '|'.join(re.escape(op) for op in operators)
+
+ for pkg in packages:
+ search = rf"({basename(pkg)}[^,]*)({operators_pattern})(.*\.dev)"
+ replace = r"\1\2 " + version
+ update_files(
+ targets,
+ "pyproject.toml",
+ search,
+ replace,
+>>>>>>> upstream/main
)
@@ -682,22 +756,32 @@ def release_args(args):
cfg.read(str(find_projectroot() / "eachdist.ini"))
versions = args.versions
updated_versions = []
+<<<<<<< HEAD
excluded = cfg["exclude_release"]["packages"].split()
targets = [target for target in targets if basename(target) not in excluded]
+=======
+>>>>>>> upstream/main
for group in versions.split(","):
mcfg = cfg[group]
version = mcfg["version"]
updated_versions.append(version)
+<<<<<<< HEAD
packages = None
if "packages" in mcfg:
packages = [pkg for pkg in mcfg["packages"].split() if pkg not in excluded]
+=======
+ packages = mcfg["packages"].split()
+>>>>>>> upstream/main
print(f"update {group} packages to {version}")
update_dependencies(targets, version, packages)
update_version_files(targets, version, packages)
+<<<<<<< HEAD
update_changelogs("-".join(updated_versions))
+=======
+>>>>>>> upstream/main
def test_args(args):
clean_remainder_args(args.pytestargs)
@@ -715,10 +799,17 @@ def test_args(args):
def format_args(args):
+<<<<<<< HEAD
format_dir = str(find_projectroot())
if args.path:
format_dir = os.path.join(format_dir, args.path)
root_dir = str(find_projectroot())
+=======
+ root_dir = format_dir = str(find_projectroot())
+ if args.path:
+ format_dir = os.path.join(format_dir, args.path)
+
+>>>>>>> upstream/main
runsubprocess(
args.dry_run,
("black", "--config", f"{root_dir}/pyproject.toml", "."),
diff --git a/scripts/generate_website_docs.sh b/scripts/generate_website_docs.sh
new file mode 100755
index 0000000000..a36c00e712
--- /dev/null
+++ b/scripts/generate_website_docs.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+# this script generates the documentation required for
+# opentelemetry.io
+
+pip install -r docs-requirements.txt
+
+TMP_DIR=/tmp/python_otel_docs
+rm -Rf ${TMP_DIR}
+
+sphinx-build -M jekyll ./docs ${TMP_DIR}
diff --git a/scripts/proto_codegen.sh b/scripts/proto_codegen.sh
new file mode 100755
index 0000000000..26fb8b1ee8
--- /dev/null
+++ b/scripts/proto_codegen.sh
@@ -0,0 +1,77 @@
+#!/bin/bash
+#
+# Regenerate python code from OTLP protos in
+# https://github.com/open-telemetry/opentelemetry-proto
+#
+# To use, update PROTO_REPO_BRANCH_OR_COMMIT variable below to a commit hash or
+# tag in the opentelemetry-proto repo that you want to build off of. Then, just run
+# this script to update the proto files. Commit the changes as well as any
+# fixes needed in the OTLP exporter.
+#
+# Optional envars:
+# PROTO_REPO_DIR - the path to an existing checkout of the opentelemetry-proto repo
+
+# Pinned commit/branch/tag for the current version used in opentelemetry-proto python package.
+PROTO_REPO_BRANCH_OR_COMMIT="v0.20.0"
+
+set -e
+
+PROTO_REPO_DIR=${PROTO_REPO_DIR:-"/tmp/opentelemetry-proto"}
+# root of opentelemetry-python repo
+repo_root="$(git rev-parse --show-toplevel)"
+venv_dir="/tmp/proto_codegen_venv"
+
+# run on exit even if crash
+cleanup() {
+ echo "Deleting $venv_dir"
+ rm -rf $venv_dir
+}
+trap cleanup EXIT
+
+echo "Creating temporary virtualenv at $venv_dir using $(python3 --version)"
+python3 -m venv $venv_dir
+source $venv_dir/bin/activate
+python -m pip install \
+ -c $repo_root/gen-requirements.txt \
+ grpcio-tools mypy-protobuf
+echo 'python -m grpc_tools.protoc --version'
+python -m grpc_tools.protoc --version
+
+# Clone the proto repo if it doesn't exist
+if [ ! -d "$PROTO_REPO_DIR" ]; then
+ git clone https://github.com/open-telemetry/opentelemetry-proto.git $PROTO_REPO_DIR
+fi
+
+# Pull in changes and switch to requested branch
+(
+ cd $PROTO_REPO_DIR
+ git fetch --all
+ git checkout $PROTO_REPO_BRANCH_OR_COMMIT
+ # pull if PROTO_REPO_BRANCH_OR_COMMIT is not a detached head
+ git symbolic-ref -q HEAD && git pull --ff-only || true
+)
+
+cd $repo_root/opentelemetry-proto/src
+
+# clean up old generated code
+find opentelemetry/ -regex ".*_pb2.*\.pyi?" -exec rm {} +
+
+# generate proto code for all protos
+all_protos=$(find $PROTO_REPO_DIR/ -iname "*.proto")
+python -m grpc_tools.protoc \
+ -I $PROTO_REPO_DIR \
+ --python_out=. \
+ --mypy_out=. \
+ $all_protos
+
+# generate grpc output only for protos with service definitions
+service_protos=$(grep -REl "service \w+ {" $PROTO_REPO_DIR/opentelemetry/)
+
+python -m grpc_tools.protoc \
+ -I $PROTO_REPO_DIR \
+ --python_out=. \
+ --mypy_out=. \
+ --grpc_python_out=. \
+ $service_protos
+
+echo "Please update ./opentelemetry-proto/README.rst to include the updated version."
diff --git a/scripts/public_symbols_checker.py b/scripts/public_symbols_checker.py
new file mode 100644
index 0000000000..05b7ad4abb
--- /dev/null
+++ b/scripts/public_symbols_checker.py
@@ -0,0 +1,161 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from collections import defaultdict
+from difflib import unified_diff
+from pathlib import Path
+from re import match
+from sys import exit
+
+from git import Repo
+from git.db import GitDB
+
+repo = Repo(__file__, odbt=GitDB, search_parent_directories=True)
+
+
+added_symbols = defaultdict(list)
+removed_symbols = defaultdict(list)
+
+
+def get_symbols(change_type, diff_lines_getter, prefix):
+
+ if change_type == "D" or prefix == r"\-":
+ file_path_symbols = removed_symbols
+ else:
+ file_path_symbols = added_symbols
+
+ for diff_lines in (
+ repo.commit("main")
+ .diff(repo.head.commit)
+ .iter_change_type(change_type)
+ ):
+
+ if diff_lines.b_blob is None:
+ # This happens if a file has been removed completely.
+ b_file_path = diff_lines.a_blob.path
+ else:
+ b_file_path = diff_lines.b_blob.path
+ b_file_path_obj = Path(b_file_path)
+
+ if (
+ b_file_path_obj.suffix != ".py"
+ or "opentelemetry" not in b_file_path
+ or any(
+ # single leading underscore
+ part[0] == "_" and part[1] != "_"
+ # tests directories
+ or part == "tests"
+ for part in b_file_path_obj.parts
+ )
+ ):
+ continue
+
+ for diff_line in diff_lines_getter(diff_lines):
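+            # The pattern matches top-level definitions in a source line:
+            # assignments ("SYMBOL = value"), functions ("def symbol"), and
+            # classes ("class Symbol"), with the given prefix ("\+" or "\-")
+            # when scanning unified-diff output.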
+ matching_line = match(
+ r"{prefix}({symbol_re})\s=\s.+|"
+ r"{prefix}def\s({symbol_re})|"
+ r"{prefix}class\s({symbol_re})".format(
+ symbol_re=r"[a-zA-Z][_\w]+", prefix=prefix
+ ),
+ diff_line,
+ )
+
+ if matching_line is not None:
+ file_path_symbols[b_file_path].append(
+ next(filter(bool, matching_line.groups()))
+ )
+
+
+def a_diff_lines_getter(diff_lines):
+ return diff_lines.b_blob.data_stream.read().decode("utf-8").split("\n")
+
+
+def d_diff_lines_getter(diff_lines):
+ return diff_lines.a_blob.data_stream.read().decode("utf-8").split("\n")
+
+
+def m_diff_lines_getter(diff_lines):
+ return unified_diff(
+ diff_lines.a_blob.data_stream.read().decode("utf-8").split("\n"),
+ diff_lines.b_blob.data_stream.read().decode("utf-8").split("\n"),
+ )
+
+
+get_symbols("A", a_diff_lines_getter, r"")
+get_symbols("D", d_diff_lines_getter, r"")
+get_symbols("M", m_diff_lines_getter, r"\+")
+get_symbols("M", m_diff_lines_getter, r"\-")
+
+
+def remove_common_symbols():
+ # For each file, we remove the symbols that are added and removed in the
+ # same commit.
+ common_symbols = defaultdict(list)
+ for file_path, symbols in added_symbols.items():
+ for symbol in symbols:
+ if symbol in removed_symbols[file_path]:
+ common_symbols[file_path].append(symbol)
+
+ for file_path, symbols in common_symbols.items():
+ for symbol in symbols:
+ added_symbols[file_path].remove(symbol)
+ removed_symbols[file_path].remove(symbol)
+
+ # If a file has no added or removed symbols, we remove it from the
+ # dictionaries.
+ for file_path in list(added_symbols.keys()):
+ if not added_symbols[file_path]:
+ del added_symbols[file_path]
+
+ for file_path in list(removed_symbols.keys()):
+ if not removed_symbols[file_path]:
+ del removed_symbols[file_path]
+
+
+if added_symbols or removed_symbols:
+
+ # If a symbol is added and removed in the same commit, we consider it
+ # as not added or removed.
+ remove_common_symbols()
+ print("The code in this branch adds the following public symbols:")
+ print()
+ for file_path_, symbols_ in added_symbols.items():
+ print(f"- {file_path_}")
+ for symbol_ in symbols_:
+ print(f"\t{symbol_}")
+ print()
+
+ print(
+ "Please make sure that all of them are strictly necessary, if not, "
+ "please consider prefixing them with an underscore to make them "
+ 'private. After that, please label this PR with "Skip Public API '
+ 'check".'
+ )
+ print()
+ print("The code in this branch removes the following public symbols:")
+ print()
+ for file_path_, symbols_ in removed_symbols.items():
+ print(f"- {file_path_}")
+ for symbol_ in symbols_:
+ print(f"\t{symbol_}")
+ print()
+
+ print(
+ "Please make sure no public symbols are removed, if so, please "
+ "consider deprecating them instead. After that, please label this "
+ 'PR with "Skip Public API check".'
+ )
+ exit(1)
+else:
+ print("The code in this branch will not add any public symbols")
diff --git a/scripts/semconv/.gitignore b/scripts/semconv/.gitignore
new file mode 100644
index 0000000000..ed7b836bb6
--- /dev/null
+++ b/scripts/semconv/.gitignore
@@ -0,0 +1 @@
+opentelemetry-specification
\ No newline at end of file
diff --git a/scripts/semconv/generate.sh b/scripts/semconv/generate.sh
new file mode 100755
index 0000000000..3a453db025
--- /dev/null
+++ b/scripts/semconv/generate.sh
@@ -0,0 +1,59 @@
+#!/bin/bash
+
+SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+ROOT_DIR="${SCRIPT_DIR}/../../"
+
+# freeze the spec version to make SemanticAttributes generation reproducible
+SPEC_VERSION=v1.21.0
+SCHEMA_URL=https://opentelemetry.io/schemas/$SPEC_VERSION
+OTEL_SEMCONV_GEN_IMG_VERSION=0.21.0
+
+cd ${SCRIPT_DIR}
+
+rm -rf semantic-conventions || true
+mkdir semantic-conventions
+cd semantic-conventions
+
+git init
+git remote add origin https://github.com/open-telemetry/semantic-conventions.git
+git fetch origin "$SPEC_VERSION"
+git reset --hard FETCH_HEAD
+cd ${SCRIPT_DIR}
+
+docker run --rm \
+ -v ${SCRIPT_DIR}/semantic-conventions/model:/source \
+ -v ${SCRIPT_DIR}/templates:/templates \
+ -v ${ROOT_DIR}/opentelemetry-semantic-conventions/src/opentelemetry/semconv/trace/:/output \
+ otel/semconvgen:$OTEL_SEMCONV_GEN_IMG_VERSION \
+ --only span,event,attribute_group \
+ -f /source code \
+ --template /templates/semantic_attributes.j2 \
+ --output /output/__init__.py \
+ -Dclass=SpanAttributes \
+ -DschemaUrl=$SCHEMA_URL
+
+docker run --rm \
+ -v ${SCRIPT_DIR}/semantic-conventions/model:/source \
+ -v ${SCRIPT_DIR}/templates:/templates \
+ -v ${ROOT_DIR}/opentelemetry-semantic-conventions/src/opentelemetry/semconv/resource/:/output \
+ otel/semconvgen:$OTEL_SEMCONV_GEN_IMG_VERSION \
+ --only resource \
+ -f /source code \
+ --template /templates/semantic_attributes.j2 \
+ --output /output/__init__.py \
+ -Dclass=ResourceAttributes \
+ -DschemaUrl=$SCHEMA_URL
+
+docker run --rm \
+ -v ${SCRIPT_DIR}/semantic-conventions/model:/source \
+ -v ${SCRIPT_DIR}/templates:/templates \
+ -v ${ROOT_DIR}/opentelemetry-semantic-conventions/src/opentelemetry/semconv/metrics/:/output \
+ otel/semconvgen:$OTEL_SEMCONV_GEN_IMG_VERSION \
+ --only metric \
+ -f /source code \
+ --template /templates/semantic_metrics.j2 \
+ --output /output/__init__.py \
+ -Dclass=MetricInstruments \
+ -DschemaUrl=$SCHEMA_URL
+
+cd "$ROOT_DIR"
diff --git a/scripts/semconv/templates/semantic_attributes.j2 b/scripts/semconv/templates/semantic_attributes.j2
new file mode 100644
index 0000000000..7e48d74768
--- /dev/null
+++ b/scripts/semconv/templates/semantic_attributes.j2
@@ -0,0 +1,370 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=too-many-lines
+
+{%- macro print_value(type, value) -%}
+ {{ "\"" if type == "string"}}{{value}}{{ "\"" if type == "string"}}
+{%- endmacro %}
+
+from enum import Enum
+{%- if class == "SpanAttributes" %}
+
+from deprecated import deprecated
+
+{%- endif %}
+
+
+class {{class}}:
+ SCHEMA_URL = "{{schemaUrl}}"
+ """
+ The URL of the OpenTelemetry schema for these keys and values.
+ """
+ {%- for attribute in attributes | unique(attribute="fqn") %}
+ {{attribute.fqn | to_const_name}} = "{{attribute.fqn}}"
+ """
+ {{attribute.brief | to_doc_brief}}.
+
+ {%- if attribute.note %}
+ Note: {{attribute.note | to_doc_brief | indent}}.
+ {%- endif %}
+
+ {%- if attribute.deprecated %}
+ Deprecated: {{attribute.deprecated | to_doc_brief}}.
+ {%- endif %}
+ """
+{# Extra line #}
+ {%- endfor %}
+
+ {%- if class == "SpanAttributes" %}
+ # Manually defined deprecated attributes
+ {#
+ Deprecated attributes and types are defined here for backward compatibility reasons.
+ They were removed from OpenTelemetry semantic conventions completely.
+
+ Attributes that were deprecated in OpenTelemetry semantic conventions
+ (https://github.com/open-telemetry/semantic-conventions/tree/main/model/deprecated)
+ are auto-generated with comments indicating deprecated status, so they don't need
+ to be manually defined.
+ #}
+
+ NET_PEER_IP = "net.peer.ip"
+ """
+ Deprecated, use the `client.socket.address` attribute.
+ """
+
+ NET_HOST_IP = "net.host.ip"
+ """
+ Deprecated, use the `server.socket.address` attribute.
+ """
+
+ HTTP_SERVER_NAME = "http.server_name"
+ """
+ Deprecated, use the `server.address` attribute.
+ """
+
+ HTTP_HOST = "http.host"
+ """
+ Deprecated, use the `server.address` and `server.port` attributes.
+ """
+
+ HTTP_RETRY_COUNT = "http.retry_count"
+ """
+ Deprecated, use the `http.resend_count` attribute.
+ """
+
+ HTTP_REQUEST_CONTENT_LENGTH_UNCOMPRESSED = (
+ "http.request_content_length_uncompressed"
+ )
+ """
+ Deprecated, use the `http.request.body.size` attribute.
+ """
+
+ HTTP_RESPONSE_CONTENT_LENGTH_UNCOMPRESSED = (
+ "http.response_content_length_uncompressed"
+ )
+ """
+ Deprecated, use the `http.response.body.size` attribute.
+ """
+
+ MESSAGING_DESTINATION = "messaging.destination"
+ """
+ Deprecated, use the `messaging.destination.name` attribute.
+ """
+
+ MESSAGING_DESTINATION_KIND = "messaging.destination_kind"
+ """
+ Deprecated.
+ """
+
+ MESSAGING_TEMP_DESTINATION = "messaging.temp_destination"
+ """
+ Deprecated. Use `messaging.destination.temporary` attribute.
+ """
+
+ MESSAGING_PROTOCOL = "messaging.protocol"
+ """
+ Deprecated. Use `network.protocol.name` attribute.
+ """
+
+ MESSAGING_PROTOCOL_VERSION = "messaging.protocol_version"
+ """
+ Deprecated. Use `network.protocol.version` attribute.
+ """
+
+ MESSAGING_URL = "messaging.url"
+ """
+ Deprecated. Use `server.address` and `server.port` attributes.
+ """
+
+ MESSAGING_CONVERSATION_ID = "messaging.conversation_id"
+ """
+ Deprecated. Use `messaging.message.conversation.id` attribute.
+ """
+
+ MESSAGING_KAFKA_PARTITION = "messaging.kafka.partition"
+ """
+ Deprecated. Use `messaging.kafka.destination.partition` attribute.
+ """
+
+ FAAS_EXECUTION = "faas.execution"
+ """
+ Deprecated. Use `faas.invocation_id` attribute.
+ """
+
+ HTTP_USER_AGENT = "http.user_agent"
+ """
+ Deprecated. Use `user_agent.original` attribute.
+ """
+
+ MESSAGING_RABBITMQ_ROUTING_KEY = "messaging.rabbitmq.routing_key"
+ """
+ Deprecated. Use `messaging.rabbitmq.destination.routing_key` attribute.
+ """
+
+ MESSAGING_KAFKA_TOMBSTONE = "messaging.kafka.tombstone"
+ """
+ Deprecated. Use `messaging.kafka.destination.tombstone` attribute.
+ """
+
+ NET_APP_PROTOCOL_NAME = "net.app.protocol.name"
+ """
+ Deprecated. Use `network.protocol.name` attribute.
+ """
+
+ NET_APP_PROTOCOL_VERSION = "net.app.protocol.version"
+ """
+ Deprecated. Use `network.protocol.version` attribute.
+ """
+
+ HTTP_CLIENT_IP = "http.client_ip"
+ """
+ Deprecated. Use `client.address` attribute.
+ """
+
+ HTTP_FLAVOR = "http.flavor"
+ """
+ Deprecated. Use `network.protocol.name` and `network.protocol.version` attributes.
+ """
+
+ NET_HOST_CONNECTION_TYPE = "net.host.connection.type"
+ """
+ Deprecated. Use `network.connection.type` attribute.
+ """
+
+ NET_HOST_CONNECTION_SUBTYPE = "net.host.connection.subtype"
+ """
+ Deprecated. Use `network.connection.subtype` attribute.
+ """
+
+ NET_HOST_CARRIER_NAME = "net.host.carrier.name"
+ """
+ Deprecated. Use `network.carrier.name` attribute.
+ """
+
+ NET_HOST_CARRIER_MCC = "net.host.carrier.mcc"
+ """
+ Deprecated. Use `network.carrier.mcc` attribute.
+ """
+
+ NET_HOST_CARRIER_MNC = "net.host.carrier.mnc"
+ """
+ Deprecated. Use `network.carrier.mnc` attribute.
+ """
+
+ MESSAGING_CONSUMER_ID = "messaging.consumer_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+ MESSAGING_KAFKA_CLIENT_ID = "messaging.kafka.client_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+ MESSAGING_ROCKETMQ_CLIENT_ID = "messaging.rocketmq.client_id"
+ """
+ Deprecated. Use `messaging.client_id` attribute.
+ """
+
+@deprecated(
+ version="1.18.0",
+ reason="Removed from the specification in favor of `network.protocol.name` and `network.protocol.version` attributes",
+)
+class HttpFlavorValues(Enum):
+ HTTP_1_0 = "1.0"
+
+ HTTP_1_1 = "1.1"
+
+ HTTP_2_0 = "2.0"
+
+ HTTP_3_0 = "3.0"
+
+ SPDY = "SPDY"
+
+ QUIC = "QUIC"
+
+@deprecated(
+ version="1.18.0",
+ reason="Removed from the specification",
+)
+class MessagingDestinationKindValues(Enum):
+ QUEUE = "queue"
+ """A message sent to a queue."""
+
+ TOPIC = "topic"
+ """A message sent to a topic."""
+
+
+@deprecated(
+ version="1.21.0",
+ reason="Renamed to NetworkConnectionTypeValues",
+)
+class NetHostConnectionTypeValues(Enum):
+ WIFI = "wifi"
+ """wifi."""
+
+ WIRED = "wired"
+ """wired."""
+
+ CELL = "cell"
+ """cell."""
+
+ UNAVAILABLE = "unavailable"
+ """unavailable."""
+
+ UNKNOWN = "unknown"
+ """unknown."""
+
+
+@deprecated(
+ version="1.21.0",
+ reason="Renamed to NetworkConnectionSubtypeValues",
+)
+class NetHostConnectionSubtypeValues(Enum):
+ GPRS = "gprs"
+ """GPRS."""
+
+ EDGE = "edge"
+ """EDGE."""
+
+ UMTS = "umts"
+ """UMTS."""
+
+ CDMA = "cdma"
+ """CDMA."""
+
+ EVDO_0 = "evdo_0"
+ """EVDO Rel. 0."""
+
+ EVDO_A = "evdo_a"
+ """EVDO Rev. A."""
+
+ CDMA2000_1XRTT = "cdma2000_1xrtt"
+ """CDMA2000 1XRTT."""
+
+ HSDPA = "hsdpa"
+ """HSDPA."""
+
+ HSUPA = "hsupa"
+ """HSUPA."""
+
+ HSPA = "hspa"
+ """HSPA."""
+
+ IDEN = "iden"
+ """IDEN."""
+
+ EVDO_B = "evdo_b"
+ """EVDO Rev. B."""
+
+ LTE = "lte"
+ """LTE."""
+
+ EHRPD = "ehrpd"
+ """EHRPD."""
+
+ HSPAP = "hspap"
+ """HSPAP."""
+
+ GSM = "gsm"
+ """GSM."""
+
+ TD_SCDMA = "td_scdma"
+ """TD-SCDMA."""
+
+ IWLAN = "iwlan"
+ """IWLAN."""
+
+ NR = "nr"
+ """5G NR (New Radio)."""
+
+ NRNSA = "nrnsa"
+ """5G NRNSA (New Radio Non-Standalone)."""
+
+ LTE_CA = "lte_ca"
+ """LTE CA."""
+
+ {% endif %}
+
+ {%- if class == "ResourceAttributes" %}
+ # Manually defined deprecated attributes
+ {#
+ Deprecated attributes and types are defined here for backward compatibility reasons.
+ They were removed from OpenTelemetry semantic conventions completely.
+
+ Attributes that were deprecated in OpenTelemetry semantic conventions
+ (https://github.com/open-telemetry/semantic-conventions/tree/main/model/deprecated)
+ are auto-generated with comments indicating deprecated status, so they don't need
+ to be manually defined.
+ #}
+
+ FAAS_ID = "faas.id"
+ """
+ Deprecated. Use the `cloud.resource.id` attribute.
+ """
+ {% endif %}
+
+{%- for attribute in attributes | unique(attribute="fqn") %}
+{%- if attribute.is_enum %}
+{%- set class_name = attribute.fqn | to_camelcase(True) ~ "Values" %}
+{%- set type = attribute.attr_type.enum_type %}
+class {{class_name}}(Enum):
+ {%- for member in attribute.attr_type.members %}
+ {{ member.member_id | to_const_name }} = {{ print_value(type, member.value) }}
+ """{% filter escape %}{{member.brief | to_doc_brief}}.{% endfilter %}"""
+{# Extra line #}
+ {%- endfor %}
+{% endif %}
+{%- endfor %}
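
For orientation, the enum-generation loop above renders each enum-typed attribute into a plain `Enum` class. A hypothetical rendering for a single attribute (`net.transport` and its members are assumed examples, not the actual generated file):

```python
# Hypothetical output of the Jinja loop above for one enum attribute.
from enum import Enum


class NetTransportValues(Enum):
    IP_TCP = "ip_tcp"
    """ip_tcp."""

    IP_UDP = "ip_udp"
    """ip_udp."""
```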
diff --git a/scripts/semconv/templates/semantic_metrics.j2 b/scripts/semconv/templates/semantic_metrics.j2
new file mode 100644
index 0000000000..4fa1260cb5
--- /dev/null
+++ b/scripts/semconv/templates/semantic_metrics.j2
@@ -0,0 +1,40 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+class {{class}}:
+ SCHEMA_URL = "{{schemaUrl}}"
+ """
+ The URL of the OpenTelemetry schema for these keys and values.
+ """
+ {% for id in semconvs %}{%- if semconvs[id].GROUP_TYPE_NAME == 'metric' %}{% set metric = semconvs[id] %}
+ {{metric.metric_name | to_const_name}} = "{{metric.metric_name}}"
+ """
+ {{metric.brief | to_doc_brief}}
+ Instrument: {{ metric.instrument }}
+ Unit: {{ metric.unit }}
+ """
+{# Extra line #}
+ {%- endif %}{% endfor %}
+
+ # Manually defined metrics
+ {#
+ Metrics defined here manually were not yaml-ified in 1.21.0 release
+ and therefore are not auto-generated.
+ #}
+ DB_CLIENT_CONNECTIONS_USAGE = "db.client.connections.usage"
+ """
+ The number of connections that are currently in state described by the `state` attribute
+ Instrument: UpDownCounter
+ Unit: {connection}
+ """
\ No newline at end of file
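
Rendered against a semconv model, the metrics template above produces a flat class of metric-name constants. A hypothetical rendering (the class name, schema URL, and the sample HTTP metric are illustrative assumptions):

```python
# Hypothetical rendering of semantic_metrics.j2 -- illustrative only.
class MetricInstruments:
    SCHEMA_URL = "https://opentelemetry.io/schemas/1.21.0"
    """
    The URL of the OpenTelemetry schema for these keys and values.
    """

    HTTP_SERVER_DURATION = "http.server.duration"
    """
    Measures the duration of inbound HTTP requests
    Instrument: Histogram
    Unit: s
    """

    DB_CLIENT_CONNECTIONS_USAGE = "db.client.connections.usage"
    """
    The number of connections that are currently in state described by the `state` attribute
    Instrument: UpDownCounter
    Unit: {connection}
    """
```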
diff --git a/scripts/tracecontext-integration-test.sh b/scripts/tracecontext-integration-test.sh
new file mode 100755
index 0000000000..4d482ddafe
--- /dev/null
+++ b/scripts/tracecontext-integration-test.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+set -e
+# hard-coding the git tag to ensure stable builds.
+TRACECONTEXT_GIT_TAG="98f210efd89c63593dce90e2bae0a1bdcb986f51"
+# clone w3c tracecontext tests
+mkdir -p target
+rm -rf ./target/trace-context
+git clone https://github.com/w3c/trace-context ./target/trace-context
+cd ./target/trace-context && git checkout $TRACECONTEXT_GIT_TAG && cd -
+# start example opentelemetry service, which propagates trace-context by
+# default.
+python ./tests/w3c_tracecontext_validation_server.py 1>&2 &
+EXAMPLE_SERVER_PID=$!
+# give the app server a little time to start up. Without a short delay,
+# many of the tracecontext tests fail because they cannot connect.
+sleep 1
+onshutdown()
+{
+ # send SIGINT so it is caught as a KeyboardInterrupt in the
+ # example service (a bare `kill` would send SIGTERM instead).
+ kill -s INT $EXAMPLE_SERVER_PID
+}
+trap onshutdown EXIT
+cd ./target/trace-context/test
+python test.py http://127.0.0.1:5000/verify-tracecontext
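
The script's start/wait/test/cleanup flow, sketched in Python for readers who prefer it (the paths, URL, and one-second startup delay are taken from the script above; this sketch is not part of the change):

```python
# Rough Python equivalent of tracecontext-integration-test.sh.
import signal
import subprocess
import time

server = subprocess.Popen(
    ["python", "./tests/w3c_tracecontext_validation_server.py"]
)
try:
    time.sleep(1)  # give the server a moment to start accepting connections
    subprocess.run(
        ["python", "test.py", "http://127.0.0.1:5000/verify-tracecontext"],
        cwd="./target/trace-context/test",
        check=True,
    )
finally:
    server.send_signal(signal.SIGINT)  # raises KeyboardInterrupt in the server
```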
diff --git a/scripts/update_sha.py b/scripts/update_sha.py
index 1c913249a2..bcfd46aead 100644
--- a/scripts/update_sha.py
+++ b/scripts/update_sha.py
@@ -19,9 +19,13 @@
import requests
from ruamel.yaml import YAML
+<<<<<<< HEAD
API_URL = (
"https://api.github.com/repos/open-telemetry/opentelemetry-python/commits/"
)
+=======
+API_URL = "https://api.github.com/repos/open-telemetry/opentelemetry-python-contrib/commits/"
+>>>>>>> upstream/main
WORKFLOW_FILE = ".github/workflows/test.yml"
@@ -37,7 +41,11 @@ def update_sha(sha):
yaml.preserve_quotes = True
with open(WORKFLOW_FILE, "r") as file:
workflow = yaml.load(file)
+<<<<<<< HEAD
workflow["env"]["CORE_REPO_SHA"] = sha
+=======
+ workflow["env"]["CONTRIB_REPO_SHA"] = sha
+>>>>>>> upstream/main
with open(WORKFLOW_FILE, "w") as file:
yaml.dump(workflow, file)
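
The hunks above elide the unchanged parts of update_sha.py; for context, a sketch of the usual pattern for resolving the SHA that gets written (this `get_sha` helper is an assumption, not code shown in the diff):

```python
# Hypothetical helper illustrating how the script's API_URL is used.
import requests

API_URL = (
    "https://api.github.com/repos/open-telemetry/opentelemetry-python/commits/"
)


def get_sha(branch: str) -> str:
    response = requests.get(API_URL + branch)
    response.raise_for_status()
    return response.json()["sha"]
```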
diff --git a/shim/opentelemetry-opencensus-shim/LICENSE b/shim/opentelemetry-opencensus-shim/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/shim/opentelemetry-opencensus-shim/README.rst b/shim/opentelemetry-opencensus-shim/README.rst
new file mode 100644
index 0000000000..bb5f7d4774
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/README.rst
@@ -0,0 +1,20 @@
+OpenCensus Shim for OpenTelemetry
+==================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-opencensus-shim.svg
+ :target: https://pypi.org/project/opentelemetry-opencensus-shim/
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-opencensus-shim
+
+References
+----------
+
+* `OpenCensus Shim for OpenTelemetry <https://opentelemetry-python.readthedocs.io/en/latest/shim/opencensus_shim/opencensus_shim.html>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/shim/opentelemetry-opencensus-shim/pyproject.toml b/shim/opentelemetry-opencensus-shim/pyproject.toml
new file mode 100644
index 0000000000..ef4dfd76c8
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-opencensus-shim"
+dynamic = ["version"]
+description = "OpenCensus Shim for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "opentelemetry-api ~= 1.3",
+ "wrapt ~= 1.0",
+ # may work with older versions but this is the oldest confirmed version
+ "opencensus >= 0.11.0",
+]
+
+[project.optional-dependencies]
+test = [
+ "opentelemetry-test-utils == 0.44b0.dev",
+ "opencensus == 0.11.1",
+ # Temporary fix for https://github.com/census-instrumentation/opencensus-python/issues/1219
+ "six == 1.16.0",
+]
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/shim/opentelemetry-opencensus-shim"
+
+[tool.hatch.version]
+path = "src/opentelemetry/shim/opencensus/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = ["/src", "/tests"]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/__init__.py b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/__init__.py
new file mode 100644
index 0000000000..bd49fd1987
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/__init__.py
@@ -0,0 +1,37 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The OpenTelemetry OpenCensus shim is a library which allows an easy migration from OpenCensus
+to OpenTelemetry. Additional details can be found `in the specification
+<https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/compatibility/opencensus.md>`_.
+
+The shim consists of a set of classes which implement the OpenCensus Python API while using
+OpenTelemetry constructs behind the scenes. Its purpose is to allow applications which are
+already instrumented using OpenCensus to start using OpenTelemetry with minimal effort, without
+having to rewrite large portions of the codebase.
+"""
+
+from opentelemetry.shim.opencensus._patch import install_shim, uninstall_shim
+
+__all__ = [
+ "install_shim",
+ "uninstall_shim",
+]
+
+# TODO: Decide when this should be called.
+# 1. defensive import in opentelemetry-api
+# 2. defensive import directly in OpenCensus, although that would require a release
+# 3. ask the user to do it
+# install_shim()
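
Until that decision is made, a minimal usage sketch, assuming the application calls `install_shim()` itself before touching OpenCensus:

```python
# Minimal sketch: patch OpenCensus so its spans are recorded by OpenTelemetry.
from opencensus.trace.tracer import Tracer

from opentelemetry.shim.opencensus import install_shim

install_shim()

tracer = Tracer()
with tracer.span(name="migrated-operation"):
    pass  # existing OpenCensus instrumentation now produces OTel spans
```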
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_patch.py b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_patch.py
new file mode 100644
index 0000000000..c3c6e81037
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_patch.py
@@ -0,0 +1,67 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from functools import lru_cache
+from logging import getLogger
+from typing import Optional
+
+from opencensus.trace.span_context import SpanContext
+from opencensus.trace.tracer import Tracer
+from opencensus.trace.tracers.noop_tracer import NoopTracer
+
+from opentelemetry import trace
+from opentelemetry.shim.opencensus._shim_tracer import ShimTracer
+from opentelemetry.shim.opencensus.version import __version__
+
+_logger = getLogger(__name__)
+
+
+def install_shim(
+ tracer_provider: Optional[trace.TracerProvider] = None,
+) -> None:
+ otel_tracer = trace.get_tracer(
+ "opentelemetry-opencensus-shim",
+ __version__,
+ tracer_provider=tracer_provider,
+ )
+
+ @lru_cache()
+ def cached_shim_tracer(span_context: SpanContext) -> ShimTracer:
+ return ShimTracer(
+ NoopTracer(),
+ oc_span_context=span_context,
+ otel_tracer=otel_tracer,
+ )
+
+ def fget_tracer(self: Tracer) -> ShimTracer:
+ # self.span_context is how instrumentations pass propagated context into OpenCensus e.g.
+ # https://github.com/census-instrumentation/opencensus-python/blob/fd064f438c5e490d25b004ee2545be55d2e28679/contrib/opencensus-ext-flask/opencensus/ext/flask/flask_middleware.py#L147-L153
+ return cached_shim_tracer(self.span_context)
+
+ def fset_tracer(self, value) -> None:
+ # ignore attempts to set the value
+ pass
+
+ # Tracer's constructor sets self.tracer to either a NoopTracer or ContextTracer depending
+ # on sampler:
+ # https://github.com/census-instrumentation/opencensus-python/blob/2e08df591b507612b3968be8c2538dedbf8fab37/opencensus/trace/tracer.py#L63.
+ # We monkeypatch Tracer.tracer with a property to return a shim instance instead. This
+ # makes all instances of Tracer (even those already created) use a ShimTracer.
+ Tracer.tracer = property(fget_tracer, fset_tracer)
+ _logger.info("Installed OpenCensus shim")
+
+
+def uninstall_shim() -> None:
+ if hasattr(Tracer, "tracer"):
+ del Tracer.tracer
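
The class-level `property` trick is the core of the patch; a standalone sketch (names are illustrative) of why even pre-existing instances are affected:

```python
# Data descriptors on the class shadow same-named instance attributes,
# so installing a property retroactively affects existing instances.
class Widget:
    def __init__(self):
        self.engine = "original"  # instance attribute


w = Widget()
Widget.engine = property(lambda self: "patched", lambda self, value: None)
print(w.engine)       # "patched" -- the existing instance is affected
w.engine = "ignored"  # the no-op setter swallows writes, like fset_tracer
del Widget.engine     # uninstall: the instance attribute is visible again
print(w.engine)       # "original"
```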
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_span.py b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_span.py
new file mode 100644
index 0000000000..f3ff804c6f
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_span.py
@@ -0,0 +1,167 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+from datetime import datetime
+from typing import TYPE_CHECKING
+
+import wrapt
+from opencensus.trace import execution_context
+from opencensus.trace.blank_span import BlankSpan
+from opencensus.trace.span import SpanKind
+from opencensus.trace.status import Status
+from opencensus.trace.time_event import MessageEvent
+
+from opentelemetry import context, trace
+
+if TYPE_CHECKING:
+ from opentelemetry.shim.opencensus._shim_tracer import ShimTracer
+
+_logger = logging.getLogger(__name__)
+
+# Copied from Java
+# https://github.com/open-telemetry/opentelemetry-java/blob/0d3a04669e51b33ea47b29399a7af00012d25ccb/opencensus-shim/src/main/java/io/opentelemetry/opencensusshim/SpanConverter.java#L24-L27
+_MESSAGE_EVENT_ATTRIBUTE_KEY_TYPE = "message.event.type"
+_MESSAGE_EVENT_ATTRIBUTE_KEY_SIZE_UNCOMPRESSED = (
+ "message.event.size.uncompressed"
+)
+_MESSAGE_EVENT_ATTRIBUTE_KEY_SIZE_COMPRESSED = "message.event.size.compressed"
+
+_MESSAGE_EVENT_TYPE_STR_MAPPING = {
+ 0: "TYPE_UNSPECIFIED",
+ 1: "SENT",
+ 2: "RECEIVED",
+}
+
+
+def _opencensus_time_to_nanos(timestamp: str) -> int:
+ """Converts an OpenCensus-formatted time string (ISO 8601 with "Z" suffix) to a
+ time.time_ns()-style Unix timestamp
+ """
+ # format taken from
+ # https://github.com/census-instrumentation/opencensus-python/blob/c38c71b9285e71de94d0185ff3c5bf65ee163345/opencensus/common/utils/__init__.py#L76
+ #
+ # datetime.fromisoformat() does not work with the added "Z" until python 3.11
+ seconds_float = datetime.strptime(
+ timestamp, "%Y-%m-%dT%H:%M:%S.%fZ"
+ ).timestamp()
+ return round(seconds_float * 1e9)
+
+
+# pylint: disable=abstract-method
+class ShimSpan(wrapt.ObjectProxy):
+ def __init__(
+ self,
+ wrapped: BlankSpan,
+ *,
+ otel_span: trace.Span,
+ shim_tracer: "ShimTracer",
+ ) -> None:
+ super().__init__(wrapped)
+ self._self_otel_span = otel_span
+ self._self_shim_tracer = shim_tracer
+ self._self_token: object = None
+
+ # Set a few values for BlankSpan members (they appear to be part of the "public" API
+ # even though they are not documented in BaseSpan). Some instrumentations may use these
+ # and not expect an AttributeError to be raised. Set values from OTel where possible
+ # and let ObjectProxy defer to the wrapped BlankSpan otherwise.
+ sc = self._self_otel_span.get_span_context()
+ self.same_process_as_parent_span = not sc.is_remote
+ self.span_id = sc.span_id
+
+ def span(self, name="child_span"):
+ return self._self_shim_tracer.start_span(name=name)
+
+ def add_attribute(self, attribute_key, attribute_value):
+ self._self_otel_span.set_attribute(attribute_key, attribute_value)
+
+ def add_annotation(self, description, **attrs):
+ self._self_otel_span.add_event(description, attrs)
+
+ def add_message_event(self, message_event: MessageEvent):
+ attrs = {
+ _MESSAGE_EVENT_ATTRIBUTE_KEY_TYPE: _MESSAGE_EVENT_TYPE_STR_MAPPING[
+ message_event.type
+ ],
+ }
+ if message_event.uncompressed_size_bytes is not None:
+ attrs[
+ _MESSAGE_EVENT_ATTRIBUTE_KEY_SIZE_UNCOMPRESSED
+ ] = message_event.uncompressed_size_bytes
+ if message_event.compressed_size_bytes is not None:
+ attrs[
+ _MESSAGE_EVENT_ATTRIBUTE_KEY_SIZE_COMPRESSED
+ ] = message_event.compressed_size_bytes
+
+ timestamp = _opencensus_time_to_nanos(message_event.timestamp)
+ self._self_otel_span.add_event(
+ str(message_event.id),
+ attrs,
+ timestamp=timestamp,
+ )
+
+ # pylint: disable=no-self-use
+ def add_link(self, link):
+ """Span links do not work with the shim because the OpenCensus Tracer does not accept
+ links in start_span(). The same issue applies to SpanKind. Also see:
+ https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/compatibility/opencensus.md#known-incompatibilities
+ """
+ _logger.warning(
+ "OpenTelemetry does not support links added after a span is created."
+ )
+
+ @property
+ def span_kind(self):
+ """Setting span_kind does not work with the shim because the OpenCensus Tracer does not
+ accept the param in start_span() and there's no way to set OTel span kind after
+ start_span().
+ """
+ return SpanKind.UNSPECIFIED
+
+ @span_kind.setter
+ def span_kind(self, value):
+ _logger.warning(
+ "OpenTelemetry does not support setting span kind after a span is created."
+ )
+
+ def set_status(self, status: Status):
+ self._self_otel_span.set_status(
+ trace.StatusCode.OK if status.is_ok else trace.StatusCode.ERROR,
+ status.description,
+ )
+
+ def finish(self):
+ """Note this method does not pop the span from current context. Use Tracer.end_span()
+ or a `with span: ...` statement (contextmanager) to do that.
+ """
+ self._self_otel_span.end()
+
+ def __enter__(self):
+ self._self_otel_span.__enter__()
+ return self
+
+ # pylint: disable=arguments-differ
+ def __exit__(self, exception_type, exception_value, traceback):
+ self._self_otel_span.__exit__(
+ exception_type, exception_value, traceback
+ )
+ # OpenCensus Span.__exit__() calls Tracer.end_span()
+ # https://github.com/census-instrumentation/opencensus-python/blob/2e08df591b507612b3968be8c2538dedbf8fab37/opencensus/trace/span.py#L390
+ # but that would cause the OTel span to be ended twice. Instead, this code just copies
+ # the context teardown from that method.
+ context.detach(self._self_token)
+ execution_context.set_current_span(
+ self._self_shim_tracer.current_span()
+ )
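
A quick, self-contained check of the timestamp helper above; note that `strptime` yields a naive datetime, so `timestamp()` interprets it in local time, exactly as `_opencensus_time_to_nanos` does:

```python
# Mirrors _opencensus_time_to_nanos for a sample OpenCensus timestamp.
from datetime import datetime

ts = "2023-01-01T00:00:00.000123Z"
seconds_float = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").timestamp()
print(round(seconds_float * 1e9))  # nanoseconds since the Unix epoch
```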
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_tracer.py b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_tracer.py
new file mode 100644
index 0000000000..a1e30afb50
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/_shim_tracer.py
@@ -0,0 +1,154 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+import wrapt
+from opencensus.trace import execution_context
+from opencensus.trace.blank_span import BlankSpan
+from opencensus.trace.span_context import SpanContext
+from opencensus.trace.tracers.base import Tracer as BaseTracer
+from opencensus.trace.tracestate import Tracestate
+
+from opentelemetry import context, trace
+from opentelemetry.shim.opencensus._shim_span import ShimSpan
+
+_logger = logging.getLogger(__name__)
+
+_SHIM_SPAN_KEY = context.create_key("opencensus-shim-span-key")
+_SAMPLED = trace.TraceFlags(trace.TraceFlags.SAMPLED)
+
+
+def set_shim_span_in_context(
+ span: ShimSpan, ctx: context.Context
+) -> context.Context:
+ return context.set_value(_SHIM_SPAN_KEY, span, ctx)
+
+
+def get_shim_span_in_context() -> ShimSpan:
+ return context.get_value(_SHIM_SPAN_KEY)
+
+
+def set_oc_span_in_context(
+ oc_span_context: SpanContext, ctx: context.Context
+) -> context.Context:
+ """Returns a new OTel context based on ctx with oc_span_context set as the current span"""
+
+ # If no SpanContext is passed to the opencensus.trace.tracer.Tracer, it creates a new one
+ # with a random trace ID and a None span ID to be the parent:
+ # https://github.com/census-instrumentation/opencensus-python/blob/2e08df591b507612b3968be8c2538dedbf8fab37/opencensus/trace/tracer.py#L47.
+ #
+ # OpenTelemetry considers this an invalid SpanContext and will ignore it, so we can just
+ # return early
+ if oc_span_context.span_id is None:
+ return ctx
+
+ trace_id = int(oc_span_context.trace_id, 16)
+ span_id = int(oc_span_context.span_id, 16)
+ is_remote = oc_span_context.from_header
+ trace_flags = (
+ _SAMPLED if oc_span_context.trace_options.get_enabled() else None
+ )
+ trace_state = (
+ trace.TraceState(tuple(oc_span_context.tracestate.items()))
+ # OC SpanContext does not validate this type
+ if isinstance(oc_span_context.tracestate, Tracestate)
+ else None
+ )
+
+ return trace.set_span_in_context(
+ trace.NonRecordingSpan(
+ trace.SpanContext(
+ trace_id=trace_id,
+ span_id=span_id,
+ is_remote=is_remote,
+ trace_flags=trace_flags,
+ trace_state=trace_state,
+ )
+ )
+ )
+
+
+# pylint: disable=abstract-method
+class ShimTracer(wrapt.ObjectProxy):
+ def __init__(
+ self,
+ wrapped: BaseTracer,
+ *,
+ oc_span_context: SpanContext,
+ otel_tracer: trace.Tracer
+ ) -> None:
+ super().__init__(wrapped)
+ self._self_oc_span_context = oc_span_context
+ self._self_otel_tracer = otel_tracer
+
+ # For now, finish() is not implemented by the shim. It would require keeping a list of all
+ # spans created so they can all be finished.
+ # def finish(self):
+ # """End spans and send to reporter."""
+
+ def span(self, name="span"):
+ return self.start_span(name=name)
+
+ def start_span(self, name="span"):
+ parent_ctx = context.get_current()
+ # If there is no current span in context, use the one provided to the OC Tracer at
+ # creation time
+ if trace.get_current_span(parent_ctx) is trace.INVALID_SPAN:
+ parent_ctx = set_oc_span_in_context(
+ self._self_oc_span_context, parent_ctx
+ )
+
+ span = self._self_otel_tracer.start_span(name, context=parent_ctx)
+ shim_span = ShimSpan(
+ BlankSpan(name=name, context_tracer=self),
+ otel_span=span,
+ shim_tracer=self,
+ )
+
+ ctx = trace.set_span_in_context(span)
+ ctx = set_shim_span_in_context(shim_span, ctx)
+
+ # OpenCensus's ContextTracer calls execution_context.set_current_span(span), which is
+ # equivalent to the attach() below. The token is only detached in end_span() or
+ # __exit__(), so an abandoned span can leak context, just as it does in OpenCensus.
+ # pylint: disable=protected-access
+ shim_span._self_token = context.attach(ctx)
+ # Also set it in OC's context, equivalent to
+ # https://github.com/census-instrumentation/opencensus-python/blob/2e08df591b507612b3968be8c2538dedbf8fab37/opencensus/trace/tracers/context_tracer.py#L94
+ execution_context.set_current_span(shim_span)
+ return shim_span
+
+ def end_span(self):
+ """Finishes the current span in the context and restores the context from before the
+ span was started.
+ """
+ span = self.current_span()
+ if not span:
+ _logger.warning("No active span, cannot do end_span.")
+ return
+
+ span.finish()
+
+ # pylint: disable=protected-access
+ context.detach(span._self_token)
+ # Also reset the OC execution_context, equivalent to
+ # https://github.com/census-instrumentation/opencensus-python/blob/2e08df591b507612b3968be8c2538dedbf8fab37/opencensus/trace/tracers/context_tracer.py#L114-L117
+ execution_context.set_current_span(self.current_span())
+
+ # pylint: disable=no-self-use
+ def current_span(self):
+ return get_shim_span_in_context()
+
+ def add_attribute_to_current_span(self, attribute_key, attribute_value):
+ self.current_span().add_attribute(attribute_key, attribute_value)
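
The hex-string-to-integer conversion done by `set_oc_span_in_context` can be sketched with public OTel APIs alone (the ids below are the same sample values used by the tests later in this diff):

```python
# Sketch of mapping an OpenCensus SpanContext onto an OTel parent context.
from opentelemetry import trace

oc_trace_id = "ace0216bab2b7ba249761dbb19c871b7"
oc_span_id = "1fead89ecf242225"

parent = trace.NonRecordingSpan(
    trace.SpanContext(
        trace_id=int(oc_trace_id, 16),
        span_id=int(oc_span_id, 16),
        is_remote=True,  # corresponds to SpanContext.from_header
        trace_flags=trace.TraceFlags(trace.TraceFlags.SAMPLED),
    )
)
ctx = trace.set_span_in_context(parent)
print(trace.get_current_span(ctx).get_span_context())
```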
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/py.typed b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/version.py b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/version.py
new file mode 100644
index 0000000000..ff896307c3
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/src/opentelemetry/shim/opencensus/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.44b0.dev"
diff --git a/shim/opentelemetry-opencensus-shim/tests/__init__.py b/shim/opentelemetry-opencensus-shim/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opencensus-shim/tests/test_patch.py b/shim/opentelemetry-opencensus-shim/tests/test_patch.py
new file mode 100644
index 0000000000..697ddfc352
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/tests/test_patch.py
@@ -0,0 +1,84 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opencensus.trace.tracer import Tracer
+from opencensus.trace.tracers.noop_tracer import NoopTracer
+
+from opentelemetry.shim.opencensus import install_shim, uninstall_shim
+from opentelemetry.shim.opencensus._shim_tracer import ShimTracer
+
+
+class TestPatch(unittest.TestCase):
+ def setUp(self):
+ uninstall_shim()
+
+ def tearDown(self):
+ uninstall_shim()
+
+ def test_install_shim(self):
+ # Initially the shim is not installed. The Tracer class has no tracer property;
+ # it is an instance-level attribute only.
+ self.assertFalse(hasattr(Tracer, "tracer"))
+
+ install_shim()
+
+ # The actual Tracer class should now be patched with a tracer property
+ self.assertTrue(hasattr(Tracer, "tracer"))
+ self.assertIsInstance(Tracer.tracer, property)
+
+ def test_install_shim_affects_existing_tracers(self):
+ # Initially the shim is not installed. An OC Tracer instance should have a NoopTracer
+ oc_tracer = Tracer()
+ self.assertIsInstance(oc_tracer.tracer, NoopTracer)
+ self.assertNotIsInstance(oc_tracer.tracer, ShimTracer)
+
+ install_shim()
+
+ # The property should cause existing instances to get the singleton ShimTracer
+ self.assertIsInstance(oc_tracer.tracer, ShimTracer)
+
+ def test_install_shim_affects_new_tracers(self):
+ install_shim()
+
+ # The property should cause new instances to get the singleton ShimTracer
+ oc_tracer = Tracer()
+ self.assertIsInstance(oc_tracer.tracer, ShimTracer)
+
+ def test_uninstall_shim_resets_tracer(self):
+ install_shim()
+ uninstall_shim()
+
+ # The actual Tracer class should not be patched
+ self.assertFalse(hasattr(Tracer, "tracer"))
+
+ def test_uninstall_shim_resets_existing_tracers(self):
+ oc_tracer = Tracer()
+ orig = oc_tracer.tracer
+ install_shim()
+ uninstall_shim()
+
+ # Accessing the tracer member should no longer use the property, and instead should get
+ # its original NoopTracer
+ self.assertIs(oc_tracer.tracer, orig)
+
+ def test_uninstall_shim_resets_new_tracers(self):
+ install_shim()
+ uninstall_shim()
+
+ # Accessing the tracer member should get the NoopTracer
+ oc_tracer = Tracer()
+ self.assertIsInstance(oc_tracer.tracer, NoopTracer)
+ self.assertNotIsInstance(oc_tracer.tracer, ShimTracer)
diff --git a/shim/opentelemetry-opencensus-shim/tests/test_shim.py b/shim/opentelemetry-opencensus-shim/tests/test_shim.py
new file mode 100644
index 0000000000..74a9eddcf2
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/tests/test_shim.py
@@ -0,0 +1,209 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import unittest
+from unittest.mock import patch
+
+from opencensus.trace import trace_options, tracestate
+from opencensus.trace.blank_span import BlankSpan as OcBlankSpan
+from opencensus.trace.link import Link as OcLink
+from opencensus.trace.span import SpanKind
+from opencensus.trace.span_context import SpanContext
+from opencensus.trace.tracer import Tracer as OcTracer
+from opencensus.trace.tracers.noop_tracer import NoopTracer as OcNoopTracer
+
+from opentelemetry import context, trace
+from opentelemetry.shim.opencensus import install_shim, uninstall_shim
+from opentelemetry.shim.opencensus._shim_span import ShimSpan
+from opentelemetry.shim.opencensus._shim_tracer import (
+ ShimTracer,
+ set_oc_span_in_context,
+)
+
+
+class TestShim(unittest.TestCase):
+ def setUp(self):
+ uninstall_shim()
+ install_shim()
+
+ def tearDown(self):
+ uninstall_shim()
+
+ def assert_hasattr(self, obj, key):
+ self.assertTrue(hasattr(obj, key))
+
+ def test_shim_tracer_wraps_noop_tracer(self):
+ oc_tracer = OcTracer()
+
+ self.assertIsInstance(oc_tracer.tracer, ShimTracer)
+
+ # wrapt.ObjectProxy does the magic here. The ShimTracer should look like the real OC
+ # NoopTracer.
+ self.assertIsInstance(oc_tracer.tracer, OcNoopTracer)
+ self.assert_hasattr(oc_tracer.tracer, "finish")
+ self.assert_hasattr(oc_tracer.tracer, "span")
+ self.assert_hasattr(oc_tracer.tracer, "start_span")
+ self.assert_hasattr(oc_tracer.tracer, "end_span")
+ self.assert_hasattr(oc_tracer.tracer, "current_span")
+ self.assert_hasattr(oc_tracer.tracer, "add_attribute_to_current_span")
+ self.assert_hasattr(oc_tracer.tracer, "list_collected_spans")
+
+ def test_shim_tracer_starts_shim_spans(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ self.assertIsInstance(span, ShimSpan)
+
+ def test_shim_span_wraps_blank_span(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ # wrapt.ObjectProxy does the magic here. The ShimSpan should look like the real OC
+ # BlankSpan.
+ self.assertIsInstance(span, OcBlankSpan)
+
+ # members
+ self.assert_hasattr(span, "name")
+ self.assert_hasattr(span, "parent_span")
+ self.assert_hasattr(span, "start_time")
+ self.assert_hasattr(span, "end_time")
+ self.assert_hasattr(span, "span_id")
+ self.assert_hasattr(span, "attributes")
+ self.assert_hasattr(span, "stack_trace")
+ self.assert_hasattr(span, "annotations")
+ self.assert_hasattr(span, "message_events")
+ self.assert_hasattr(span, "links")
+ self.assert_hasattr(span, "status")
+ self.assert_hasattr(span, "same_process_as_parent_span")
+ self.assert_hasattr(span, "_child_spans")
+ self.assert_hasattr(span, "context_tracer")
+ self.assert_hasattr(span, "span_kind")
+
+ # methods
+ self.assert_hasattr(span, "on_create")
+ self.assert_hasattr(span, "children")
+ self.assert_hasattr(span, "span")
+ self.assert_hasattr(span, "add_attribute")
+ self.assert_hasattr(span, "add_annotation")
+ self.assert_hasattr(span, "add_message_event")
+ self.assert_hasattr(span, "add_link")
+ self.assert_hasattr(span, "set_status")
+ self.assert_hasattr(span, "start")
+ self.assert_hasattr(span, "finish")
+ self.assert_hasattr(span, "__iter__")
+ self.assert_hasattr(span, "__enter__")
+ self.assert_hasattr(span, "__exit__")
+
+ def test_add_link_logs_a_warning(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ with self.assertLogs(level=logging.WARNING):
+ span.add_link(OcLink("1", "1"))
+
+ def test_set_span_kind_logs_a_warning(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ with self.assertLogs(level=logging.WARNING):
+ span.span_kind = SpanKind.CLIENT
+
+ # pylint: disable=no-self-use,no-member,protected-access
+ def test_shim_span_contextmanager_does_not_call_end(self):
+ # This was a bug in the first implementation: the underlying OTel span.end() was
+ # called again after span.__exit__(), double-ending the span.
+ oc_tracer = OcTracer()
+ oc_span = oc_tracer.start_span("foo")
+
+ with patch.object(
+ oc_span,
+ "_self_otel_span",
+ wraps=oc_span._self_otel_span,
+ ) as spy_otel_span:
+ with oc_span:
+ pass
+
+ spy_otel_span.end.assert_not_called()
+
+ def test_set_oc_span_in_context_no_span_id(self):
+ # This won't create a span ID and is the default behavior if you don't pass a context
+ # when creating the Tracer
+ ctx = set_oc_span_in_context(SpanContext(), context.get_current())
+ self.assertIs(trace.get_current_span(ctx), trace.INVALID_SPAN)
+
+ def test_set_oc_span_in_context_ids(self):
+ ctx = set_oc_span_in_context(
+ SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ ),
+ context.get_current(),
+ )
+ span_ctx = trace.get_current_span(ctx).get_span_context()
+
+ self.assertEqual(
+ trace.format_trace_id(span_ctx.trace_id),
+ "ace0216bab2b7ba249761dbb19c871b7",
+ )
+ self.assertEqual(
+ trace.format_span_id(span_ctx.span_id), "1fead89ecf242225"
+ )
+
+ def test_set_oc_span_in_context_remote(self):
+ for is_from_remote in True, False:
+ ctx = set_oc_span_in_context(
+ SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ from_header=is_from_remote,
+ ),
+ context.get_current(),
+ )
+ span_ctx = trace.get_current_span(ctx).get_span_context()
+ self.assertEqual(span_ctx.is_remote, is_from_remote)
+
+ def test_set_oc_span_in_context_traceoptions(self):
+ for oc_trace_options, expect in [
+ # Not sampled
+ (
+ trace_options.TraceOptions("0"),
+ trace.TraceFlags(trace.TraceFlags.DEFAULT),
+ ),
+ # Sampled
+ (
+ trace_options.TraceOptions("1"),
+ trace.TraceFlags(trace.TraceFlags.SAMPLED),
+ ),
+ ]:
+ ctx = set_oc_span_in_context(
+ SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ trace_options=oc_trace_options,
+ ),
+ context.get_current(),
+ )
+ span_ctx = trace.get_current_span(ctx).get_span_context()
+ self.assertEqual(span_ctx.trace_flags, expect)
+
+ def test_set_oc_span_in_context_tracestate(self):
+ ctx = set_oc_span_in_context(
+ SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ tracestate=tracestate.Tracestate({"hello": "tracestate"}),
+ ),
+ context.get_current(),
+ )
+ span_ctx = trace.get_current_span(ctx).get_span_context()
+ self.assertEqual(
+ span_ctx.trace_state, trace.TraceState([("hello", "tracestate")])
+ )
diff --git a/shim/opentelemetry-opencensus-shim/tests/test_shim_with_sdk.py b/shim/opentelemetry-opencensus-shim/tests/test_shim_with_sdk.py
new file mode 100644
index 0000000000..db993d4c22
--- /dev/null
+++ b/shim/opentelemetry-opencensus-shim/tests/test_shim_with_sdk.py
@@ -0,0 +1,316 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import unittest
+from datetime import datetime
+
+from opencensus.trace import execution_context, time_event
+from opencensus.trace.span_context import SpanContext
+from opencensus.trace.status import Status as OcStatus
+from opencensus.trace.tracer import Tracer as OcTracer
+
+from opentelemetry import trace
+from opentelemetry.sdk.trace import ReadableSpan, TracerProvider
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+from opentelemetry.sdk.trace.sampling import ALWAYS_ON
+from opentelemetry.shim.opencensus import install_shim, uninstall_shim
+
+_TIMESTAMP = datetime.fromisoformat("2023-01-01T00:00:00.000000")
+
+
+class TestShimWithSdk(unittest.TestCase):
+ def setUp(self):
+ uninstall_shim()
+ self.tracer_provider = TracerProvider(
+ sampler=ALWAYS_ON, shutdown_on_exit=False
+ )
+ self.mem_exporter = InMemorySpanExporter()
+ self.tracer_provider.add_span_processor(
+ SimpleSpanProcessor(self.mem_exporter)
+ )
+ install_shim(self.tracer_provider)
+
+ def tearDown(self):
+ uninstall_shim()
+
+ def test_start_span_interacts_with_context(self):
+ oc_tracer = OcTracer()
+ span = oc_tracer.start_span("foo")
+
+ # Should have created a real OTel span in implicit context under the hood. OpenCensus
+ # does not require another step to set the span in context.
+ otel_span = trace.get_current_span()
+ self.assertNotEqual(span.span_id, 0)
+ self.assertEqual(span.span_id, otel_span.get_span_context().span_id)
+
+ # This should end the span and remove it from context
+ oc_tracer.end_span()
+ self.assertIs(trace.get_current_span(), trace.INVALID_SPAN)
+
+ def test_start_span_interacts_with_oc_context(self):
+ oc_tracer = OcTracer()
+ span = oc_tracer.start_span("foo")
+
+ # Should have put the shim span in OC's implicit context under the hood. OpenCensus
+ # does not require another step to set the span in context.
+ self.assertIs(execution_context.get_current_span(), span)
+
+ # This should end the span and remove it from context
+ oc_tracer.end_span()
+ self.assertIs(execution_context.get_current_span(), None)
+
+ def test_context_manager_interacts_with_context(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ # Should have created a real OTel span in implicit context under the hood
+ otel_span = trace.get_current_span()
+
+ self.assertNotEqual(span.span_id, 0)
+ self.assertEqual(
+ span.span_id, otel_span.get_span_context().span_id
+ )
+
+ # The span should now be popped from context
+ self.assertIs(trace.get_current_span(), trace.INVALID_SPAN)
+
+ def test_context_manager_interacts_with_oc_context(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("foo") as span:
+ # Should have placed the shim span in implicit context under the hood
+ self.assertIs(execution_context.get_current_span(), span)
+
+ # The span should now be popped from context
+ self.assertIs(execution_context.get_current_span(), None)
+
+ def test_exports_a_span(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("span1"):
+ pass
+
+ self.assertEqual(len(self.mem_exporter.get_finished_spans()), 1)
+
+ def test_uses_tracers_span_context_when_no_parent_in_context(self):
+ # the SpanContext passed to the Tracer will become the parent when there is no span
+ # already set in the OTel context
+ oc_tracer = OcTracer(
+ span_context=SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ )
+ )
+
+ with oc_tracer.start_span("span1"):
+ pass
+
+ exported_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ parent = exported_span.parent
+ self.assertIsNotNone(parent)
+ self.assertEqual(
+ trace.format_trace_id(parent.trace_id),
+ "ace0216bab2b7ba249761dbb19c871b7",
+ )
+ self.assertEqual(
+ trace.format_span_id(parent.span_id), "1fead89ecf242225"
+ )
+
+ def test_ignores_tracers_span_context_when_parent_already_in_context(self):
+ # the SpanContext passed to the Tracer will be ignored since there is already a span
+ # set in the OTel context
+ oc_tracer = OcTracer(
+ span_context=SpanContext(
+ trace_id="ace0216bab2b7ba249761dbb19c871b7",
+ span_id="1fead89ecf242225",
+ )
+ )
+ otel_tracer = self.tracer_provider.get_tracer(__name__)
+
+ with otel_tracer.start_as_current_span("some_parent"):
+ with oc_tracer.start_span("span1"):
+ pass
+
+ oc_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ otel_parent: ReadableSpan = self.mem_exporter.get_finished_spans()[1]
+ self.assertEqual(
+ oc_span.parent,
+ otel_parent.context,
+ )
+
+ def test_span_attributes(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("span1") as span:
+ span.add_attribute("key1", "value1")
+ span.add_attribute("key2", "value2")
+
+ exported_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ self.assertDictEqual(
+ dict(exported_span.attributes),
+ {"key1": "value1", "key2": "value2"},
+ )
+
+ def test_span_annotations(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("span1") as span:
+ span.add_annotation("description", key1="value1", key2="value2")
+
+ exported_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ self.assertEqual(len(exported_span.events), 1)
+ event = exported_span.events[0]
+ self.assertEqual(event.name, "description")
+ self.assertDictEqual(
+ dict(event.attributes), {"key1": "value1", "key2": "value2"}
+ )
+
+ def test_span_message_event(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("span1") as span:
+ span.add_message_event(
+ time_event.MessageEvent(
+ _TIMESTAMP, "id_sent", time_event.Type.SENT, "20", "10"
+ )
+ )
+ span.add_message_event(
+ time_event.MessageEvent(
+ _TIMESTAMP,
+ "id_received",
+ time_event.Type.RECEIVED,
+ "20",
+ "10",
+ )
+ )
+ span.add_message_event(
+ time_event.MessageEvent(
+ _TIMESTAMP,
+ "id_unspecified",
+ None,
+ "20",
+ "10",
+ )
+ )
+
+ exported_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ self.assertEqual(len(exported_span.events), 3)
+ event1, event2, event3 = exported_span.events
+
+ self.assertEqual(event1.name, "id_sent")
+ self.assertDictEqual(
+ dict(event1.attributes),
+ {
+ "message.event.size.compressed": "10",
+ "message.event.size.uncompressed": "20",
+ "message.event.type": "SENT",
+ },
+ )
+ self.assertEqual(event2.name, "id_received")
+ self.assertDictEqual(
+ dict(event2.attributes),
+ {
+ "message.event.size.compressed": "10",
+ "message.event.size.uncompressed": "20",
+ "message.event.type": "RECEIVED",
+ },
+ )
+ self.assertEqual(event3.name, "id_unspecified")
+ self.assertDictEqual(
+ dict(event3.attributes),
+ {
+ "message.event.size.compressed": "10",
+ "message.event.size.uncompressed": "20",
+ "message.event.type": "TYPE_UNSPECIFIED",
+ },
+ )
+
+ def test_span_status(self):
+ oc_tracer = OcTracer()
+ with oc_tracer.start_span("span_ok") as span:
+ # OTel will log a warning because a description may only be set on ERROR statuses
+ with self.assertLogs(level=logging.WARNING) as rec:
+ span.set_status(OcStatus(0, "message"))
+ self.assertIn(
+ "description should only be set when status_code is set to StatusCode.ERROR",
+ rec.output[0],
+ )
+
+ with oc_tracer.start_span("span_exception") as span:
+ span.set_status(
+ OcStatus.from_exception(Exception("exception message"))
+ )
+
+ self.assertEqual(len(self.mem_exporter.get_finished_spans()), 2)
+ ok_span: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ exc_span: ReadableSpan = self.mem_exporter.get_finished_spans()[1]
+
+ self.assertTrue(ok_span.status.is_ok)
+ # Should be None even though we provided one, because OTel drops the
+ # description when the status is not ERROR
+ self.assertIsNone(ok_span.status.description)
+
+ self.assertFalse(exc_span.status.is_ok)
+ self.assertEqual(exc_span.status.description, "exception message")
+
+ def assert_related(self, *, child: ReadableSpan, parent: ReadableSpan):
+ self.assertEqual(
+ child.parent.span_id, parent.get_span_context().span_id
+ )
+
+ def test_otel_sandwich(self):
+ oc_tracer = OcTracer()
+ otel_tracer = self.tracer_provider.get_tracer(__name__)
+ with oc_tracer.start_span("opencensus_outer"):
+ with otel_tracer.start_as_current_span("otel_middle"):
+ with oc_tracer.start_span("opencensus_inner"):
+ pass
+
+ self.assertEqual(len(self.mem_exporter.get_finished_spans()), 3)
+ opencensus_inner: ReadableSpan = (
+ self.mem_exporter.get_finished_spans()[0]
+ )
+ otel_middle: ReadableSpan = self.mem_exporter.get_finished_spans()[1]
+ opencensus_outer: ReadableSpan = (
+ self.mem_exporter.get_finished_spans()[2]
+ )
+
+ self.assertEqual(opencensus_outer.name, "opencensus_outer")
+ self.assertEqual(otel_middle.name, "otel_middle")
+ self.assertEqual(opencensus_inner.name, "opencensus_inner")
+
+ self.assertIsNone(opencensus_outer.parent)
+ self.assert_related(parent=opencensus_outer, child=otel_middle)
+ self.assert_related(parent=otel_middle, child=opencensus_inner)
+
+ def test_opencensus_sandwich(self):
+ oc_tracer = OcTracer()
+ otel_tracer = self.tracer_provider.get_tracer(__name__)
+ with otel_tracer.start_as_current_span("otel_outer"):
+ with oc_tracer.start_span("opencensus_middle"):
+ with otel_tracer.start_as_current_span("otel_inner"):
+ pass
+
+ self.assertEqual(len(self.mem_exporter.get_finished_spans()), 3)
+ otel_inner: ReadableSpan = self.mem_exporter.get_finished_spans()[0]
+ opencensus_middle: ReadableSpan = (
+ self.mem_exporter.get_finished_spans()[1]
+ )
+ otel_outer: ReadableSpan = self.mem_exporter.get_finished_spans()[2]
+
+ self.assertEqual(otel_outer.name, "otel_outer")
+ self.assertEqual(opencensus_middle.name, "opencensus_middle")
+ self.assertEqual(otel_inner.name, "otel_inner")
+
+ self.assertIsNone(otel_outer.parent)
+ self.assert_related(parent=otel_outer, child=opencensus_middle)
+ self.assert_related(parent=opencensus_middle, child=otel_inner)
diff --git a/shim/opentelemetry-opentracing-shim/LICENSE b/shim/opentelemetry-opentracing-shim/LICENSE
new file mode 100644
index 0000000000..1ef7dad2c5
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright The OpenTelemetry Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/shim/opentelemetry-opentracing-shim/README.rst b/shim/opentelemetry-opentracing-shim/README.rst
new file mode 100644
index 0000000000..455634858c
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/README.rst
@@ -0,0 +1,20 @@
+OpenTracing Shim for OpenTelemetry
+==================================
+
+|pypi|
+
+.. |pypi| image:: https://badge.fury.io/py/opentelemetry-opentracing-shim.svg
+ :target: https://pypi.org/project/opentelemetry-opentracing-shim/
+
+Installation
+------------
+
+::
+
+ pip install opentelemetry-opentracing-shim
+
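+Usage
+-----
+
+A minimal usage sketch (mirroring the example in the package docstring; an
+SDK ``TracerProvider`` is assumed to be configured)::
+
+    from opentelemetry import trace
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.shim.opentracing_shim import create_tracer
+
+    trace.set_tracer_provider(TracerProvider())
+    shim = create_tracer(trace.get_tracer_provider())
+
+    with shim.start_active_span("ProcessRequest"):
+        pass  # instrumented work goes here
+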
+References
+----------
+
+* `OpenTracing Shim for OpenTelemetry <https://open-telemetry.github.io/opentelemetry-python/shim/opentracing_shim/opentracing_shim.html>`_
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/shim/opentelemetry-opentracing-shim/pyproject.toml b/shim/opentelemetry-opentracing-shim/pyproject.toml
new file mode 100644
index 0000000000..7d32301daf
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/pyproject.toml
@@ -0,0 +1,53 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-opentracing-shim"
+dynamic = ["version"]
+description = "OpenTracing Shim for OpenTelemetry"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Typing :: Typed",
+]
+dependencies = [
+ "Deprecated >= 1.2.6",
+ "opentelemetry-api ~= 1.3",
+ "opentracing ~= 2.0",
+]
+
+[project.optional-dependencies]
+test = [
+ "opentelemetry-test-utils == 0.44b0.dev",
+ "opentracing ~= 2.2.0",
+]
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tree/main/shim/opentelemetry-opentracing-shim"
+
+[tool.hatch.version]
+path = "src/opentelemetry/shim/opentracing_shim/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+ "/tests",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/__init__.py b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/__init__.py
new file mode 100644
index 0000000000..8fd72da972
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/__init__.py
@@ -0,0 +1,749 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+The OpenTelemetry OpenTracing shim is a library which allows an easy migration
+from OpenTracing to OpenTelemetry.
+
+The shim consists of a set of classes which implement the OpenTracing Python
+API while using OpenTelemetry constructs behind the scenes. Its purpose is to
+allow applications which are already instrumented using OpenTracing to start
+using OpenTelemetry with a minimal effort, without having to rewrite large
+portions of the codebase.
+
+To use the shim, a :class:`TracerShim` instance is created and then used as if
+it were an "ordinary" OpenTracing :class:`opentracing.Tracer`, as in the
+following example::
+
+ import time
+
+ from opentelemetry import trace
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.shim.opentracing_shim import create_tracer
+
+ # Define which OpenTelemetry Tracer provider implementation to use.
+ trace.set_tracer_provider(TracerProvider())
+
+    # Create an OpenTracing shim from the OpenTelemetry tracer provider.
+    shim = create_tracer(trace.get_tracer_provider())
+
+ with shim.start_active_span("ProcessHTTPRequest"):
+ print("Processing HTTP request")
+ # Sleeping to mock real work.
+ time.sleep(0.1)
+ with shim.start_active_span("GetDataFromDB"):
+ print("Getting data from DB")
+ # Sleeping to mock real work.
+ time.sleep(0.2)
+
+Note:
+ While the OpenTracing Python API represents time values as the number of
+ **seconds** since the epoch expressed as :obj:`float` values, the
+ OpenTelemetry Python API represents time values as the number of
+ **nanoseconds** since the epoch expressed as :obj:`int` values. This fact
+ requires the OpenTracing shim to convert time values back and forth between
+ the two representations, which involves floating point arithmetic.
+
+    Because binary floating point cannot represent every decimal value
+    exactly, these conversions are imprecise by definition.
+
+ The above results in **slight imprecisions** in time values passed to the
+ shim via the OpenTracing API when comparing the value passed to the shim
+ and the value stored in the OpenTelemetry :class:`opentelemetry.trace.Span`
+ object behind the scenes. **This is not a bug in this library or in
+ Python**. Rather, this is a generic problem which stems from the fact that
+ not every decimal floating point number can be correctly represented in
+ binary, and therefore affects other libraries and programming languages as
+ well. More information about this problem can be found in the
+ `Floating Point Arithmetic\\: Issues and Limitations`_ section of the
+ Python documentation.
+
+ While testing this library, the aforementioned imprecisions were observed
+ to be of *less than a microsecond*.
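+
+    As a rough sketch of the round trip (mirroring the helpers in
+    ``opentelemetry.shim.opentracing_shim.util``; the timestamp is an
+    arbitrary example value)::
+
+        seconds = 1571230023.8471243   # float seconds, as from time.time()
+        nanos = int(seconds * 1e9)     # int nanoseconds, the OTel form
+        # nanos / 1e9 can differ from `seconds`, but by far less than 1e-6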
+
+API
+---
+.. _Floating Point Arithmetic\\: Issues and Limitations:
+ https://docs.python.org/3/tutorial/floatingpoint.html
+"""
+
+# TODO: make pylint use 3p opentracing module for type inference
+# pylint:disable=no-member
+
+import logging
+from types import TracebackType
+from typing import Optional, Type, TypeVar, Union
+
+from deprecated import deprecated
+from opentracing import (
+ Format,
+ Scope,
+ ScopeManager,
+ Span,
+ SpanContext,
+ Tracer,
+ UnsupportedFormatException,
+)
+
+from opentelemetry.baggage import get_baggage, set_baggage
+from opentelemetry.context import (
+ Context,
+ attach,
+ create_key,
+ detach,
+ get_value,
+ set_value,
+)
+from opentelemetry.propagate import get_global_textmap
+from opentelemetry.shim.opentracing_shim import util
+from opentelemetry.shim.opentracing_shim.version import __version__
+from opentelemetry.trace import INVALID_SPAN_CONTEXT, Link, NonRecordingSpan
+from opentelemetry.trace import SpanContext as OtelSpanContext
+from opentelemetry.trace import Tracer as OtelTracer
+from opentelemetry.trace import (
+ TracerProvider,
+ get_current_span,
+ set_span_in_context,
+ use_span,
+)
+from opentelemetry.util.types import Attributes
+
+ValueT = TypeVar("ValueT", int, float, bool, str)
+logger = logging.getLogger(__name__)
+_SHIM_KEY = create_key("scope_shim")
+
+
+def create_tracer(otel_tracer_provider: TracerProvider) -> "TracerShim":
+ """Creates a :class:`TracerShim` object from the provided OpenTelemetry
+ :class:`opentelemetry.trace.TracerProvider`.
+
+ The returned :class:`TracerShim` is an implementation of
+ :class:`opentracing.Tracer` using OpenTelemetry under the hood.
+
+ Args:
+ otel_tracer_provider: A tracer from this provider will be used to
+ perform the actual tracing when user code is instrumented using the
+ OpenTracing API.
+
+ Returns:
+ The created :class:`TracerShim`.
+ """
+
+ return TracerShim(otel_tracer_provider.get_tracer(__name__, __version__))
+
+
+class SpanContextShim(SpanContext):
+ """Implements :class:`opentracing.SpanContext` by wrapping a
+ :class:`opentelemetry.trace.SpanContext` object.
+
+ Args:
+ otel_context: A :class:`opentelemetry.trace.SpanContext` to be used for
+ constructing the :class:`SpanContextShim`.
+ """
+
+ def __init__(self, otel_context: OtelSpanContext):
+ self._otel_context = otel_context
+ # Context is being used here since it must be immutable.
+ self._baggage = Context()
+
+ def unwrap(self) -> OtelSpanContext:
+ """Returns the wrapped :class:`opentelemetry.trace.SpanContext`
+ object.
+
+ Returns:
+ The :class:`opentelemetry.trace.SpanContext` object wrapped by this
+ :class:`SpanContextShim`.
+ """
+
+ return self._otel_context
+
+ @property
+ def baggage(self) -> Context:
+ """Returns the ``baggage`` associated with this object"""
+
+ return self._baggage
+
+
+class SpanShim(Span):
+ """Wraps a :class:`opentelemetry.trace.Span` object.
+
+ Args:
+ tracer: The :class:`opentracing.Tracer` that created this `SpanShim`.
+ context: A :class:`SpanContextShim` which contains the context for this
+ :class:`SpanShim`.
+ span: A :class:`opentelemetry.trace.Span` to wrap.
+ """
+
+ def __init__(self, tracer, context: SpanContextShim, span):
+ super().__init__(tracer, context)
+ self._otel_span = span
+
+ def unwrap(self):
+ """Returns the wrapped :class:`opentelemetry.trace.Span` object.
+
+ Returns:
+ The :class:`opentelemetry.trace.Span` object wrapped by this
+ :class:`SpanShim`.
+ """
+
+ return self._otel_span
+
+ def set_operation_name(self, operation_name: str) -> "SpanShim":
+ """Updates the name of the wrapped OpenTelemetry span.
+
+ Args:
+ operation_name: The new name to be used for the underlying
+ :class:`opentelemetry.trace.Span` object.
+
+ Returns:
+ Returns this :class:`SpanShim` instance to allow call chaining.
+ """
+
+ self._otel_span.update_name(operation_name)
+ return self
+
+ def finish(self, finish_time: float = None):
+ """Ends the OpenTelemetry span wrapped by this :class:`SpanShim`.
+
+ If *finish_time* is provided, the time value is converted to the
+ OpenTelemetry time format (number of nanoseconds since the epoch,
+ expressed as an integer) and passed on to the OpenTelemetry tracer when
+ ending the OpenTelemetry span. If *finish_time* isn't provided, it is
+ up to the OpenTelemetry tracer implementation to generate a timestamp
+ when ending the span.
+
+ Args:
+ finish_time: A value that represents the finish time expressed as
+ the number of seconds since the epoch as returned by
+ :func:`time.time()`.
+ """
+
+ end_time = finish_time
+ if end_time is not None:
+ end_time = util.time_seconds_to_ns(finish_time)
+ self._otel_span.end(end_time=end_time)
+
+ def set_tag(self, key: str, value: ValueT) -> "SpanShim":
+ """Sets an OpenTelemetry attribute on the wrapped OpenTelemetry span.
+
+ Args:
+ key: A tag key.
+ value: A tag value.
+
+ Returns:
+ Returns this :class:`SpanShim` instance to allow call chaining.
+ """
+
+ self._otel_span.set_attribute(key, value)
+ return self
+
+ def log_kv(
+ self, key_values: Attributes, timestamp: float = None
+ ) -> "SpanShim":
+ """Logs an event for the wrapped OpenTelemetry span.
+
+ Note:
+ The OpenTracing API defines the values of *key_values* to be of any
+ type. However, the OpenTelemetry API requires that the values be
+ any one of the types defined in
+            ``opentelemetry.util.types.Attributes``; therefore, only these
+            types are supported as values.
+
+ Args:
+ key_values: A dictionary as specified in
+                ``opentelemetry.util.types.Attributes``.
+ timestamp: Timestamp of the OpenTelemetry event, will be generated
+ automatically if omitted.
+
+ Returns:
+ Returns this :class:`SpanShim` instance to allow call chaining.
+ """
+
+ if timestamp is not None:
+ event_timestamp = util.time_seconds_to_ns(timestamp)
+ else:
+ event_timestamp = None
+
+ event_name = util.event_name_from_kv(key_values)
+ self._otel_span.add_event(event_name, key_values, event_timestamp)
+ return self
+
+ @deprecated(reason="This method is deprecated in favor of log_kv")
+ def log(self, **kwargs):
+ super().log(**kwargs)
+
+ @deprecated(reason="This method is deprecated in favor of log_kv")
+ def log_event(self, event, payload=None):
+ super().log_event(event, payload=payload)
+
+ def set_baggage_item(self, key: str, value: str):
+ """Stores a Baggage item in the span as a key/value
+ pair.
+
+ Args:
+            key: The baggage item key.
+            value: The baggage item value.
+ """
+ # pylint: disable=protected-access
+ self._context._baggage = set_baggage(
+ key, value, context=self._context._baggage
+ )
+
+ def get_baggage_item(self, key: str) -> Optional[object]:
+ """Retrieves value of the baggage item with the given key.
+
+ Args:
+ key: A tag key.
+ Returns:
+ Returns this :class:`SpanShim` instance to allow call chaining.
+ """
+ # pylint: disable=protected-access
+ return get_baggage(key, context=self._context._baggage)
+
+
+class ScopeShim(Scope):
+ """A `ScopeShim` wraps the OpenTelemetry functionality related to span
+ activation/deactivation while using OpenTracing :class:`opentracing.Scope`
+ objects for presentation.
+
+ Unlike other classes in this package, the `ScopeShim` class doesn't wrap an
+ OpenTelemetry class because OpenTelemetry doesn't have the notion of
+ "scope" (though it *does* have similar functionality).
+
+ There are two ways to construct a `ScopeShim` object: using the default
+ initializer and using the :meth:`from_context_manager()` class method.
+
+    Both ways of constructing `ScopeShim` objects are necessary. In some
+    cases the object must be created from an OpenTelemetry
+    `opentelemetry.trace.Span` context manager (as returned by
+    :meth:`opentelemetry.trace.use_span`); there, the only way to retrieve
+    the `opentelemetry.trace.Span` object is to call the context manager's
+    ``__enter__()`` method, which also makes the span active in the
+    OpenTelemetry tracer. In other cases, a `SpanShim` object must be
+    accepted and wrapped directly in a `ScopeShim`. The former is used
+    mainly when the instrumentation code activates a span using
+    :meth:`ScopeManagerShim.activate`. The latter is used mainly when the
+    instrumentation code retrieves the currently-active span using
+    `ScopeManagerShim.active`.
+
+ Args:
+ manager: The :class:`ScopeManagerShim` that created this
+ :class:`ScopeShim`.
+ span: The :class:`SpanShim` this :class:`ScopeShim` controls.
+ span_cm: A Python context manager which yields an OpenTelemetry
+ `opentelemetry.trace.Span` from its ``__enter__()`` method. Used
+ by :meth:`from_context_manager` to store the context manager as
+ an attribute so that it can later be closed by calling its
+ ``__exit__()`` method. Defaults to `None`.
+ """
+
+ def __init__(
+ self, manager: "ScopeManagerShim", span: SpanShim, span_cm=None
+ ):
+ super().__init__(manager, span)
+ self._span_cm = span_cm
+ self._token = attach(set_value(_SHIM_KEY, self))
+
+ # TODO: Change type of `manager` argument to `opentracing.ScopeManager`? We
+ # need to get rid of `manager.tracer` for this.
+ @classmethod
+ def from_context_manager(cls, manager: "ScopeManagerShim", span_cm):
+ """Constructs a :class:`ScopeShim` from an OpenTelemetry
+ `opentelemetry.trace.Span` context
+ manager.
+
+ The method extracts a `opentelemetry.trace.Span` object from the
+ context manager by calling the context manager's ``__enter__()``
+ method. This causes the span to start in the OpenTelemetry tracer.
+
+ Example usage::
+
+ span = otel_tracer.start_span("TestSpan")
+ span_cm = opentelemetry.trace.use_span(span)
+ scope_shim = ScopeShim.from_context_manager(
+ scope_manager_shim,
+ span_cm=span_cm,
+ )
+
+ Args:
+ manager: The :class:`ScopeManagerShim` that created this
+ :class:`ScopeShim`.
+ span_cm: A context manager as returned by
+ :meth:`opentelemetry.trace.use_span`.
+ """
+
+ # pylint: disable=unnecessary-dunder-call
+ otel_span = span_cm.__enter__()
+ span_context = SpanContextShim(otel_span.get_span_context())
+ span = SpanShim(manager.tracer, span_context, otel_span)
+ return cls(manager, span, span_cm)
+
+ def close(self):
+ """Closes the `ScopeShim`. If the `ScopeShim` was created from a
+ context manager, calling this method sets the active span in the
+ OpenTelemetry tracer back to the span which was active before this
+ `ScopeShim` was created. In addition, if the span represented by this
+ `ScopeShim` was activated with the *finish_on_close* argument set to
+ `True`, calling this method will end the span.
+
+ Warning:
+ In the current state of the implementation it is possible to create
+ a `ScopeShim` directly from a `SpanShim`, that is - without using
+ :meth:`from_context_manager()`. For that reason we need to be able
+ to end the span represented by the `ScopeShim` in this case, too.
+ Please note that closing a `ScopeShim` created this way (for
+ example as returned by :meth:`ScopeManagerShim.active`) **always
+ ends the associated span**, regardless of the value passed in
+ *finish_on_close* when activating the span.
+ """
+ self._end_span_scope(None, None, None)
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ """
+ Override the __exit__ method of `opentracing.scope.Scope` so we can report
+ exceptions correctly in opentelemetry specification format.
+ """
+ self._end_span_scope(exc_type, exc_val, exc_tb)
+
+ def _end_span_scope(
+ self,
+ exc_type: Optional[Type[BaseException]],
+ exc_val: Optional[BaseException],
+ exc_tb: Optional[TracebackType],
+ ) -> None:
+ detach(self._token)
+ if self._span_cm is not None:
+ self._span_cm.__exit__(exc_type, exc_val, exc_tb)
+ else:
+ self._span.unwrap().end()
+
+
+class ScopeManagerShim(ScopeManager):
+ """Implements :class:`opentracing.ScopeManager` by setting and getting the
+ active `opentelemetry.trace.Span` in the OpenTelemetry tracer.
+
+ This class keeps a reference to a :class:`TracerShim` as an attribute. This
+ reference is used to communicate with the OpenTelemetry tracer. It is
+ necessary to have a reference to the :class:`TracerShim` rather than the
+ :class:`opentelemetry.trace.Tracer` wrapped by it because when constructing
+ a :class:`SpanShim` we need to pass a reference to a
+ :class:`opentracing.Tracer`.
+
+ Args:
+ tracer: A :class:`TracerShim` to use for setting and getting active
+ span state.
+ """
+
+ def __init__(self, tracer: "TracerShim"):
+        # The only thing the ``__init__()`` method on the base class does is
+ # initialize `self._noop_span` and `self._noop_scope` with no-op
+ # objects. Therefore, it doesn't seem useful to call it.
+ # pylint: disable=super-init-not-called
+ self._tracer = tracer
+
+ def activate(self, span: SpanShim, finish_on_close: bool) -> "ScopeShim":
+ """Activates a :class:`SpanShim` and returns a :class:`ScopeShim` which
+ represents the active span.
+
+ Args:
+ span: A :class:`SpanShim` to be activated.
+ finish_on_close(:obj:`bool`): Determines whether the OpenTelemetry
+ span should be ended when the returned :class:`ScopeShim` is
+ closed.
+
+ Returns:
+ A :class:`ScopeShim` representing the activated span.
+ """
+
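+        # ``use_span`` yields the wrapped OTel span and, with *end_on_exit*
+        # set, ends it when the context manager exits -- which is exactly
+        # OpenTracing's ``finish_on_close`` semantics.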
+ span_cm = use_span(span.unwrap(), end_on_exit=finish_on_close)
+ return ScopeShim.from_context_manager(self, span_cm=span_cm)
+
+ @property
+ def active(self) -> "ScopeShim":
+ """Returns a :class:`ScopeShim` object representing the
+ currently-active span in the OpenTelemetry tracer.
+
+ Returns:
+ A :class:`ScopeShim` representing the active span in the
+ OpenTelemetry tracer, or `None` if no span is currently active.
+
+ Warning:
+ Calling :meth:`ScopeShim.close` on the :class:`ScopeShim` returned
+ by this property **always ends the corresponding span**, regardless
+ of the *finish_on_close* value used when activating the span. This
+ is a limitation of the current implementation of the OpenTracing
+ shim and is likely to be handled in future versions.
+ """
+
+ span = get_current_span()
+ if span.get_span_context() == INVALID_SPAN_CONTEXT:
+ return None
+
+ try:
+ return get_value(_SHIM_KEY)
+ except KeyError:
+ span_context = SpanContextShim(span.get_span_context())
+ wrapped_span = SpanShim(self._tracer, span_context, span)
+ return ScopeShim(self, span=wrapped_span)
+
+ @property
+ def tracer(self) -> "TracerShim":
+ """Returns the :class:`TracerShim` reference used by this
+ :class:`ScopeManagerShim` for setting and getting the active span from
+ the OpenTelemetry tracer.
+
+ Returns:
+ The :class:`TracerShim` used for setting and getting the active
+ span.
+
+ Warning:
+ This property is *not* a part of the OpenTracing API. It is used
+ internally by the current implementation of the OpenTracing shim
+ and will likely be removed in future versions.
+ """
+
+ return self._tracer
+
+
+class TracerShim(Tracer):
+ """Wraps a :class:`opentelemetry.trace.Tracer` object.
+
+ This wrapper class allows using an OpenTelemetry tracer as if it were an
+ OpenTracing tracer. It exposes the same methods as an "ordinary"
+ OpenTracing tracer, and uses OpenTelemetry transparently for performing the
+ actual tracing.
+
+ This class depends on the *OpenTelemetry API*. Therefore, any
+ implementation of a :class:`opentelemetry.trace.Tracer` should work with
+ this class.
+
+ Args:
+ tracer: A :class:`opentelemetry.trace.Tracer` to use for tracing. This
+ tracer will be invoked by the shim to create actual spans.
+ """
+
+ def __init__(self, tracer: OtelTracer):
+ super().__init__(scope_manager=ScopeManagerShim(self))
+ self._otel_tracer = tracer
+ self._supported_formats = (
+ Format.TEXT_MAP,
+ Format.HTTP_HEADERS,
+ )
+
+ def unwrap(self):
+ """Returns the :class:`opentelemetry.trace.Tracer` object that is
+ wrapped by this :class:`TracerShim` and used for actual tracing.
+
+ Returns:
+ The :class:`opentelemetry.trace.Tracer` used for actual tracing.
+ """
+
+ return self._otel_tracer
+
+ def start_active_span(
+ self,
+ operation_name: str,
+ child_of: Union[SpanShim, SpanContextShim] = None,
+ references: list = None,
+ tags: Attributes = None,
+ start_time: float = None,
+ ignore_active_span: bool = False,
+ finish_on_close: bool = True,
+ ) -> "ScopeShim":
+ """Starts and activates a span. In terms of functionality, this method
+ behaves exactly like the same method on a "regular" OpenTracing tracer.
+ See :meth:`opentracing.Tracer.start_active_span` for more details.
+
+ Args:
+ operation_name: Name of the operation represented by
+ the new span from the perspective of the current service.
+ child_of: A :class:`SpanShim` or :class:`SpanContextShim`
+ representing the parent in a "child of" reference. If
+ specified, the *references* parameter must be omitted.
+ references: A list of :class:`opentracing.Reference` objects that
+ identify one or more parents of type :class:`SpanContextShim`.
+ tags: A dictionary of tags.
+ start_time: An explicit start time expressed as the number of
+ seconds since the epoch as returned by :func:`time.time()`.
+ ignore_active_span: Ignore the currently-active span in the
+ OpenTelemetry tracer and make the created span the root span of
+ a new trace.
+ finish_on_close: Determines whether the created span should end
+ automatically when closing the returned :class:`ScopeShim`.
+
+ Returns:
+ A :class:`ScopeShim` that is already activated by the
+ :class:`ScopeManagerShim`.
+ """
+
+ current_span = get_current_span()
+
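+        # If no explicit parent was given and an OTel span is currently
+        # active, adopt it as the parent so the new span joins its trace.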
+ if (
+ child_of is None
+ and current_span.get_span_context() is not INVALID_SPAN_CONTEXT
+ ):
+ child_of = SpanShim(None, None, current_span)
+
+ span = self.start_span(
+ operation_name=operation_name,
+ child_of=child_of,
+ references=references,
+ tags=tags,
+ start_time=start_time,
+ ignore_active_span=ignore_active_span,
+ )
+ return self._scope_manager.activate(span, finish_on_close)
+
+ def start_span(
+ self,
+ operation_name: str = None,
+ child_of: Union[SpanShim, SpanContextShim] = None,
+ references: list = None,
+ tags: Attributes = None,
+ start_time: float = None,
+ ignore_active_span: bool = False,
+ ) -> SpanShim:
+ """Implements the ``start_span()`` method from the base class.
+
+ Starts a span. In terms of functionality, this method behaves exactly
+ like the same method on a "regular" OpenTracing tracer. See
+ :meth:`opentracing.Tracer.start_span` for more details.
+
+ Args:
+ operation_name: Name of the operation represented by the new span
+ from the perspective of the current service.
+ child_of: A :class:`SpanShim` or :class:`SpanContextShim`
+ representing the parent in a "child of" reference. If
+ specified, the *references* parameter must be omitted.
+ references: A list of :class:`opentracing.Reference` objects that
+ identify one or more parents of type :class:`SpanContextShim`.
+ tags: A dictionary of tags.
+ start_time: An explicit start time expressed as the number of
+ seconds since the epoch as returned by :func:`time.time()`.
+ ignore_active_span: Ignore the currently-active span in the
+ OpenTelemetry tracer and make the created span the root span of
+ a new trace.
+
+ Returns:
+ An already-started :class:`SpanShim` instance.
+ """
+
+ # Use active span as parent when no explicit parent is specified.
+ if not ignore_active_span and not child_of:
+ child_of = self.active_span
+
+ # Use the specified parent or the active span if possible. Otherwise,
+ # use a `None` parent, which triggers the creation of a new trace.
+ parent = child_of.unwrap() if child_of else None
+ if isinstance(parent, OtelSpanContext):
+ parent = NonRecordingSpan(parent)
+
+ valid_links = []
+ if references:
+ for ref in references:
+ if ref.referenced_context.unwrap() is not INVALID_SPAN_CONTEXT:
+ valid_links.append(Link(ref.referenced_context.unwrap()))
+
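+        # A reference (e.g. "follows from") with no explicit parent: treat
+        # the first referenced context as the parent as well.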
+ if valid_links and parent is None:
+ parent = NonRecordingSpan(valid_links[0].context)
+
+ parent_span_context = set_span_in_context(parent)
+
+ # The OpenTracing API expects time values to be `float` values which
+ # represent the number of seconds since the epoch. OpenTelemetry
+ # represents time values as nanoseconds since the epoch.
+ start_time_ns = start_time
+ if start_time_ns is not None:
+ start_time_ns = util.time_seconds_to_ns(start_time)
+
+ span = self._otel_tracer.start_span(
+ operation_name,
+ context=parent_span_context,
+ links=valid_links,
+ attributes=tags,
+ start_time=start_time_ns,
+ )
+
+ context = SpanContextShim(span.get_span_context())
+ return SpanShim(self, context, span)
+
+ def inject(self, span_context, format: object, carrier: object):
+ """Injects ``span_context`` into ``carrier``.
+
+ See base class for more details.
+
+ Args:
+ span_context: The ``opentracing.SpanContext`` to inject.
+ format: a Python object instance that represents a given
+ carrier format. `format` may be of any type, and `format`
+            equality is defined by the Python ``==`` operator.
+ carrier: the format-specific carrier object to inject into
+ """
+
+ # pylint: disable=redefined-builtin
+ # This implementation does not perform the injecting by itself but
+ # uses the configured propagators in opentelemetry.propagators.
+ # TODO: Support Format.BINARY once it is supported in
+ # opentelemetry-python.
+
+ if format not in self._supported_formats:
+ raise UnsupportedFormatException
+
+ propagator = get_global_textmap()
+
+ span = span_context.unwrap() if span_context else None
+ if isinstance(span, OtelSpanContext):
+ span = NonRecordingSpan(span)
+
+ ctx = set_span_in_context(span)
+ propagator.inject(carrier, context=ctx)
+
+ def extract(self, format: object, carrier: object):
+ """Returns an ``opentracing.SpanContext`` instance extracted from a
+ ``carrier``.
+
+ See base class for more details.
+
+ Args:
+ format: a Python object instance that represents a given
+ carrier format. ``format`` may be of any type, and ``format``
+            equality is defined by the Python ``==`` operator.
+ carrier: the format-specific carrier object to extract from
+
+ Returns:
+ An ``opentracing.SpanContext`` extracted from ``carrier`` or
+ ``None`` if no such ``SpanContext`` could be found.
+ """
+
+ # pylint: disable=redefined-builtin
+ # This implementation does not perform the extracting by itself but
+ # uses the configured propagators in opentelemetry.propagators.
+ # TODO: Support Format.BINARY once it is supported in
+ # opentelemetry-python.
+ if format not in self._supported_formats:
+ raise UnsupportedFormatException
+
+ propagator = get_global_textmap()
+ ctx = propagator.extract(carrier)
+ span = get_current_span(ctx)
+ if span is not None:
+ otel_context = span.get_span_context()
+ else:
+ otel_context = INVALID_SPAN_CONTEXT
+
+ return SpanContextShim(otel_context)
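+
+
+# Illustrative propagation round trip (a sketch, not part of the shim's API;
+# ``shim`` stands for a ``TracerShim`` instance and the carrier keys are
+# chosen by the configured global propagator):
+#
+#     carrier = {}
+#     shim.inject(span.context, Format.HTTP_HEADERS, carrier)
+#     extracted = shim.extract(Format.HTTP_HEADERS, carrier)
+#     # ``extracted`` is a SpanContextShim wrapping the propagated context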
diff --git a/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/py.typed b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/util.py b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/util.py
new file mode 100644
index 0000000000..eb7d3d9aca
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/util.py
@@ -0,0 +1,54 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# A default event name to be used for logging events when a better event name
+# can't be derived from the event's key-value pairs.
+DEFAULT_EVENT_NAME = "log"
+
+
+def time_seconds_to_ns(time_seconds):
+ """Converts a time value in seconds to a time value in nanoseconds.
+
+ `time_seconds` is a `float` as returned by `time.time()` which represents
+ the number of seconds since the epoch.
+
+ The returned value is an `int` representing the number of nanoseconds since
+ the epoch.
+ """
+
+ return int(time_seconds * 1e9)
+
+
+def time_seconds_from_ns(time_nanoseconds):
+ """Converts a time value in nanoseconds to a time value in seconds.
+
+ `time_nanoseconds` is an `int` representing the number of nanoseconds since
+ the epoch.
+
+ The returned value is a `float` representing the number of seconds since
+ the epoch.
+ """
+
+ return time_nanoseconds / 1e9
+
+
+def event_name_from_kv(key_values):
+ """A helper function which returns an event name from the given dict, or a
+ default event name.
+ """
+
+ if key_values is None or "event" not in key_values:
+ return DEFAULT_EVENT_NAME
+
+ return key_values["event"]
diff --git a/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/version.py b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/version.py
new file mode 100644
index 0000000000..ff896307c3
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/src/opentelemetry/shim/opentracing_shim/version.py
@@ -0,0 +1,15 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.44b0.dev"
diff --git a/shim/opentelemetry-opentracing-shim/tests/__init__.py b/shim/opentelemetry-opentracing-shim/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/test_shim.py b/shim/opentelemetry-opentracing-shim/tests/test_shim.py
new file mode 100644
index 0000000000..99394ad216
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/test_shim.py
@@ -0,0 +1,674 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# TODO: make pylint use 3p opentracing module for type inference
+# pylint:disable=no-member
+
+import time
+import traceback
+from unittest import TestCase
+from unittest.mock import Mock
+
+import opentracing
+
+from opentelemetry import trace
+from opentelemetry.propagate import get_global_textmap, set_global_textmap
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.shim.opentracing_shim import (
+ SpanContextShim,
+ SpanShim,
+ create_tracer,
+ util,
+)
+from opentelemetry.test.mock_textmap import (
+ MockTextMapPropagator,
+ NOOPTextMapPropagator,
+)
+
+
+class TestShim(TestCase):
+ # pylint: disable=too-many-public-methods
+
+ def setUp(self):
+ """Create an OpenTelemetry tracer and a shim before every test case."""
+ trace.set_tracer_provider(TracerProvider())
+ self.shim = create_tracer(trace.get_tracer_provider())
+
+ @classmethod
+ def setUpClass(cls):
+ # Save current propagator to be restored on teardown.
+ cls._previous_propagator = get_global_textmap()
+
+ # Set mock propagator for testing.
+ set_global_textmap(MockTextMapPropagator())
+
+ @classmethod
+ def tearDownClass(cls):
+ # Restore previous propagator.
+ set_global_textmap(cls._previous_propagator)
+
+ def test_shim_type(self):
+ # Verify shim is an OpenTracing tracer.
+ self.assertIsInstance(self.shim, opentracing.Tracer)
+
+ def test_start_active_span(self):
+ """Test span creation and activation using `start_active_span()`."""
+
+ with self.shim.start_active_span("TestSpan0") as scope:
+ # Verify correct type of Scope and Span objects.
+ self.assertIsInstance(scope, opentracing.Scope)
+ self.assertIsInstance(scope.span, opentracing.Span)
+
+ # Verify span is started.
+ self.assertIsNotNone(scope.span.unwrap().start_time)
+
+ # Verify span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ scope.span.context.unwrap(),
+ )
+ # TODO: We can't check for equality of self.shim.active_span and
+ # scope.span because the same OpenTelemetry span is returned inside
+ # different SpanShim objects. A possible solution is described
+ # here:
+ # https://github.com/open-telemetry/opentelemetry-python/issues/161#issuecomment-534136274
+
+ # Verify span has ended.
+ self.assertIsNotNone(scope.span.unwrap().end_time)
+
+ # Verify no span is active.
+ self.assertIsNone(self.shim.active_span)
+
+ def test_start_span(self):
+ """Test span creation using `start_span()`."""
+
+ with self.shim.start_span("TestSpan1") as span:
+ # Verify correct type of Span object.
+ self.assertIsInstance(span, opentracing.Span)
+
+ # Verify span is started.
+ self.assertIsNotNone(span.unwrap().start_time)
+
+ # Verify `start_span()` does NOT make the span active.
+ self.assertIsNone(self.shim.active_span)
+
+ # Verify span has ended.
+ self.assertIsNotNone(span.unwrap().end_time)
+
+ def test_start_span_no_contextmanager(self):
+ """Test `start_span()` without a `with` statement."""
+
+ span = self.shim.start_span("TestSpan2")
+
+ # Verify span is started.
+ self.assertIsNotNone(span.unwrap().start_time)
+
+ # Verify `start_span()` does NOT make the span active.
+ self.assertIsNone(self.shim.active_span)
+
+ span.finish()
+
+ def test_explicit_span_finish(self):
+ """Test `finish()` method on `Span` objects."""
+
+ span = self.shim.start_span("TestSpan3")
+
+ # Verify span hasn't ended.
+ self.assertIsNone(span.unwrap().end_time)
+
+ span.finish()
+
+ # Verify span has ended.
+ self.assertIsNotNone(span.unwrap().end_time)
+
+ def test_explicit_start_time(self):
+ """Test `start_time` argument."""
+
+ now = time.time()
+ with self.shim.start_active_span("TestSpan4", start_time=now) as scope:
+ result = util.time_seconds_from_ns(scope.span.unwrap().start_time)
+ # Tolerate inaccuracies of less than a microsecond. See Note:
+ # https://open-telemetry.github.io/opentelemetry-python/opentelemetry.shim.opentracing_shim.html
+ # TODO: This seems to work consistently, but we should find out the
+ # biggest possible loss of precision.
+ self.assertAlmostEqual(result, now, places=6)
+
+ def test_explicit_end_time(self):
+ """Test `end_time` argument of `finish()` method."""
+
+ span = self.shim.start_span("TestSpan5")
+ now = time.time()
+ span.finish(now)
+
+ end_time = util.time_seconds_from_ns(span.unwrap().end_time)
+ # Tolerate inaccuracies of less than a microsecond. See Note:
+ # https://open-telemetry.github.io/opentelemetry-python/opentelemetry.shim.opentracing_shim.html
+ # TODO: This seems to work consistently, but we should find out the
+ # biggest possible loss of precision.
+ self.assertAlmostEqual(end_time, now, places=6)
+
+ def test_explicit_span_activation(self):
+ """Test manual activation and deactivation of a span."""
+
+ span = self.shim.start_span("TestSpan6")
+
+ # Verify no span is currently active.
+ self.assertIsNone(self.shim.active_span)
+
+ with self.shim.scope_manager.activate(
+ span, finish_on_close=True
+ ) as scope:
+ # Verify span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ scope.span.context.unwrap(),
+ )
+
+ # Verify no span is active.
+ self.assertIsNone(self.shim.active_span)
+
+ def test_start_active_span_finish_on_close(self):
+ """Test `finish_on_close` argument of `start_active_span()`."""
+
+ with self.shim.start_active_span(
+ "TestSpan7", finish_on_close=True
+ ) as scope:
+ # Verify span hasn't ended.
+ self.assertIsNone(scope.span.unwrap().end_time)
+
+ # Verify span has ended.
+ self.assertIsNotNone(scope.span.unwrap().end_time)
+
+ with self.shim.start_active_span(
+ "TestSpan8", finish_on_close=False
+ ) as scope:
+ # Verify span hasn't ended.
+ self.assertIsNone(scope.span.unwrap().end_time)
+
+ # Verify span hasn't ended after scope had been closed.
+ self.assertIsNone(scope.span.unwrap().end_time)
+
+ scope.span.finish()
+
+ def test_activate_finish_on_close(self):
+ """Test `finish_on_close` argument of `activate()`."""
+
+ span = self.shim.start_span("TestSpan9")
+
+ with self.shim.scope_manager.activate(
+ span, finish_on_close=True
+ ) as scope:
+ # Verify span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ scope.span.context.unwrap(),
+ )
+
+ # Verify span has ended.
+ self.assertIsNotNone(span.unwrap().end_time)
+
+ span = self.shim.start_span("TestSpan10")
+
+ with self.shim.scope_manager.activate(
+ span, finish_on_close=False
+ ) as scope:
+ # Verify span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ scope.span.context.unwrap(),
+ )
+
+ # Verify span hasn't ended.
+ self.assertIsNone(span.unwrap().end_time)
+
+ span.finish()
+
+ def test_explicit_scope_close(self):
+ """Test `close()` method on `ScopeShim`."""
+
+ with self.shim.start_active_span("ParentSpan") as parent:
+ # Verify parent span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ parent.span.context.unwrap(),
+ )
+
+ child = self.shim.start_active_span("ChildSpan")
+
+ # Verify child span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ child.span.context.unwrap(),
+ )
+
+ # Verify child span hasn't ended.
+ self.assertIsNone(child.span.unwrap().end_time)
+
+ child.close()
+
+ # Verify child span has ended.
+ self.assertIsNotNone(child.span.unwrap().end_time)
+
+ # Verify parent span becomes active again.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ parent.span.context.unwrap(),
+ )
+
+ def test_parent_child_implicit(self):
+ """Test parent-child relationship and activation/deactivation of spans
+ without specifying the parent span upon creation.
+ """
+
+ with self.shim.start_active_span("ParentSpan") as parent:
+ # Verify parent span is the active span.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ parent.span.context.unwrap(),
+ )
+
+ with self.shim.start_active_span("ChildSpan") as child:
+ # Verify child span is the active span.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ child.span.context.unwrap(),
+ )
+
+ # Verify parent-child relationship.
+ parent_trace_id = (
+ parent.span.unwrap().get_span_context().trace_id
+ )
+ child_trace_id = (
+ child.span.unwrap().get_span_context().trace_id
+ )
+
+ self.assertEqual(parent_trace_id, child_trace_id)
+ self.assertEqual(
+ child.span.unwrap().parent,
+ parent.span.unwrap().get_span_context(),
+ )
+
+ # Verify parent span becomes the active span again.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ parent.span.context.unwrap()
+ # TODO: Check equality of the spans themselves rather than
+ # their context once the SpanShim reconstruction problem has
+ # been addressed (see previous TODO).
+ )
+
+ # Verify there is no active span.
+ self.assertIsNone(self.shim.active_span)
+
+ def test_parent_child_explicit_span(self):
+ """Test parent-child relationship of spans when specifying a `Span`
+ object as a parent upon creation.
+ """
+
+ with self.shim.start_span("ParentSpan") as parent:
+ with self.shim.start_active_span(
+ "ChildSpan", child_of=parent
+ ) as child:
+ parent_trace_id = parent.unwrap().get_span_context().trace_id
+ child_trace_id = (
+ child.span.unwrap().get_span_context().trace_id
+ )
+
+ self.assertEqual(child_trace_id, parent_trace_id)
+ self.assertEqual(
+ child.span.unwrap().parent,
+ parent.unwrap().get_span_context(),
+ )
+
+ with self.shim.start_span("ParentSpan") as parent:
+ child = self.shim.start_span("ChildSpan", child_of=parent)
+
+ parent_trace_id = parent.unwrap().get_span_context().trace_id
+ child_trace_id = child.unwrap().get_span_context().trace_id
+
+ self.assertEqual(child_trace_id, parent_trace_id)
+ self.assertEqual(
+ child.unwrap().parent, parent.unwrap().get_span_context()
+ )
+
+ child.finish()
+
+ def test_parent_child_explicit_span_context(self):
+ """Test parent-child relationship of spans when specifying a
+ `SpanContext` object as a parent upon creation.
+ """
+
+ with self.shim.start_span("ParentSpan") as parent:
+ with self.shim.start_active_span(
+ "ChildSpan", child_of=parent.context
+ ) as child:
+ parent_trace_id = parent.unwrap().get_span_context().trace_id
+ child_trace_id = (
+ child.span.unwrap().get_span_context().trace_id
+ )
+
+ self.assertEqual(child_trace_id, parent_trace_id)
+ self.assertEqual(
+ child.span.unwrap().parent, parent.context.unwrap()
+ )
+
+ with self.shim.start_span("ParentSpan") as parent:
+ with self.shim.start_span(
+ "SpanWithContextParent", child_of=parent.context
+ ) as child:
+ parent_trace_id = parent.unwrap().get_span_context().trace_id
+ child_trace_id = child.unwrap().get_span_context().trace_id
+
+ self.assertEqual(child_trace_id, parent_trace_id)
+ self.assertEqual(
+ child.unwrap().parent, parent.context.unwrap()
+ )
+
+ def test_references(self):
+ """Test span creation using the `references` argument."""
+
+ with self.shim.start_span("ParentSpan") as parent:
+ ref = opentracing.child_of(parent.context)
+
+ with self.shim.start_active_span(
+ "ChildSpan", references=[ref]
+ ) as child:
+ self.assertEqual(
+ child.span.unwrap().links[0].context,
+ parent.context.unwrap(),
+ )
+
+ def test_follows_from_references(self):
+ """Test span creation using the `references` argument with a follows from relationship."""
+
+ with self.shim.start_span("ParentSpan") as parent:
+ ref = opentracing.follows_from(parent.context)
+
+ with self.shim.start_active_span(
+ "FollowingSpan", references=[ref]
+ ) as child:
+ self.assertEqual(
+ child.span.unwrap().links[0].context,
+ parent.context.unwrap(),
+ )
+ self.assertEqual(
+ child.span.unwrap().parent,
+ parent.context.unwrap(),
+ )
+
+ def test_set_operation_name(self):
+ """Test `set_operation_name()` method."""
+
+ with self.shim.start_active_span("TestName") as scope:
+ self.assertEqual(scope.span.unwrap().name, "TestName")
+
+ scope.span.set_operation_name("NewName")
+ self.assertEqual(scope.span.unwrap().name, "NewName")
+
+ def test_tags(self):
+ """Test tags behavior using the `tags` argument and the `set_tags()`
+ method.
+ """
+
+ tags = {"foo": "bar"}
+ with self.shim.start_active_span("TestSetTag", tags=tags) as scope:
+ scope.span.set_tag("baz", "qux")
+
+ self.assertEqual(scope.span.unwrap().attributes["foo"], "bar")
+ self.assertEqual(scope.span.unwrap().attributes["baz"], "qux")
+
+ def test_span_tracer(self):
+ """Test the `tracer` property on `Span` objects."""
+
+ with self.shim.start_active_span("TestSpan11") as scope:
+ self.assertEqual(scope.span.tracer, self.shim)
+
+ def test_log_kv(self):
+ """Test the `log_kv()` method on `Span` objects."""
+
+ with self.shim.start_span("TestSpan12") as span:
+ span.log_kv({"foo": "bar"})
+ self.assertEqual(span.unwrap().events[0].attributes["foo"], "bar")
+ # Verify timestamp was generated automatically.
+ self.assertIsNotNone(span.unwrap().events[0].timestamp)
+
+ # Test explicit timestamp.
+ now = time.time()
+ span.log_kv({"foo": "bar"}, now)
+ result = util.time_seconds_from_ns(
+ span.unwrap().events[1].timestamp
+ )
+ self.assertEqual(span.unwrap().events[1].attributes["foo"], "bar")
+ # Tolerate inaccuracies of less than a microsecond. See Note:
+ # https://open-telemetry.github.io/opentelemetry-python/shim/opentracing_shim/opentracing_shim.html
+ # TODO: This seems to work consistently, but we should find out the
+ # biggest possible loss of precision.
+ self.assertAlmostEqual(result, now, places=6)
+
+ def test_log(self):
+ """Test the deprecated `log` method on `Span` objects."""
+
+ with self.shim.start_span("TestSpan13") as span:
+ with self.assertWarns(DeprecationWarning):
+ span.log(event="foo", payload="bar")
+
+ self.assertEqual(span.unwrap().events[0].attributes["event"], "foo")
+ self.assertEqual(span.unwrap().events[0].attributes["payload"], "bar")
+ self.assertIsNotNone(span.unwrap().events[0].timestamp)
+
+ def test_log_event(self):
+ """Test the deprecated `log_event` method on `Span` objects."""
+
+ with self.shim.start_span("TestSpan14") as span:
+ with self.assertWarns(DeprecationWarning):
+ span.log_event("foo", "bar")
+
+ self.assertEqual(span.unwrap().events[0].attributes["event"], "foo")
+ self.assertEqual(span.unwrap().events[0].attributes["payload"], "bar")
+ self.assertIsNotNone(span.unwrap().events[0].timestamp)
+
+ def test_span_context(self):
+ """Test construction of `SpanContextShim` objects."""
+
+ otel_context = trace.SpanContext(1234, 5678, is_remote=False)
+ context = SpanContextShim(otel_context)
+
+ self.assertIsInstance(context, opentracing.SpanContext)
+ self.assertEqual(context.unwrap().trace_id, 1234)
+ self.assertEqual(context.unwrap().span_id, 5678)
+
+ def test_span_on_error(self):
+ """Verify error tag and logs are created on span when an exception is
+ raised.
+ """
+
+ # Raise an exception while a span is active.
+ with self.assertRaises(Exception) as exc_ctx:
+ with self.shim.start_active_span("TestName") as scope:
+ raise Exception("bad thing")
+
+ ex = exc_ctx.exception
+ expected_stack = "".join(
+ traceback.format_exception(type(ex), value=ex, tb=ex.__traceback__)
+ )
+ # Verify exception details have been added to span.
+ exc_event = scope.span.unwrap().events[0]
+
+ self.assertEqual(exc_event.name, "exception")
+ self.assertEqual(
+ exc_event.attributes["exception.message"], "bad thing"
+ )
+ self.assertEqual(
+ exc_event.attributes["exception.type"], Exception.__name__
+ )
+ # We cannot get the whole stacktrace, so just assert that the
+ # exception part is contained in it.
+ self.assertIn(
+ expected_stack, exc_event.attributes["exception.stacktrace"]
+ )
+
+ def test_inject_http_headers(self):
+ """Test `inject()` method for Format.HTTP_HEADERS."""
+
+ otel_context = trace.SpanContext(
+ trace_id=1220, span_id=7478, is_remote=False
+ )
+ context = SpanContextShim(otel_context)
+
+ headers = {}
+ self.shim.inject(context, opentracing.Format.HTTP_HEADERS, headers)
+ self.assertEqual(
+ headers[MockTextMapPropagator.TRACE_ID_KEY], str(1220)
+ )
+ self.assertEqual(headers[MockTextMapPropagator.SPAN_ID_KEY], str(7478))
+
+ def test_inject_text_map(self):
+ """Test `inject()` method for Format.TEXT_MAP."""
+
+ otel_context = trace.SpanContext(
+ trace_id=1220, span_id=7478, is_remote=False
+ )
+ context = SpanContextShim(otel_context)
+
+ # Verify Format.TEXT_MAP
+ text_map = {}
+ self.shim.inject(context, opentracing.Format.TEXT_MAP, text_map)
+ self.assertEqual(
+ text_map[MockTextMapPropagator.TRACE_ID_KEY], str(1220)
+ )
+ self.assertEqual(
+ text_map[MockTextMapPropagator.SPAN_ID_KEY], str(7478)
+ )
+
+ def test_inject_binary(self):
+ """Test `inject()` method for Format.BINARY."""
+
+ otel_context = trace.SpanContext(
+ trace_id=1220, span_id=7478, is_remote=False
+ )
+ context = SpanContextShim(otel_context)
+
+ # Verify exception for non supported binary format.
+ with self.assertRaises(opentracing.UnsupportedFormatException):
+ self.shim.inject(context, opentracing.Format.BINARY, bytearray())
+
+ def test_extract_http_headers(self):
+ """Test `extract()` method for Format.HTTP_HEADERS."""
+
+ carrier = {
+ MockTextMapPropagator.TRACE_ID_KEY: 1220,
+ MockTextMapPropagator.SPAN_ID_KEY: 7478,
+ }
+
+ ctx = self.shim.extract(opentracing.Format.HTTP_HEADERS, carrier)
+ self.assertEqual(ctx.unwrap().trace_id, 1220)
+ self.assertEqual(ctx.unwrap().span_id, 7478)
+
+ def test_extract_empty_context_returns_invalid_context(self):
+ """In the case where the propagator cannot extract a
+ SpanContext, extract should return an invalid span context.
+ """
+ _old_propagator = get_global_textmap()
+ set_global_textmap(NOOPTextMapPropagator())
+ try:
+ carrier = {}
+
+ ctx = self.shim.extract(opentracing.Format.HTTP_HEADERS, carrier)
+ self.assertEqual(ctx.unwrap(), trace.INVALID_SPAN_CONTEXT)
+ finally:
+ set_global_textmap(_old_propagator)
+
+ def test_extract_text_map(self):
+ """Test `extract()` method for Format.TEXT_MAP."""
+
+ carrier = {
+ MockTextMapPropagator.TRACE_ID_KEY: 1220,
+ MockTextMapPropagator.SPAN_ID_KEY: 7478,
+ }
+
+ ctx = self.shim.extract(opentracing.Format.TEXT_MAP, carrier)
+ self.assertEqual(ctx.unwrap().trace_id, 1220)
+ self.assertEqual(ctx.unwrap().span_id, 7478)
+
+ def test_extract_binary(self):
+ """Test `extract()` method for Format.BINARY."""
+
+ # Verify exception for non supported binary format.
+ with self.assertRaises(opentracing.UnsupportedFormatException):
+ self.shim.extract(opentracing.Format.BINARY, bytearray())
+
+ def test_baggage(self):
+
+ span_context_shim = SpanContextShim(
+ trace.SpanContext(1234, 5678, is_remote=False)
+ )
+
+ baggage = span_context_shim.baggage
+
+ with self.assertRaises(ValueError):
+ baggage[1] = 3
+
+ span_shim = SpanShim(Mock(), span_context_shim, Mock())
+
+ span_shim.set_baggage_item(1, 2)
+
+ self.assertEqual(span_shim.get_baggage_item(1), 2)
+
+ def test_active(self):
+ """Test that the active property and start_active_span return the same
+ object"""
+
+ # Verify no span is currently active.
+ self.assertIsNone(self.shim.active_span)
+
+ with self.shim.start_active_span("TestSpan15") as scope:
+ # Verify span is active.
+ self.assertEqual(
+ self.shim.active_span.context.unwrap(),
+ scope.span.context.unwrap(),
+ )
+
+ self.assertIs(self.shim.scope_manager.active, scope)
+
+ # Verify no span is active.
+ self.assertIsNone(self.shim.active_span)
+
+ def test_mixed_mode(self):
+ """Test that span parent-child relationship is kept between
+ OpenTelemetry and the OpenTracing shim"""
+
+ span_shim = self.shim.start_span("TestSpan16")
+
+ with self.shim.scope_manager.activate(span_shim, finish_on_close=True):
+
+ with (
+ TracerProvider()
+ .get_tracer(__name__)
+ .start_as_current_span("abc")
+ ) as opentelemetry_span:
+
+ self.assertIs(
+ span_shim.unwrap().context,
+ opentelemetry_span.parent,
+ )
+
+ with (
+ TracerProvider().get_tracer(__name__).start_as_current_span("abc")
+ ) as opentelemetry_span:
+
+ with self.shim.start_active_span("TestSpan17") as scope:
+
+ self.assertIs(
+ scope.span.unwrap().parent,
+ opentelemetry_span.context,
+ )
diff --git a/shim/opentelemetry-opentracing-shim/tests/test_util.py b/shim/opentelemetry-opentracing-shim/tests/test_util.py
new file mode 100644
index 0000000000..c8f7571e77
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/test_util.py
@@ -0,0 +1,70 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from time import time, time_ns
+from unittest import TestCase
+
+from opentelemetry.shim.opentracing_shim.util import (
+ DEFAULT_EVENT_NAME,
+ event_name_from_kv,
+ time_seconds_from_ns,
+ time_seconds_to_ns,
+)
+
+
+class TestUtil(TestCase):
+ def test_event_name_from_kv(self):
+ # Test basic behavior.
+ event_name = "send HTTP request"
+ res = event_name_from_kv({"event": event_name, "foo": "bar"})
+ self.assertEqual(res, event_name)
+
+ # Test None.
+ res = event_name_from_kv(None)
+ self.assertEqual(res, DEFAULT_EVENT_NAME)
+
+ # Test empty dict.
+ res = event_name_from_kv({})
+ self.assertEqual(res, DEFAULT_EVENT_NAME)
+
+ # Test missing `event` field.
+ res = event_name_from_kv({"foo": "bar"})
+ self.assertEqual(res, DEFAULT_EVENT_NAME)
+
+ def test_time_seconds_to_ns(self):
+ time_seconds = time()
+ result = time_seconds_to_ns(time_seconds)
+
+ self.assertEqual(result, int(time_seconds * 1e9))
+
+ def test_time_seconds_from_ns(self):
+ time_nanoseconds = time_ns()
+ result = time_seconds_from_ns(time_nanoseconds)
+
+ self.assertEqual(result, time_nanoseconds / 1e9)
+
+ def test_time_conversion_precision(self):
+ """Verify time conversion from seconds to nanoseconds and vice versa is
+ accurate enough.
+ """
+
+ time_seconds = 1570484241.9501917
+ time_nanoseconds = time_seconds_to_ns(time_seconds)
+ result = time_seconds_from_ns(time_nanoseconds)
+
+ # Tolerate inaccuracies of less than a microsecond.
+ # TODO: Put a link to an explanation in the docs.
+ # TODO: This seems to work consistently, but we should find out the
+ # biggest possible loss of precision.
+ self.assertAlmostEqual(result, time_seconds, places=6)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/README.rst
new file mode 100644
index 0000000000..ba7119cd68
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/README.rst
@@ -0,0 +1,47 @@
+
+Testbed suite for the OpenTelemetry-OpenTracing Bridge
+======================================================
+
+A testbed suite designed to exercise the shim API under common concurrency patterns.
+
+Build and test.
+---------------
+
+.. code-block:: sh
+
+ tox -e py37-test-opentracing-shim
+
+Alternatively, due to the organization of the suite, it's possible to run the tests directly using ``py.test``:
+
+.. code-block:: sh
+
+ py.test -s testbed/test_multiple_callbacks/test_threads.py
+
+Tested frameworks
+-----------------
+
+Currently the examples cover ``threading`` and ``asyncio``.
+
+List of patterns
+----------------
+
+
+* `Active Span replacement <test_active_span_replacement>`_ - Start an isolated task and query for its results in another task/thread.
+* `Client-Server <test_client_server>`_ - Typical client-server example.
+* `Common Request Handler <test_common_request_handler>`_ - One request handler for all requests.
+* `Late Span finish <test_late_span_finish>`_ - Late parent ``Span`` finish.
+* `Multiple callbacks <test_multiple_callbacks>`_ - Multiple callbacks spawned at the same time.
+* `Nested callbacks <test_nested_callbacks>`_ - One callback at a time, defined in a pipeline fashion.
+* `Subtask Span propagation <test_subtask_span_propagation>`_ - ``Span`` propagation for subtasks/coroutines.
+
+Adding new patterns
+-------------------
+
+A new pattern is composed of a directory under *testbed* with the *test_* prefix, containing the files for each platform, also with the *test_* prefix:
+
+.. code-block::
+
+ testbed/
+ test_new_pattern/
+ test_threads.py
+ test_asyncio.py
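+
+Each platform file follows the same shape. A minimal ``threading`` skeleton,
+assuming the shared ``MockTracer`` and ``OpenTelemetryTestCase`` helpers from
+this suite:
+
+.. code-block:: python
+
+    # pylint: disable=import-error
+    from ..otel_ot_shim_tracer import MockTracer
+    from ..testcase import OpenTelemetryTestCase
+
+
+    class TestThreads(OpenTelemetryTestCase):
+        def setUp(self):  # pylint: disable=invalid-name
+            self.tracer = MockTracer()
+
+        def test_main(self):
+            # Exercise the new pattern here and assert on the finished spans.
+            with self.tracer.start_active_span("parent"):
+                pass
+            self.assertEqual(len(self.tracer.finished_spans()), 1)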
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/otel_ot_shim_tracer.py b/shim/opentelemetry-opentracing-shim/tests/testbed/otel_ot_shim_tracer.py
new file mode 100644
index 0000000000..6c0a904571
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/otel_ot_shim_tracer.py
@@ -0,0 +1,26 @@
+import opentelemetry.shim.opentracing_shim as opentracingshim
+from opentelemetry.sdk import trace
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+
+
+class MockTracer(opentracingshim.TracerShim):
+ """Wrapper of `opentracingshim.TracerShim`.
+
+ MockTracer extends `opentracingshim.TracerShim` by adding an in-memory
+ span exporter that can be used to get the list of finished spans."""
+
+ def __init__(self):
+ tracer_provider = trace.TracerProvider()
+ oteltracer = tracer_provider.get_tracer(__name__)
+ super().__init__(oteltracer)
+ exporter = InMemorySpanExporter()
+ span_processor = SimpleSpanProcessor(exporter)
+ tracer_provider.add_span_processor(span_processor)
+
+ self.exporter = exporter
+
+ def finished_spans(self):
+ return self.exporter.get_finished_spans()
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/README.rst
new file mode 100644
index 0000000000..6bb4d2f35c
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/README.rst
@@ -0,0 +1,20 @@
+
+Active Span replacement example.
+================================
+
+This example shows a ``Span`` being created and then passed to an asynchronous task, which will temporarily activate it to finish its processing, and then restore the previously active ``Span``.
+
+``threading`` implementation:
+
+.. code-block:: python
+
+ # Create a new Span for this task
+ with self.tracer.start_active_span("task"):
+
+ with self.tracer.scope_manager.activate(span, True):
+ # Simulate work strictly related to the initial Span
+ pass
+
+ # Use the task span as parent of a new subtask
+ with self.tracer.start_active_span("subtask"):
+ pass
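+
+The ``asyncio`` implementation is identical in shape; it is scheduled as a
+task on the event loop (see ``test_asyncio.py`` in this directory):
+
+.. code-block:: python
+
+    async def task(self, span):
+        # Create a new Span for this task
+        with self.tracer.start_active_span("task"):
+
+            with self.tracer.scope_manager.activate(span, True):
+                # Simulate work strictly related to the initial Span
+                pass
+
+            # Use the task span as parent of a new subtask
+            with self.tracer.start_active_span("subtask"):
+                pass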
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_asyncio.py
new file mode 100644
index 0000000000..0419ab44a2
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_asyncio.py
@@ -0,0 +1,67 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import stop_loop_when
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ # Start an isolated task and query for its result -and finish it-
+ # in another task/thread
+ span = self.tracer.start_span("initial")
+ self.submit_another_task(span)
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) >= 3,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+ self.assertNamesEqual(spans, ["initial", "subtask", "task"])
+
+ # task/subtask are part of the same trace,
+ # and subtask is a child of task
+ self.assertSameTrace(spans[1], spans[2])
+ self.assertIsChildOf(spans[1], spans[2])
+
+ # initial task is not related in any way to those two tasks
+ self.assertNotSameTrace(spans[0], spans[1])
+ self.assertEqual(spans[0].parent, None)
+
+ async def task(self, span):
+ # Create a new Span for this task
+ with self.tracer.start_active_span("task"):
+
+ with self.tracer.scope_manager.activate(span, True):
+ # Simulate work strictly related to the initial Span
+ pass
+
+ # Use the task span as parent of a new subtask
+ with self.tracer.start_active_span("subtask"):
+ pass
+
+ def submit_another_task(self, span):
+ self.loop.create_task(self.task(span))
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_threads.py
new file mode 100644
index 0000000000..4e76c87a03
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_active_span_replacement/test_threads.py
@@ -0,0 +1,63 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ # use max_workers=3 as a general example even if only one would suffice
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def test_main(self):
+ # Start an isolated task and query for its result -and finish it-
+ # in another task/thread
+ span = self.tracer.start_span("initial")
+ self.submit_another_task(span)
+
+ self.executor.shutdown(True)
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+ self.assertNamesEqual(spans, ["initial", "subtask", "task"])
+
+ # task/subtask are part of the same trace,
+ # and subtask is a child of task
+ self.assertSameTrace(spans[1], spans[2])
+ self.assertIsChildOf(spans[1], spans[2])
+
+ # initial task is not related in any way to those two tasks
+ self.assertNotSameTrace(spans[0], spans[1])
+ self.assertEqual(spans[0].parent, None)
+ self.assertEqual(spans[2].parent, None)
+
+ def task(self, span):
+ # Create a new Span for this task
+ with self.tracer.start_active_span("task"):
+
+ with self.tracer.scope_manager.activate(span, True):
+ # Simulate work strictly related to the initial Span
+ pass
+
+ # Use the task span as parent of a new subtask
+ with self.tracer.start_active_span("subtask"):
+ pass
+
+ def submit_another_task(self, span):
+ self.executor.submit(self.task, span)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/README.rst
new file mode 100644
index 0000000000..730fd9295d
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/README.rst
@@ -0,0 +1,19 @@
+
+Client-Server example.
+======================
+
+This example shows a ``Span`` created by a ``Client``, which will send a ``Message`` / ``SpanContext`` to a ``Server``, which will in turn extract such context and use it as parent of a new (server-side) ``Span``.
+
+``Client.send()`` is used to send messages and inject the ``SpanContext`` using the ``TEXT_MAP`` format, and ``Server.process()`` will process received messages and will extract the context used as parent.
+
+.. code-block:: python
+
+ def send(self):
+ with self.tracer.start_active_span("send") as scope:
+ scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ message = {}
+ self.tracer.inject(scope.span.context,
+ opentracing.Format.TEXT_MAP,
+ message)
+ self.queue.put(message)
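+
+On the receiving side, ``Server.process()`` extracts the propagated context
+and uses it as the parent of the server-side ``Span``:
+
+.. code-block:: python
+
+    def process(self, message):
+        ctx = self.tracer.extract(opentracing.Format.TEXT_MAP, message)
+        with self.tracer.start_active_span("receive", child_of=ctx) as scope:
+            scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_SERVER)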
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_asyncio.py
new file mode 100644
index 0000000000..adf99e76b2
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_asyncio.py
@@ -0,0 +1,92 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+import opentracing
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_logger, get_one_by_tag, stop_loop_when
+
+logger = get_logger(__name__)
+
+
+class Server:
+ def __init__(self, *args, **kwargs):
+ tracer = kwargs.pop("tracer")
+ queue = kwargs.pop("queue")
+ super().__init__(*args, **kwargs)
+
+ self.tracer = tracer
+ self.queue = queue
+
+ async def run(self):
+ value = await self.queue.get()
+ self.process(value)
+
+ def process(self, message):
+ logger.info("Processing message in server")
+
+ ctx = self.tracer.extract(opentracing.Format.TEXT_MAP, message)
+ with self.tracer.start_active_span("receive", child_of=ctx) as scope:
+ scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_SERVER)
+
+
+class Client:
+ def __init__(self, tracer, queue):
+ self.tracer = tracer
+ self.queue = queue
+
+ async def send(self):
+ with self.tracer.start_active_span("send") as scope:
+ scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ message = {}
+ self.tracer.inject(
+ scope.span.context, opentracing.Format.TEXT_MAP, message
+ )
+ await self.queue.put(message)
+
+ logger.info("Sent message from client")
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.queue = asyncio.Queue()
+ self.loop = asyncio.get_event_loop()
+ self.server = Server(tracer=self.tracer, queue=self.queue)
+
+ def test(self):
+ client = Client(self.tracer, self.queue)
+ self.loop.create_task(self.server.run())
+ self.loop.create_task(client.send())
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) >= 2,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ spans = self.tracer.finished_spans()
+ self.assertIsNotNone(
+ get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_SERVER)
+ )
+ self.assertIsNotNone(
+ get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+ )
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_threads.py
new file mode 100644
index 0000000000..6fa5974d79
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_client_server/test_threads.py
@@ -0,0 +1,88 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from queue import Queue
+from threading import Thread
+
+import opentracing
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import await_until, get_logger, get_one_by_tag
+
+logger = get_logger(__name__)
+
+
+class Server(Thread):
+ def __init__(self, *args, **kwargs):
+ tracer = kwargs.pop("tracer")
+ queue = kwargs.pop("queue")
+ super().__init__(*args, **kwargs)
+
+ self.daemon = True
+ self.tracer = tracer
+ self.queue = queue
+
+ def run(self):
+ value = self.queue.get()
+ self.process(value)
+
+ def process(self, message):
+ logger.info("Processing message in server")
+
+ ctx = self.tracer.extract(opentracing.Format.TEXT_MAP, message)
+ with self.tracer.start_active_span("receive", child_of=ctx) as scope:
+ scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_SERVER)
+
+
+class Client:
+ def __init__(self, tracer, queue):
+ self.tracer = tracer
+ self.queue = queue
+
+ def send(self):
+ with self.tracer.start_active_span("send") as scope:
+ scope.span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ message = {}
+ self.tracer.inject(
+ scope.span.context, opentracing.Format.TEXT_MAP, message
+ )
+ self.queue.put(message)
+
+ logger.info("Sent message from client")
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.queue = Queue()
+ self.server = Server(tracer=self.tracer, queue=self.queue)
+ self.server.start()
+
+ def test(self):
+ client = Client(self.tracer, self.queue)
+ client.send()
+
+ await_until(lambda: len(self.tracer.finished_spans()) >= 2)
+
+ spans = self.tracer.finished_spans()
+ self.assertIsNotNone(
+ get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_SERVER)
+ )
+ self.assertIsNotNone(
+ get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+ )
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/README.rst
new file mode 100644
index 0000000000..1bcda539bb
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/README.rst
@@ -0,0 +1,23 @@
+
+Common Request Handler example.
+===============================
+
+This example shows a ``Span`` used with ``RequestHandler``, which is used as a middleware (as in web frameworks) to manage a new ``Span`` per operation through its ``before_request()`` / ``after_request()`` methods.
+
+Implementation details:
+
+
+* For ``threading``, no active ``Span`` is consumed as the tasks may be run concurrently on different threads, and an explicit ``SpanContext`` has to be saved to be used as the parent.
+
+RequestHandler implementation:
+
+.. code-block:: python
+
+ def before_request(self, request, request_context):
+
+ # If we should ignore the active Span, use any passed SpanContext
+ # as the parent. Else, use the active one.
+ span = self.tracer.start_span("send",
+ child_of=self.context,
+ ignore_active_span=True)
+
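+The matching ``after_request()`` hook retrieves the ``Span`` saved in the
+request context and finishes it:
+
+.. code-block:: python
+
+    def after_request(self, request, request_context):
+        span = request_context.get("span")
+        if span is not None:
+            span.finish()
+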
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/request_handler.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/request_handler.py
new file mode 100644
index 0000000000..b48a5dbc68
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/request_handler.py
@@ -0,0 +1,51 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..utils import get_logger
+
+logger = get_logger(__name__)
+
+
+class RequestHandler:
+ def __init__(self, tracer, context=None, ignore_active_span=True):
+ self.tracer = tracer
+ self.context = context
+ self.ignore_active_span = ignore_active_span
+
+ def before_request(self, request, request_context):
+ logger.info("Before request %s", request)
+
+ # If we should ignore the active Span, use any passed SpanContext
+ # as the parent. Else, use the active one.
+ if self.ignore_active_span:
+ span = self.tracer.start_span(
+ "send", child_of=self.context, ignore_active_span=True
+ )
+ else:
+ span = self.tracer.start_span("send")
+
+ span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ request_context["span"] = span
+
+ def after_request(self, request, request_context):
+ # pylint: disable=no-self-use
+ logger.info("After request %s", request)
+
+ span = request_context.get("span")
+ if span is not None:
+ span.finish()
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_asyncio.py
new file mode 100644
index 0000000000..58970a223c
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_asyncio.py
@@ -0,0 +1,150 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_logger, get_one_by_operation_name, stop_loop_when
+from .request_handler import RequestHandler
+
+logger = get_logger(__name__)
+
+
+class Client:
+ def __init__(self, request_handler, loop):
+ self.request_handler = request_handler
+ self.loop = loop
+
+ async def send_task(self, message):
+ request_context = {}
+
+ async def before_handler():
+ self.request_handler.before_request(message, request_context)
+
+ async def after_handler():
+ self.request_handler.after_request(message, request_context)
+
+ await before_handler()
+ await after_handler()
+
+ return f"{message}::response"
+
+ def send(self, message):
+ return self.send_task(message)
+
+ def send_sync(self, message):
+ return self.loop.run_until_complete(self.send_task(message))
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ """
+ There is only one instance of 'RequestHandler' per 'Client'. Methods of
+ 'RequestHandler' are executed in different Tasks, and no Span propagation
+ among them is done automatically.
+ Therefore we cannot rely on the currently active span, nor activate
+ spans across them; the issue here is setting the correct parent span.
+ """
+
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+ self.client = Client(RequestHandler(self.tracer), self.loop)
+
+ def test_two_callbacks(self):
+ res_future1 = self.loop.create_task(self.client.send("message1"))
+ res_future2 = self.loop.create_task(self.client.send("message2"))
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) >= 2,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ self.assertEqual("message1::response", res_future1.result())
+ self.assertEqual("message2::response", res_future2.result())
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+
+ for span in spans:
+ self.assertEqual(
+ span.attributes.get(tags.SPAN_KIND, None),
+ tags.SPAN_KIND_RPC_CLIENT,
+ )
+
+ self.assertNotSameTrace(spans[0], spans[1])
+ self.assertIsNone(spans[0].parent)
+ self.assertIsNone(spans[1].parent)
+
+ def test_parent_not_picked(self):
+ """Active parent should not be picked up by child."""
+
+ async def do_task():
+ with self.tracer.start_active_span("parent"):
+ response = await self.client.send_task("no_parent")
+ self.assertEqual("no_parent::response", response)
+
+ self.loop.run_until_complete(do_task())
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+
+ child_span = get_one_by_operation_name(spans, "send")
+ self.assertIsNotNone(child_span)
+
+ parent_span = get_one_by_operation_name(spans, "parent")
+ self.assertIsNotNone(parent_span)
+
+ # Here check that there is no parent-child relation.
+ self.assertIsNotChildOf(child_span, parent_span)
+
+ def test_good_solution_to_set_parent(self):
+ """Asyncio and contextvars are integrated, in this case it is not needed
+ to activate current span by hand.
+ """
+
+ async def do_task():
+ with self.tracer.start_active_span("parent"):
+ # Set ignore_active_span to False indicating that the
+ # framework will do it for us.
+ req_handler = RequestHandler(
+ self.tracer,
+ ignore_active_span=False,
+ )
+ client = Client(req_handler, self.loop)
+ response = await client.send_task("correct_parent")
+
+ self.assertEqual("correct_parent::response", response)
+
+ # Send a second request; now there is no active parent,
+ # but it will be set anyway (oops).
+ response = await client.send_task("wrong_parent")
+ self.assertEqual("wrong_parent::response", response)
+
+ self.loop.run_until_complete(do_task())
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+
+ parent_span = get_one_by_operation_name(spans, "parent")
+ self.assertIsNotNone(parent_span)
+
+ spans = [span for span in spans if span != parent_span]
+ self.assertIsChildOf(spans[0], parent_span)
+ self.assertIsNotChildOf(spans[1], parent_span)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_threads.py
new file mode 100644
index 0000000000..fdc0549d62
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_common_request_handler/test_threads.py
@@ -0,0 +1,134 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from concurrent.futures import ThreadPoolExecutor
+
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_logger, get_one_by_operation_name
+from .request_handler import RequestHandler
+
+logger = get_logger(__name__)
+
+
+class Client:
+ def __init__(self, request_handler, executor):
+ self.request_handler = request_handler
+ self.executor = executor
+
+ def send_task(self, message):
+ request_context = {}
+
+ def before_handler():
+ self.request_handler.before_request(message, request_context)
+
+ def after_handler():
+ self.request_handler.after_request(message, request_context)
+
+ self.executor.submit(before_handler).result()
+ self.executor.submit(after_handler).result()
+
+ return f"{message}::response"
+
+ def send(self, message):
+ return self.executor.submit(self.send_task, message)
+
+ def send_sync(self, message, timeout=5.0):
+ fut = self.executor.submit(self.send_task, message)
+ return fut.result(timeout=timeout)
+
+
+class TestThreads(OpenTelemetryTestCase):
+ """
+ There is only one instance of 'RequestHandler' per 'Client'. Methods of
+ 'RequestHandler' are executed concurrently in different threads which are
+ reused (executor). Therefore we cannot rely on the currently active span,
+ nor activate spans across them; the issue here is setting the correct parent span.
+ """
+
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.executor = ThreadPoolExecutor(max_workers=3)
+ self.client = Client(RequestHandler(self.tracer), self.executor)
+
+ def test_two_callbacks(self):
+ response_future1 = self.client.send("message1")
+ response_future2 = self.client.send("message2")
+
+ self.assertEqual("message1::response", response_future1.result(5.0))
+ self.assertEqual("message2::response", response_future2.result(5.0))
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+
+ for span in spans:
+ self.assertEqual(
+ span.attributes.get(tags.SPAN_KIND, None),
+ tags.SPAN_KIND_RPC_CLIENT,
+ )
+
+ self.assertNotSameTrace(spans[0], spans[1])
+ self.assertIsNone(spans[0].parent)
+ self.assertIsNone(spans[1].parent)
+
+ def test_parent_not_picked(self):
+ """Active parent should not be picked up by child."""
+
+ with self.tracer.start_active_span("parent"):
+ response = self.client.send_sync("no_parent")
+ self.assertEqual("no_parent::response", response)
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+
+ child_span = get_one_by_operation_name(spans, "send")
+ self.assertIsNotNone(child_span)
+
+ parent_span = get_one_by_operation_name(spans, "parent")
+ self.assertIsNotNone(parent_span)
+
+ # Here check that there is no parent-child relation.
+ self.assertIsNotChildOf(child_span, parent_span)
+
+ def test_bad_solution_to_set_parent(self):
+ """Solution is bad because parent is per client and is not automatically
+ activated depending on the context.
+ """
+
+ with self.tracer.start_active_span("parent") as scope:
+ client = Client(
+ # Pass a span context to be used as the parent.
+ RequestHandler(self.tracer, scope.span.context),
+ self.executor,
+ )
+ response = client.send_sync("correct_parent")
+ self.assertEqual("correct_parent::response", response)
+
+ response = client.send_sync("wrong_parent")
+ self.assertEqual("wrong_parent::response", response)
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+
+ spans = sorted(spans, key=lambda x: x.start_time)
+ parent_span = get_one_by_operation_name(spans, "parent")
+ self.assertIsNotNone(parent_span)
+
+ spans = [s for s in spans if s != parent_span]
+ self.assertEqual(len(spans), 2)
+ for span in spans:
+ self.assertIsChildOf(span, parent_span)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/README.rst
new file mode 100644
index 0000000000..8c4ffd864a
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/README.rst
@@ -0,0 +1,18 @@
+
+Late Span finish example.
+=========================
+
+This example shows a ``Span`` for a top-level operation, with an independent, unknown lifetime, acting as the parent of a few asynchronous subtasks (which must re-activate it but not finish it).
+
+.. code-block:: python
+
+ # Fire away a few subtasks, passing a parent Span whose lifetime
+ # is not tied at all to the children.
+ def submit_subtasks(self, parent_span):
+ def task(name, interval):
+ with self.tracer.scope_manager.activate(parent_span, False):
+ with self.tracer.start_active_span(name):
+ time.sleep(interval)
+
+ self.executor.submit(task, "task1", 0.1)
+ self.executor.submit(task, "task2", 0.3)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_asyncio.py
new file mode 100644
index 0000000000..d27e51ca88
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_asyncio.py
@@ -0,0 +1,64 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_logger, stop_loop_when
+
+logger = get_logger(__name__)
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ # Create a Span and use it as (explicit) parent of a pair of subtasks.
+ parent_span = self.tracer.start_span("parent")
+ self.submit_subtasks(parent_span)
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) >= 2,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ # Late-finish the parent Span now.
+ parent_span.finish()
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+ self.assertNamesEqual(spans, ["task1", "task2", "parent"])
+
+ for idx in range(2):
+ self.assertSameTrace(spans[idx], spans[-1])
+ self.assertIsChildOf(spans[idx], spans[-1])
+ self.assertTrue(spans[idx].end_time <= spans[-1].end_time)
+
+ # Fire away a few subtasks, passing a parent Span whose lifetime
+ # is not tied at all to the children.
+ def submit_subtasks(self, parent_span):
+ async def task(name):
+ logger.info("Running %s", name)
+ with self.tracer.scope_manager.activate(parent_span, False):
+ with self.tracer.start_active_span(name):
+ await asyncio.sleep(0.1)
+
+ self.loop.create_task(task("task1"))
+ self.loop.create_task(task("task2"))
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_threads.py
new file mode 100644
index 0000000000..2cd43d7e70
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_late_span_finish/test_threads.py
@@ -0,0 +1,57 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def test_main(self):
+ # Create a Span and use it as (explicit) parent of a pair of subtasks.
+ parent_span = self.tracer.start_span("parent")
+ self.submit_subtasks(parent_span)
+
+ # Wait for the threadpool to be done.
+ self.executor.shutdown(True)
+
+ # Late-finish the parent Span now.
+ parent_span.finish()
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 3)
+ self.assertNamesEqual(spans, ["task1", "task2", "parent"])
+
+ for idx in range(2):
+ self.assertSameTrace(spans[idx], spans[-1])
+ self.assertIsChildOf(spans[idx], spans[-1])
+ self.assertTrue(spans[idx].end_time <= spans[-1].end_time)
+
+ # Fire away a few subtasks, passing a parent Span whose lifetime
+ # is not tied at all to the children.
+ def submit_subtasks(self, parent_span):
+ def task(name, interval):
+ with self.tracer.scope_manager.activate(parent_span, False):
+ with self.tracer.start_active_span(name):
+ time.sleep(interval)
+
+ self.executor.submit(task, "task1", 0.1)
+ self.executor.submit(task, "task2", 0.3)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/README.rst
new file mode 100644
index 0000000000..952d1ec51d
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/README.rst
@@ -0,0 +1,19 @@
+
+Listener Response example.
+==========================
+
+This example shows a ``Span`` created when a message is sent through a ``Client``, handled together with a dedicated, **not shared** ``ResponseListener`` object whose ``on_response(self, response)`` method finishes it.
+
+.. code-block:: python
+
+ def _task(self, message, listener):
+ res = "%s::response" % message
+ listener.on_response(res)
+ return res
+
+ def send_sync(self, message):
+ span = self.tracer.start_span("send")
+ span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ listener = ResponseListener(span)
+ return self.executor.submit(self._task, message, listener).result()
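+
+The listener itself simply holds the ``Span`` and finishes it once the
+response arrives:
+
+.. code-block:: python
+
+    class ResponseListener:
+        def __init__(self, span):
+            self.span = span
+
+        def on_response(self, res):
+            del res
+            self.span.finish()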
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/response_listener.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/response_listener.py
new file mode 100644
index 0000000000..dd143c20b8
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/response_listener.py
@@ -0,0 +1,7 @@
+class ResponseListener:
+ def __init__(self, span):
+ self.span = span
+
+ def on_response(self, res):
+ del res
+ self.span.finish()
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_asyncio.py
new file mode 100644
index 0000000000..d0f0a6a577
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_asyncio.py
@@ -0,0 +1,59 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_one_by_tag
+from .response_listener import ResponseListener
+
+
+async def task(message, listener):
+ res = f"{message}::response"
+ listener.on_response(res)
+ return res
+
+
+class Client:
+ def __init__(self, tracer, loop):
+ self.tracer = tracer
+ self.loop = loop
+
+ def send_sync(self, message):
+ span = self.tracer.start_span("send")
+ span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ listener = ResponseListener(span)
+ return self.loop.run_until_complete(task(message, listener))
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ client = Client(self.tracer, self.loop)
+ res = client.send_sync("message")
+ self.assertEqual(res, "message::response")
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 1)
+
+ span = get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+ self.assertIsNotNone(span)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_threads.py
new file mode 100644
index 0000000000..39d0a3d1d4
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_listener_per_request/test_threads.py
@@ -0,0 +1,58 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from concurrent.futures import ThreadPoolExecutor
+
+from opentracing.ext import tags
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_one_by_tag
+from .response_listener import ResponseListener
+
+
+class Client:
+ def __init__(self, tracer):
+ self.tracer = tracer
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def _task(self, message, listener):
+ # pylint: disable=no-self-use
+ res = f"{message}::response"
+ listener.on_response(res)
+ return res
+
+ def send_sync(self, message):
+ span = self.tracer.start_span("send")
+ span.set_tag(tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+
+ listener = ResponseListener(span)
+ return self.executor.submit(self._task, message, listener).result()
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+
+ def test_main(self):
+ client = Client(self.tracer)
+ res = client.send_sync("message")
+ self.assertEqual(res, "message::response")
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 1)
+
+ span = get_one_by_tag(spans, tags.SPAN_KIND, tags.SPAN_KIND_RPC_CLIENT)
+ self.assertIsNotNone(span)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/README.rst
new file mode 100644
index 0000000000..204f282cf2
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/README.rst
@@ -0,0 +1,44 @@
+
+Multiple callbacks example.
+===========================
+
+This example shows a ``Span`` created for a top-level operation, covering a set of asynchronous operations (representing callbacks), with this ``Span`` finished only when **all** of them have been executed.
+
+``Client.send()`` is used to create a new asynchronous operation (callback), and in turn every operation both restores the active ``Span`` and creates a child ``Span`` (useful for measuring the performance of each callback).
+
+Implementation details:
+
+
+* For ``threading``, a thread-safe counter is put in each ``Span`` to keep track of the pending callbacks, and call ``Span.finish()`` when the count becomes 0.
+* For ``asyncio``, the child coroutines representing the subtasks are simply awaited, so no counter is needed.
+
+``threading`` implementation:
+
+.. code-block:: python
+
+ def task(self, interval, parent_span):
+ logger.info("Starting task")
+
+ try:
+ scope = self.tracer.scope_manager.activate(parent_span, False)
+ with self.tracer.start_active_span("task"):
+ time.sleep(interval)
+ finally:
+ scope.close()
+ if parent_span.ref_count.decr() == 0:
+ parent_span.finish()
+
+``asyncio`` implementation:
+
+.. code-block:: python
+
+ async def task(self, interval, parent_span):
+ logger.info("Starting task")
+
+ with self.tracer.start_active_span("task"):
+ await asyncio.sleep(interval)
+
+ # Invoke and await the coroutines.
+ with self.tracer.start_active_span("parent"):
+ tasks = self.submit_callbacks()
+ await asyncio.gather(*tasks)
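+
+The thread-safe counter lives in the suite's ``utils`` module. A minimal
+sketch, assuming a plain lock-protected count (the real helper may differ):
+
+.. code-block:: python
+
+    import threading
+
+    class RefCount:
+        """Thread-safe counter (sketch)."""
+
+        def __init__(self, count=1):
+            self._lock = threading.Lock()
+            self._count = count
+
+        def incr(self):
+            with self._lock:
+                self._count += 1
+                return self._count
+
+        def decr(self):
+            with self._lock:
+                self._count -= 1
+                return self._count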
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_asyncio.py
new file mode 100644
index 0000000000..bbfb620a84
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_asyncio.py
@@ -0,0 +1,72 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import random
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import get_logger, stop_loop_when
+
+random.seed()
+logger = get_logger(__name__)
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ # Need to run within a Task, as the scope manager depends
+ # on Task.current_task()
+ async def main_task():
+ with self.tracer.start_active_span("parent"):
+ tasks = self.submit_callbacks()
+ await asyncio.gather(*tasks)
+
+ self.loop.create_task(main_task())
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) >= 4,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 4)
+ self.assertNamesEqual(spans, ["task", "task", "task", "parent"])
+
+ for idx in range(3):
+ self.assertSameTrace(spans[idx], spans[-1])
+ self.assertIsChildOf(spans[idx], spans[-1])
+
+ async def task(self, interval, parent_span):
+ logger.info("Starting task")
+
+ with self.tracer.scope_manager.activate(parent_span, False):
+ with self.tracer.start_active_span("task"):
+ await asyncio.sleep(interval)
+
+ def submit_callbacks(self):
+ parent_span = self.tracer.scope_manager.active.span
+ tasks = []
+ for _ in range(3):
+ interval = 0.1 + random.randint(200, 500) * 0.001
+ task = self.loop.create_task(self.task(interval, parent_span))
+ tasks.append(task)
+
+ return tasks
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_threads.py
new file mode 100644
index 0000000000..d94f834e51
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_multiple_callbacks/test_threads.py
@@ -0,0 +1,73 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import RefCount, get_logger
+
+random.seed()
+logger = get_logger(__name__)
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def test_main(self):
+ try:
+ scope = self.tracer.start_active_span(
+ "parent", finish_on_close=False
+ )
+ scope.span.ref_count = RefCount(1)
+ self.submit_callbacks(scope.span)
+ finally:
+ scope.close()
+ if scope.span.ref_count.decr() == 0:
+ scope.span.finish()
+
+ self.executor.shutdown(True)
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 4)
+ self.assertNamesEqual(spans, ["task", "task", "task", "parent"])
+
+ for idx in range(3):
+ self.assertSameTrace(spans[idx], spans[-1])
+ self.assertIsChildOf(spans[idx], spans[-1])
+
+ def task(self, interval, parent_span):
+ logger.info("Starting task")
+
+ scope = None
+ try:
+ scope = self.tracer.scope_manager.activate(parent_span, False)
+ with self.tracer.start_active_span("task"):
+ time.sleep(interval)
+ finally:
+ scope.close()
+ if parent_span.ref_count.decr() == 0:
+ parent_span.finish()
+
+ def submit_callbacks(self, parent_span):
+ for _ in range(3):
+ parent_span.ref_count.incr()
+ self.executor.submit(
+ self.task, 0.1 + random.randint(200, 500) * 0.001, parent_span
+ )
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/README.rst
new file mode 100644
index 0000000000..cc3ce0185b
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/README.rst
@@ -0,0 +1,47 @@
+
+Nested callbacks example.
+=========================
+
+This example shows a ``Span`` for a top-level operation, and how it can be passed down a chain of nested callbacks (always one at a time), be set as the active one for each of them, and be finished **only** when the last one executes. For Python, we have decided to do it in a **fire-and-forget** fashion.
+
+Implementation details:
+
+
+* For ``threading``, the ``Span`` is manually activated in each callback.
+* For ``asyncio``, the active ``Span`` is not activated down the chain as the ``Context`` automatically propagates it.
+
+``threading`` implementation:
+
+.. code-block:: python
+
+ def submit(self):
+ span = self.tracer.scope_manager.active.span
+
+ def task1():
+ with self.tracer.scope_manager.activate(span, False):
+ span.set_tag("key1", "1")
+
+ def task2():
+ with self.tracer.scope_manager.activate(span, False):
+ span.set_tag("key2", "2")
+ ...
+
+``asyncio`` implementation:
+
+.. code-block:: python
+
+ async def task1():
+ span.set_tag("key1", "1")
+
+ async def task2():
+ span.set_tag("key2", "2")
+
+ async def task3():
+ span.set_tag("key3", "3")
+ span.finish()
+
+ self.loop.create_task(task3())
+
+ self.loop.create_task(task2())
+
+ self.loop.create_task(task1())
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_asyncio.py
new file mode 100644
index 0000000000..f00258624c
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_asyncio.py
@@ -0,0 +1,70 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import stop_loop_when
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ # Start a Span and let the callback-chain
+ # finish it when the task is done
+ async def task():
+ with self.tracer.start_active_span("one", finish_on_close=False):
+ self.submit()
+
+ self.loop.create_task(task())
+
+ stop_loop_when(
+ self.loop,
+ lambda: len(self.tracer.finished_spans()) == 1,
+ timeout=5.0,
+ )
+ self.loop.run_forever()
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 1)
+ self.assertEqual(spans[0].name, "one")
+
+ for idx in range(1, 4):
+ self.assertEqual(
+ spans[0].attributes.get(f"key{idx}", None), str(idx)
+ )
+
+ def submit(self):
+ span = self.tracer.scope_manager.active.span
+
+ async def task1():
+ span.set_tag("key1", "1")
+
+ async def task2():
+ span.set_tag("key2", "2")
+
+ async def task3():
+ span.set_tag("key3", "3")
+ span.finish()
+
+ self.loop.create_task(task3())
+
+ self.loop.create_task(task2())
+
+ self.loop.create_task(task1())
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_threads.py
new file mode 100644
index 0000000000..955298537d
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_nested_callbacks/test_threads.py
@@ -0,0 +1,72 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+from ..utils import await_until
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def tearDown(self): # pylint: disable=invalid-name
+ self.executor.shutdown(False)
+
+ def test_main(self):
+ # Start a Span and let the callback-chain
+ # finish it when the task is done
+ with self.tracer.start_active_span("one", finish_on_close=False):
+ self.submit()
+
+        # We cannot shut down the executor and wait for the callbacks
+        # to run, because in that case only the first would be executed
+        # and the rest would be canceled.
+ await_until(lambda: len(self.tracer.finished_spans()) == 1, 5)
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 1)
+ self.assertEqual(spans[0].name, "one")
+
+ for idx in range(1, 4):
+ self.assertEqual(
+ spans[0].attributes.get(f"key{idx}", None), str(idx)
+ )
+
+ def submit(self):
+ span = self.tracer.scope_manager.active.span
+
+ def task1():
+ with self.tracer.scope_manager.activate(span, False):
+ span.set_tag("key1", "1")
+
+ def task2():
+ with self.tracer.scope_manager.activate(span, False):
+ span.set_tag("key2", "2")
+
+ def task3():
+            with self.tracer.scope_manager.activate(span, True):
+ span.set_tag("key3", "3")
+
+ self.executor.submit(task3)
+
+ self.executor.submit(task2)
+
+ self.executor.submit(task1)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/README.rst b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/README.rst
new file mode 100644
index 0000000000..eaeda8e6f8
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/README.rst
@@ -0,0 +1,42 @@
+
+Subtask Span propagation example.
+=================================
+
+This example shows an active ``Span`` being simply propagated to the subtasks (either threads or coroutines) and finished **by** the parent task. In real-life scenarios, instrumentation libraries may help with ``Span`` propagation **if** it is not offered by default (see implementation details below), but here we show the case without such help.
+
+Implementation details:
+
+* For ``threading``, the ``Span`` is manually passed down the call chain, activating it in each child task.
+* For ``asyncio``, the active ``Span`` is not passed nor activated down the chain as the ``Context`` automatically propagates it.
+
+``threading`` implementation:
+
+.. code-block:: python
+
+ def parent_task(self, message):
+ with self.tracer.start_active_span("parent") as scope:
+ f = self.executor.submit(self.child_task, message, scope.span)
+ res = f.result()
+
+ return res
+
+ def child_task(self, message, span):
+ with self.tracer.scope_manager.activate(span, False):
+ with self.tracer.start_active_span("child"):
+ return "%s::response" % message
+
+``asyncio`` implementation:
+
+.. code-block:: python
+
+ async def parent_task(self, message): # noqa
+ with self.tracer.start_active_span("parent"):
+ res = await self.child_task(message)
+
+ return res
+
+ async def child_task(self, message):
+ # No need to pass/activate the parent Span, as it stays in the context.
+ with self.tracer.start_active_span("child"):
+ return "%s::response" % message
+
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/__init__.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_asyncio.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_asyncio.py
new file mode 100644
index 0000000000..653f9bd810
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_asyncio.py
@@ -0,0 +1,45 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+
+
+class TestAsyncio(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.loop = asyncio.get_event_loop()
+
+ def test_main(self):
+ res = self.loop.run_until_complete(self.parent_task("message"))
+ self.assertEqual(res, "message::response")
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+ self.assertNamesEqual(spans, ["child", "parent"])
+ self.assertIsChildOf(spans[0], spans[1])
+
+ async def parent_task(self, message): # noqa
+ with self.tracer.start_active_span("parent"):
+ res = await self.child_task(message)
+
+ return res
+
+ async def child_task(self, message):
+ # No need to pass/activate the parent Span, as it stays in the context.
+ with self.tracer.start_active_span("child"):
+ return f"{message}::response"
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_threads.py b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_threads.py
new file mode 100644
index 0000000000..0d003c9062
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/test_subtask_span_propagation/test_threads.py
@@ -0,0 +1,46 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from concurrent.futures import ThreadPoolExecutor
+
+# pylint: disable=import-error
+from ..otel_ot_shim_tracer import MockTracer
+from ..testcase import OpenTelemetryTestCase
+
+
+class TestThreads(OpenTelemetryTestCase):
+ def setUp(self): # pylint: disable=invalid-name
+ self.tracer = MockTracer()
+ self.executor = ThreadPoolExecutor(max_workers=3)
+
+ def test_main(self):
+ res = self.executor.submit(self.parent_task, "message").result()
+ self.assertEqual(res, "message::response")
+
+ spans = self.tracer.finished_spans()
+ self.assertEqual(len(spans), 2)
+ self.assertNamesEqual(spans, ["child", "parent"])
+ self.assertIsChildOf(spans[0], spans[1])
+
+ def parent_task(self, message):
+ with self.tracer.start_active_span("parent") as scope:
+ fut = self.executor.submit(self.child_task, message, scope.span)
+ res = fut.result()
+
+ return res
+
+ def child_task(self, message, span):
+ with self.tracer.scope_manager.activate(span, False):
+ with self.tracer.start_active_span("child"):
+ return f"{message}::response"
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/testcase.py b/shim/opentelemetry-opentracing-shim/tests/testbed/testcase.py
new file mode 100644
index 0000000000..3c16682fad
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/testcase.py
@@ -0,0 +1,46 @@
+import unittest
+
+import opentelemetry.trace as trace_api
+
+
+# pylint: disable=C0103
+class OpenTelemetryTestCase(unittest.TestCase):
+ def assertSameTrace(self, spanA, spanB):
+ return self.assertEqual(spanA.context.trace_id, spanB.context.trace_id)
+
+ def assertNotSameTrace(self, spanA, spanB):
+ return self.assertNotEqual(
+ spanA.context.trace_id, spanB.context.trace_id
+ )
+
+ def assertIsChildOf(self, spanA, spanB):
+ # spanA is child of spanB
+ self.assertIsNotNone(spanA.parent)
+
+ ctxA = spanA.parent
+ if not isinstance(ctxA, trace_api.SpanContext):
+ ctxA = spanA.parent.context
+
+ ctxB = spanB
+ if not isinstance(ctxB, trace_api.SpanContext):
+ ctxB = spanB.context
+
+ return self.assertEqual(ctxA.span_id, ctxB.span_id)
+
+ def assertIsNotChildOf(self, spanA, spanB):
+ # spanA is NOT child of spanB
+ if spanA.parent is None:
+ return
+
+ ctxA = spanA.parent
+ if not isinstance(ctxA, trace_api.SpanContext):
+ ctxA = spanA.parent.context
+
+ ctxB = spanB
+ if not isinstance(ctxB, trace_api.SpanContext):
+ ctxB = spanB.context
+
+ self.assertNotEqual(ctxA.span_id, ctxB.span_id)
+
+ def assertNamesEqual(self, spans, names):
+ self.assertEqual(list(map(lambda x: x.name, spans)), names)
diff --git a/shim/opentelemetry-opentracing-shim/tests/testbed/utils.py b/shim/opentelemetry-opentracing-shim/tests/testbed/utils.py
new file mode 100644
index 0000000000..88cc4838b8
--- /dev/null
+++ b/shim/opentelemetry-opentracing-shim/tests/testbed/utils.py
@@ -0,0 +1,76 @@
+import logging
+import threading
+import time
+
+
+class RefCount:
+ """Thread-safe counter"""
+
+ def __init__(self, count=1):
+ self._lock = threading.Lock()
+ self._count = count
+
+ def incr(self):
+ with self._lock:
+ self._count += 1
+ return self._count
+
+ def decr(self):
+ with self._lock:
+ self._count -= 1
+ return self._count
+
+
+def await_until(func, timeout=5.0):
+ """Polls for func() to return True"""
+ end_time = time.time() + timeout
+ while time.time() < end_time and not func():
+ time.sleep(0.01)
+
+
+def stop_loop_when(loop, cond_func, timeout=5.0):
+ """
+    Registers a periodic callback that stops the loop once cond_func()
+    returns True. Compatible with both Tornado and asyncio.
+ """
+ if cond_func() or timeout <= 0.0:
+ loop.stop()
+ return
+
+ timeout -= 0.1
+ loop.call_later(0.1, stop_loop_when, loop, cond_func, timeout)
+
+
+def get_logger(name):
+ """Returns a logger with log level set to INFO"""
+ logging.basicConfig(level=logging.INFO)
+ return logging.getLogger(name)
+
+
+def get_one_by_tag(spans, key, value):
+ """Return a single Span with a tag value/key from a list,
+ errors if more than one is found."""
+
+ found = []
+ for span in spans:
+ if span.attributes.get(key) == value:
+ found.append(span)
+
+ if len(found) > 1:
+ raise RuntimeError("Too many values")
+
+ return found[0] if len(found) > 0 else None
+
+
+def get_one_by_operation_name(spans, name):
+ """Return a single Span with a name from a list,
+ errors if more than one is found."""
+ found = []
+ for span in spans:
+ if span.name == name:
+ found.append(span)
+
+ if len(found) > 1:
+ raise RuntimeError("Too many values")
+
+ return found[0] if len(found) > 0 else None
diff --git a/tests/opentelemetry-docker-tests/tests/docker-compose.yml b/tests/opentelemetry-docker-tests/tests/docker-compose.yml
index 2f89e3388e..6ecb12129a 100644
--- a/tests/opentelemetry-docker-tests/tests/docker-compose.yml
+++ b/tests/opentelemetry-docker-tests/tests/docker-compose.yml
@@ -1,6 +1,7 @@
version: '3'
services:
+<<<<<<< HEAD
otmongo:
ports:
- "27017:27017"
@@ -58,3 +59,16 @@ services:
ACCEPT_EULA: "Y"
SA_PASSWORD: "yourStrong(!)Password"
command: /opt/mssql/bin/sqlservr
+=======
+ otopencensus:
+ image: rafaeljesus/opencensus-collector:latest
+ command: --logging-exporter DEBUG
+ ports:
+ - "8888:8888"
+ - "55678:55678"
+ otcollector:
+ image: otel/opentelemetry-collector:0.31.0
+ ports:
+ - "4317:4317"
+ - "4318:55681"
+>>>>>>> upstream/main
diff --git a/tests/opentelemetry-docker-tests/tests/opencensus/test_opencensusexporter_functional.py b/tests/opentelemetry-docker-tests/tests/opencensus/test_opencensusexporter_functional.py
new file mode 100644
index 0000000000..a3c1ee2030
--- /dev/null
+++ b/tests/opentelemetry-docker-tests/tests/opencensus/test_opencensusexporter_functional.py
@@ -0,0 +1,58 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.context import attach, detach, set_value
+from opentelemetry.exporter.opencensus.trace_exporter import (
+ OpenCensusSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+from opentelemetry.test.test_base import TestBase
+
+
+class ExportStatusSpanProcessor(SimpleSpanProcessor):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.export_status = []
+
+ def on_end(self, span):
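+        # Suppress instrumentation while exporting so that the exporter's
+        # own network calls do not themselves produce spans.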
+ token = attach(set_value("suppress_instrumentation", True))
+ self.export_status.append(self.span_exporter.export((span,)))
+ detach(token)
+
+
+class TestOpenCensusSpanExporter(TestBase):
+ def setUp(self):
+ super().setUp()
+
+ trace.set_tracer_provider(TracerProvider())
+ self.tracer = trace.get_tracer(__name__)
+ self.span_processor = ExportStatusSpanProcessor(
+ OpenCensusSpanExporter(endpoint="localhost:55678")
+ )
+
+ trace.get_tracer_provider().add_span_processor(self.span_processor)
+
+ def test_export(self):
+ with self.tracer.start_as_current_span("foo"):
+ with self.tracer.start_as_current_span("bar"):
+ with self.tracer.start_as_current_span("baz"):
+ pass
+
+        self.assertEqual(len(self.span_processor.export_status), 3)
+
+ for export_status in self.span_processor.export_status:
+ self.assertEqual(export_status.name, "SUCCESS")
+ self.assertEqual(export_status.value, 0)
diff --git a/tests/opentelemetry-docker-tests/tests/otlpexporter/__init__.py b/tests/opentelemetry-docker-tests/tests/otlpexporter/__init__.py
new file mode 100644
index 0000000000..d4340fb910
--- /dev/null
+++ b/tests/opentelemetry-docker-tests/tests/otlpexporter/__init__.py
@@ -0,0 +1,48 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from abc import ABC, abstractmethod
+
+from opentelemetry.context import attach, detach, set_value
+from opentelemetry.sdk.trace.export import SimpleSpanProcessor
+
+
+class ExportStatusSpanProcessor(SimpleSpanProcessor):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.export_status = []
+
+ def on_end(self, span):
+ token = attach(set_value("suppress_instrumentation", True))
+ self.export_status.append(self.span_exporter.export((span,)))
+ detach(token)
+
+
+class BaseTestOTLPExporter(ABC):
+ @abstractmethod
+ def get_span_processor(self):
+ pass
+
+ # pylint: disable=no-member
+ def test_export(self):
+ with self.tracer.start_as_current_span("foo"):
+ with self.tracer.start_as_current_span("bar"):
+ with self.tracer.start_as_current_span("baz"):
+ pass
+
+        self.assertEqual(len(self.span_processor.export_status), 3)
+
+ for export_status in self.span_processor.export_status:
+ self.assertEqual(export_status.name, "SUCCESS")
+ self.assertEqual(export_status.value, 0)
diff --git a/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_grpc_exporter_functional.py b/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_grpc_exporter_functional.py
new file mode 100644
index 0000000000..d48b305396
--- /dev/null
+++ b/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_grpc_exporter_functional.py
@@ -0,0 +1,39 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.test.test_base import TestBase
+
+from . import BaseTestOTLPExporter, ExportStatusSpanProcessor
+
+
+class TestOTLPGRPCExporter(BaseTestOTLPExporter, TestBase):
+ # pylint: disable=no-self-use
+ def get_span_processor(self):
+ return ExportStatusSpanProcessor(
+ OTLPSpanExporter(insecure=True, timeout=1)
+ )
+
+ def setUp(self):
+ super().setUp()
+
+ trace.set_tracer_provider(TracerProvider())
+ self.tracer = trace.get_tracer(__name__)
+ self.span_processor = self.get_span_processor()
+
+ trace.get_tracer_provider().add_span_processor(self.span_processor)
diff --git a/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_http_exporter_functional.py b/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_http_exporter_functional.py
new file mode 100644
index 0000000000..59a333dec6
--- /dev/null
+++ b/tests/opentelemetry-docker-tests/tests/otlpexporter/test_otlp_http_exporter_functional.py
@@ -0,0 +1,37 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry import trace
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+ OTLPSpanExporter,
+)
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.test.test_base import TestBase
+
+from . import BaseTestOTLPExporter, ExportStatusSpanProcessor
+
+
+class TestOTLPHTTPExporter(BaseTestOTLPExporter, TestBase):
+ # pylint: disable=no-self-use
+ def get_span_processor(self):
+ return ExportStatusSpanProcessor(OTLPSpanExporter())
+
+ def setUp(self):
+ super().setUp()
+
+ trace.set_tracer_provider(TracerProvider())
+ self.tracer = trace.get_tracer(__name__)
+ self.span_processor = self.get_span_processor()
+
+ trace.get_tracer_provider().add_span_processor(self.span_processor)
diff --git a/tests/opentelemetry-test-utils/README.rst b/tests/opentelemetry-test-utils/README.rst
new file mode 100644
index 0000000000..774669cb8b
--- /dev/null
+++ b/tests/opentelemetry-test-utils/README.rst
@@ -0,0 +1,10 @@
+OpenTelemetry Test Utilities
+============================
+
+This package provides internal testing utilities for the OpenTelemetry Python project; it comes with no stability or quality guarantees.
+Please do not use it for anything other than writing or running tests for the OpenTelemetry Python project (github.com/open-telemetry/opentelemetry-python).
+
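+A minimal usage sketch (class and test names are illustrative):
+
+.. code-block:: python
+
+    from opentelemetry.test.test_base import TestBase
+
+    class MyInstrumentationTest(TestBase):
+        def test_span_is_recorded(self):
+            # TestBase.setUp installs a TracerProvider backed by an
+            # in-memory exporter, so finished spans can be inspected.
+            tracer = self.tracer_provider.get_tracer(__name__)
+            with tracer.start_as_current_span("operation"):
+                pass
+
+            spans = self.get_finished_spans()
+            self.assertEqual(len(spans), 1)
+            self.assertEqual(spans[0].name, "operation")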
+
+References
+----------
+* `OpenTelemetry Project <https://opentelemetry.io/>`_
diff --git a/tests/opentelemetry-test-utils/pyproject.toml b/tests/opentelemetry-test-utils/pyproject.toml
new file mode 100644
index 0000000000..1eb24bae36
--- /dev/null
+++ b/tests/opentelemetry-test-utils/pyproject.toml
@@ -0,0 +1,47 @@
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "opentelemetry-test-utils"
+dynamic = ["version"]
+description = "Test utilities for OpenTelemetry unit tests"
+readme = "README.rst"
+license = "Apache-2.0"
+requires-python = ">=3.7"
+authors = [
+ { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" },
+]
+classifiers = [
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+]
+dependencies = [
+ "asgiref ~= 3.0",
+ "opentelemetry-api == 1.23.0.dev",
+ "opentelemetry-sdk == 1.23.0.dev",
+]
+
+[project.optional-dependencies]
+test = []
+
+[project.urls]
+Homepage = "https://github.com/open-telemetry/opentelemetry-python/tests/opentelemetry-test-utils"
+
+[tool.hatch.version]
+path = "src/opentelemetry/test/version.py"
+
+[tool.hatch.build.targets.sdist]
+include = [
+ "/src",
+]
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/opentelemetry"]
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/__init__.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/__init__.py
new file mode 100644
index 0000000000..068ed12e86
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/__init__.py
@@ -0,0 +1,55 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# type: ignore
+
+from traceback import format_tb
+from unittest import TestCase
+
+
+class _AssertNotRaisesMixin:
+ class _AssertNotRaises:
+ def __init__(self, test_case):
+ self._test_case = test_case
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type_, value, tb): # pylint: disable=invalid-name
+ if value is not None and type_ in self._exception_types:
+
+ self._test_case.fail(
+ "Unexpected exception was raised:\n{}".format(
+ "\n".join(format_tb(tb))
+ )
+ )
+
+ return True
+
+ def __call__(self, exception, *exceptions):
+ # pylint: disable=attribute-defined-outside-init
+ self._exception_types = (exception, *exceptions)
+ return self
+
+ def __init__(self, *args, **kwargs):
+
+ super().__init__(*args, **kwargs)
+ # pylint: disable=invalid-name
+ self.assertNotRaises = self._AssertNotRaises(self)
+
+
+class TestCase(
+ _AssertNotRaisesMixin, TestCase
+): # pylint: disable=function-redefined
+ pass
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/asgitestutil.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/asgitestutil.py
new file mode 100644
index 0000000000..05be4e0214
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/asgitestutil.py
@@ -0,0 +1,76 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+
+from asgiref.testing import ApplicationCommunicator
+
+from opentelemetry.test.test_base import TestBase
+
+
+def setup_testing_defaults(scope):
+ scope.update(
+ {
+ "client": ("127.0.0.1", 32767),
+ "headers": [],
+ "http_version": "1.0",
+ "method": "GET",
+ "path": "/",
+ "query_string": b"",
+ "scheme": "http",
+ "server": ("127.0.0.1", 80),
+ "type": "http",
+ }
+ )
+
+
+class AsgiTestBase(TestBase):
+ def setUp(self):
+ super().setUp()
+
+ self.scope = {}
+ setup_testing_defaults(self.scope)
+ self.communicator = None
+
+ def tearDown(self):
+ if self.communicator:
+ asyncio.get_event_loop().run_until_complete(
+ self.communicator.wait()
+ )
+
+ def seed_app(self, app):
+ self.communicator = ApplicationCommunicator(app, self.scope)
+
+ def send_input(self, message):
+ asyncio.get_event_loop().run_until_complete(
+ self.communicator.send_input(message)
+ )
+
+ def send_default_request(self):
+ self.send_input({"type": "http.request", "body": b""})
+
+ def get_output(self):
+ output = asyncio.get_event_loop().run_until_complete(
+ self.communicator.receive_output(0)
+ )
+ return output
+
+ def get_all_output(self):
+ outputs = []
+ while True:
+ try:
+ outputs.append(self.get_output())
+ except asyncio.TimeoutError:
+ break
+ return outputs
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/concurrency_test.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/concurrency_test.py
new file mode 100644
index 0000000000..5d178e24ff
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/concurrency_test.py
@@ -0,0 +1,90 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import threading
+import unittest
+from functools import partial
+from typing import Callable, List, Optional, TypeVar
+from unittest.mock import Mock
+
+ReturnT = TypeVar("ReturnT")
+
+
+class MockFunc:
+ """A thread safe mock function
+
+ Use this as part of your mock if you want to count calls across multiple
+ threads.
+ """
+
+ def __init__(self) -> None:
+ self.lock = threading.Lock()
+ self.call_count = 0
+ self.mock = Mock()
+
+ def __call__(self, *args, **kwargs):
+ with self.lock:
+ self.call_count += 1
+ return self.mock
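+
+    # Usage sketch: share one MockFunc across threads and read the call
+    # count safely afterwards, e.g.
+    #   func = MockFunc()
+    #   ConcurrencyTestBase.run_with_many_threads(func, num_threads=10)
+    #   assert func.call_count == 10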
+
+
+class ConcurrencyTestBase(unittest.TestCase):
+ """Test base class/mixin for tests of concurrent code
+
+ This test class calls ``sys.setswitchinterval(1e-12)`` to try to create more
+ contention while running tests that use many threads. It also provides
+ ``run_with_many_threads`` to run some test code in many threads
+ concurrently.
+ """
+
+ orig_switch_interval = sys.getswitchinterval()
+
+ @classmethod
+ def setUpClass(cls) -> None:
+ super().setUpClass()
+ # switch threads more often to increase chance of contention
+ sys.setswitchinterval(1e-12)
+
+ @classmethod
+ def tearDownClass(cls) -> None:
+ super().tearDownClass()
+ sys.setswitchinterval(cls.orig_switch_interval)
+
+ @staticmethod
+ def run_with_many_threads(
+ func_to_test: Callable[[], ReturnT],
+ num_threads: int = 100,
+ ) -> List[ReturnT]:
+ """Util to run ``func_to_test`` in ``num_threads`` concurrently"""
+
+ barrier = threading.Barrier(num_threads)
+ results: List[Optional[ReturnT]] = [None] * num_threads
+
+ def thread_start(idx: int) -> None:
+ nonlocal results
+ # Get all threads here before releasing them to create contention
+ barrier.wait()
+ results[idx] = func_to_test()
+
+ threads = [
+ threading.Thread(target=partial(thread_start, i))
+ for i in range(num_threads)
+ ]
+ for thread in threads:
+ thread.start()
+ for thread in threads:
+ thread.join()
+
+ return results # type: ignore
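+
+
+# Usage sketch: mix ConcurrencyTestBase into a test case to run the code
+# under test from many threads at once, e.g.
+#   class MyTest(ConcurrencyTestBase):
+#       def test_counter(self):
+#           results = self.run_with_many_threads(lambda: 1, num_threads=8)
+#           self.assertEqual(sum(results), 8)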
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/globals_test.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/globals_test.py
new file mode 100644
index 0000000000..23b3112430
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/globals_test.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+from opentelemetry import trace as trace_api
+from opentelemetry._logs import _internal as logging_api
+from opentelemetry.metrics import _internal as metrics_api
+from opentelemetry.metrics._internal import _ProxyMeterProvider
+from opentelemetry.util._once import Once
+
+
+# pylint: disable=protected-access
+def reset_trace_globals() -> None:
+ """WARNING: only use this for tests."""
+ trace_api._TRACER_PROVIDER_SET_ONCE = Once()
+ trace_api._TRACER_PROVIDER = None
+ trace_api._PROXY_TRACER_PROVIDER = trace_api.ProxyTracerProvider()
+
+
+# pylint: disable=protected-access
+def reset_metrics_globals() -> None:
+ """WARNING: only use this for tests."""
+ metrics_api._METER_PROVIDER_SET_ONCE = Once() # type: ignore[attr-defined]
+ metrics_api._METER_PROVIDER = None # type: ignore[attr-defined]
+ metrics_api._PROXY_METER_PROVIDER = _ProxyMeterProvider() # type: ignore[attr-defined]
+
+
+# pylint: disable=protected-access
+def reset_logging_globals() -> None:
+ """WARNING: only use this for tests."""
+ logging_api._LOGGER_PROVIDER_SET_ONCE = Once() # type: ignore[attr-defined]
+ logging_api._LOGGER_PROVIDER = None # type: ignore[attr-defined]
+ # logging_api._PROXY_LOGGER_PROVIDER = _ProxyLoggerProvider() # type: ignore[attr-defined]
+
+
+class TraceGlobalsTest(unittest.TestCase):
+ """Resets trace API globals in setUp/tearDown
+
+ Use as a base class or mixin for your test that modifies trace API globals.
+ """
+
+ def setUp(self) -> None:
+ super().setUp()
+ reset_trace_globals()
+
+ def tearDown(self) -> None:
+ super().tearDown()
+ reset_trace_globals()
+
+
+class MetricsGlobalsTest(unittest.TestCase):
+ """Resets metrics API globals in setUp/tearDown
+
+ Use as a base class or mixin for your test that modifies metrics API globals.
+ """
+
+ def setUp(self) -> None:
+ super().setUp()
+ reset_metrics_globals()
+
+ def tearDown(self) -> None:
+ super().tearDown()
+ reset_metrics_globals()
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/httptest.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/httptest.py
new file mode 100644
index 0000000000..84591ca0f1
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/httptest.py
@@ -0,0 +1,68 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import re
+import unittest
+from http import HTTPStatus
+from http.server import BaseHTTPRequestHandler, HTTPServer
+from threading import Thread
+
+
+class HttpTestBase(unittest.TestCase):
+ DEFAULT_RESPONSE = b"Hello!"
+
+ class Handler(BaseHTTPRequestHandler):
+ protocol_version = "HTTP/1.1" # Support keep-alive.
+ timeout = 3 # Seconds
+
+ STATUS_RE = re.compile(r"/status/(\d+)")
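+        # e.g. GET /status/503 responds with HTTP 503; any other path
+        # returns 200 with DEFAULT_RESPONSE.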
+
+ def do_GET(self): # pylint:disable=invalid-name
+ status_match = self.STATUS_RE.fullmatch(self.path)
+ status = 200
+ if status_match:
+ status = int(status_match.group(1))
+ if status == 200:
+ body = HttpTestBase.DEFAULT_RESPONSE
+ self.send_response(HTTPStatus.OK)
+ self.send_header("Content-Length", str(len(body)))
+ self.end_headers()
+ self.wfile.write(body)
+ else:
+ self.send_error(status)
+
+ @classmethod
+ def create_server(cls):
+ server_address = ("127.0.0.1", 0) # Only bind to localhost.
+ return HTTPServer(server_address, cls.Handler)
+
+ @classmethod
+ def run_server(cls):
+ httpd = cls.create_server()
+ worker = Thread(
+ target=httpd.serve_forever, daemon=True, name="Test server worker"
+ )
+ worker.start()
+ return worker, httpd
+
+ @classmethod
+ def setUpClass(cls):
+ super().setUpClass()
+ cls.server_thread, cls.server = cls.run_server()
+
+ @classmethod
+ def tearDownClass(cls):
+ cls.server.shutdown()
+ cls.server_thread.join()
+ super().tearDownClass()
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/metrictestutil.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/metrictestutil.py
new file mode 100644
index 0000000000..ff25b092a6
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/metrictestutil.py
@@ -0,0 +1,100 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from opentelemetry.attributes import BoundedAttributes
+from opentelemetry.sdk.metrics.export import (
+ AggregationTemporality,
+ Gauge,
+ Metric,
+ NumberDataPoint,
+ Sum,
+)
+
+
+def _generate_metric(
+ name, data, attributes=None, description=None, unit=None
+) -> Metric:
+ if description is None:
+ description = "foo"
+ if unit is None:
+ unit = "s"
+ return Metric(
+ name=name,
+ description=description,
+ unit=unit,
+ data=data,
+ )
+
+
+def _generate_sum(
+ name,
+ value,
+ attributes=None,
+ description=None,
+ unit=None,
+ is_monotonic=True,
+) -> Metric:
+ if attributes is None:
+ attributes = BoundedAttributes(attributes={"a": 1, "b": True})
+ return _generate_metric(
+ name,
+ Sum(
+ data_points=[
+ NumberDataPoint(
+ attributes=attributes,
+ start_time_unix_nano=1641946015139533244,
+ time_unix_nano=1641946016139533244,
+ value=value,
+ )
+ ],
+ aggregation_temporality=AggregationTemporality.CUMULATIVE,
+ is_monotonic=is_monotonic,
+ ),
+ description=description,
+ unit=unit,
+ )
+
+
+def _generate_gauge(
+ name, value, attributes=None, description=None, unit=None
+) -> Metric:
+ if attributes is None:
+ attributes = BoundedAttributes(attributes={"a": 1, "b": True})
+ return _generate_metric(
+ name,
+ Gauge(
+ data_points=[
+ NumberDataPoint(
+ attributes=attributes,
+ start_time_unix_nano=1641946015139533244,
+ time_unix_nano=1641946016139533244,
+ value=value,
+ )
+ ],
+ ),
+ description=description,
+ unit=unit,
+ )
+
+
+def _generate_unsupported_metric(
+ name, attributes=None, description=None, unit=None
+) -> Metric:
+ return _generate_metric(
+ name,
+ None,
+ description=description,
+ unit=unit,
+ )
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/mock_textmap.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/mock_textmap.py
new file mode 100644
index 0000000000..c3e901ee28
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/mock_textmap.py
@@ -0,0 +1,104 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import typing
+
+from opentelemetry import trace
+from opentelemetry.context import Context
+from opentelemetry.propagators.textmap import (
+ CarrierT,
+ Getter,
+ Setter,
+ TextMapPropagator,
+ default_getter,
+ default_setter,
+)
+
+
+class NOOPTextMapPropagator(TextMapPropagator):
+ """A propagator that does not extract nor inject.
+
+ This class is useful for catching edge cases assuming
+ a SpanContext will always be present.
+ """
+
+ def extract(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: Getter = default_getter,
+ ) -> Context:
+ return Context()
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ return None
+
+ @property
+ def fields(self):
+ return set()
+
+
+class MockTextMapPropagator(TextMapPropagator):
+ """Mock propagator for testing purposes."""
+
+ TRACE_ID_KEY = "mock-traceid"
+ SPAN_ID_KEY = "mock-spanid"
+
+ def extract(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ getter: Getter = default_getter,
+ ) -> Context:
+ if context is None:
+ context = Context()
+ trace_id_list = getter.get(carrier, self.TRACE_ID_KEY)
+ span_id_list = getter.get(carrier, self.SPAN_ID_KEY)
+
+ if not trace_id_list or not span_id_list:
+ return context
+
+ return trace.set_span_in_context(
+ trace.NonRecordingSpan(
+ trace.SpanContext(
+ trace_id=int(trace_id_list[0]),
+ span_id=int(span_id_list[0]),
+ is_remote=True,
+ )
+ ),
+ context,
+ )
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ span = trace.get_current_span(context)
+ setter.set(
+ carrier, self.TRACE_ID_KEY, str(span.get_span_context().trace_id)
+ )
+ setter.set(
+ carrier, self.SPAN_ID_KEY, str(span.get_span_context().span_id)
+ )
+
+ @property
+ def fields(self):
+ return {self.TRACE_ID_KEY, self.SPAN_ID_KEY}
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/spantestutil.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/spantestutil.py
new file mode 100644
index 0000000000..912de9ee03
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/spantestutil.py
@@ -0,0 +1,55 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from functools import partial
+
+from opentelemetry import trace as trace_api
+from opentelemetry.sdk import trace as trace_sdk
+from opentelemetry.sdk.trace import Resource
+
+
+def new_tracer(span_limits=None, resource=None) -> trace_api.Tracer:
+ provider_factory = trace_sdk.TracerProvider
+ if resource is not None:
+ provider_factory = partial(provider_factory, resource=resource)
+ return provider_factory(span_limits=span_limits).get_tracer(__name__)
+
+
+def get_span_with_dropped_attributes_events_links():
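+    # The SDK's default span limits allow 128 attributes/events/links per
+    # span, so creating 130 attributes, 129 links and 131 events guarantees
+    # the returned span has dropped some of each.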
+ attributes = {}
+ for index in range(130):
+ attributes[f"key{index}"] = [f"value{index}"]
+ links = []
+ for index in range(129):
+ links.append(
+ trace_api.Link(
+ trace_sdk._Span(
+ name=f"span{index}",
+ context=trace_api.INVALID_SPAN_CONTEXT,
+ attributes=attributes,
+ ).get_span_context(),
+ attributes=attributes,
+ )
+ )
+
+ tracer = new_tracer(
+ span_limits=trace_sdk.SpanLimits(),
+ resource=Resource(attributes=attributes),
+ )
+ with tracer.start_as_current_span(
+ "span", links=links, attributes=attributes
+ ) as span:
+ for index in range(131):
+ span.add_event(f"event{index}", attributes=attributes)
+ return span
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/test_base.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/test_base.py
new file mode 100644
index 0000000000..f9ac2dfc19
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/test_base.py
@@ -0,0 +1,277 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import unittest
+from contextlib import contextmanager
+from typing import Optional, Sequence, Tuple
+
+from opentelemetry import metrics as metrics_api
+from opentelemetry import trace as trace_api
+from opentelemetry.sdk.metrics import MeterProvider
+from opentelemetry.sdk.metrics._internal.point import Metric
+from opentelemetry.sdk.metrics.export import (
+ DataPointT,
+ HistogramDataPoint,
+ InMemoryMetricReader,
+ MetricReader,
+ NumberDataPoint,
+)
+from opentelemetry.sdk.trace import TracerProvider, export
+from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
+ InMemorySpanExporter,
+)
+from opentelemetry.test.globals_test import (
+ reset_metrics_globals,
+ reset_trace_globals,
+)
+
+
+class TestBase(unittest.TestCase):
+ # pylint: disable=C0103
+
+ def setUp(self):
+ super().setUp()
+ result = self.create_tracer_provider()
+ self.tracer_provider, self.memory_exporter = result
+ # This is done because set_tracer_provider cannot override the
+ # current tracer provider.
+ reset_trace_globals()
+ trace_api.set_tracer_provider(self.tracer_provider)
+
+ self.memory_exporter.clear()
+ # This is done because set_meter_provider cannot override the
+ # current meter provider.
+ reset_metrics_globals()
+ (
+ self.meter_provider,
+ self.memory_metrics_reader,
+ ) = self.create_meter_provider()
+ metrics_api.set_meter_provider(self.meter_provider)
+
+ def tearDown(self):
+ super().tearDown()
+ reset_trace_globals()
+ reset_metrics_globals()
+
+ def get_finished_spans(self):
+ return FinishedTestSpans(
+ self, self.memory_exporter.get_finished_spans()
+ )
+
+ def assertEqualSpanInstrumentationInfo(self, span, module):
+ self.assertEqual(span.instrumentation_info.name, module.__name__)
+ self.assertEqual(span.instrumentation_info.version, module.__version__)
+
+ def assertEqualSpanInstrumentationScope(self, span, module):
+ self.assertEqual(span.instrumentation_scope.name, module.__name__)
+ self.assertEqual(
+ span.instrumentation_scope.version, module.__version__
+ )
+
+ def assertSpanHasAttributes(self, span, attributes):
+ for key, val in attributes.items():
+ self.assertIn(key, span.attributes)
+ self.assertEqual(val, span.attributes[key])
+
+ def sorted_spans(self, spans): # pylint: disable=R0201
+ """
+        Sorts spans by span creation time, newest first.
+
+        Note: This method should not be used to sort spans in a deterministic
+        way, as the order depends on the timing precision provided by the
+        platform.
+ """
+ return sorted(
+ spans,
+ key=lambda s: s._start_time, # pylint: disable=W0212
+ reverse=True,
+ )
+
+ @staticmethod
+ def create_tracer_provider(**kwargs):
+ """Helper to create a configured tracer provider.
+
+ Creates and configures a `TracerProvider` with a
+ `SimpleSpanProcessor` and a `InMemorySpanExporter`.
+ All the parameters passed are forwarded to the TracerProvider
+ constructor.
+
+ Returns:
+            A tuple with the tracer provider in the first element and the
+            in-memory span exporter in the second.
+ """
+ tracer_provider = TracerProvider(**kwargs)
+ memory_exporter = InMemorySpanExporter()
+ span_processor = export.SimpleSpanProcessor(memory_exporter)
+ tracer_provider.add_span_processor(span_processor)
+
+ return tracer_provider, memory_exporter
+
+ @staticmethod
+ def create_meter_provider(**kwargs) -> Tuple[MeterProvider, MetricReader]:
+ """Helper to create a configured meter provider
+ Creates a `MeterProvider` and an `InMemoryMetricReader`.
+ Returns:
+ A tuple with the meter provider in the first element and the
+ in-memory metrics exporter in the second
+ """
+ memory_reader = InMemoryMetricReader()
+ metric_readers = kwargs.get("metric_readers", [])
+ metric_readers.append(memory_reader)
+ kwargs["metric_readers"] = metric_readers
+ meter_provider = MeterProvider(**kwargs)
+ return meter_provider, memory_reader
+
+ @staticmethod
+ @contextmanager
+ def disable_logging(highest_level=logging.CRITICAL):
+ logging.disable(highest_level)
+
+ try:
+ yield
+ finally:
+ logging.disable(logging.NOTSET)
+
+ def get_sorted_metrics(self):
+ resource_metrics = (
+ self.memory_metrics_reader.get_metrics_data().resource_metrics
+ )
+
+ all_metrics = []
+ for metrics in resource_metrics:
+ for scope_metrics in metrics.scope_metrics:
+ all_metrics.extend(scope_metrics.metrics)
+
+ return self.sorted_metrics(all_metrics)
+
+ @staticmethod
+ def sorted_metrics(metrics):
+ """
+ Sorts metrics by metric name.
+ """
+ return sorted(
+ metrics,
+ key=lambda m: m.name,
+ )
+
+ def assert_metric_expected(
+ self,
+ metric: Metric,
+ expected_data_points: Sequence[DataPointT],
+ est_value_delta: Optional[float] = 0,
+ ):
+ self.assertEqual(
+ len(expected_data_points), len(metric.data.data_points)
+ )
+ for expected_data_point in expected_data_points:
+ self.assert_data_point_expected(
+ expected_data_point, metric.data.data_points, est_value_delta
+ )
+
+ # pylint: disable=unidiomatic-typecheck
+ @staticmethod
+ def is_data_points_equal(
+ expected_data_point: DataPointT,
+ data_point: DataPointT,
+ est_value_delta: Optional[float] = 0,
+ ):
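+        # Two points are considered equal when they are the same point type
+        # with equal attributes and their value (or histogram sum) differs by
+        # at most est_value_delta; histogram counts must match, and min/max
+        # must match exactly when no delta is allowed.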
+ if type(expected_data_point) != type( # noqa: E721
+ data_point
+ ) or not isinstance(
+ expected_data_point, (HistogramDataPoint, NumberDataPoint)
+ ):
+ return False
+
+ values_diff = None
+ if isinstance(data_point, NumberDataPoint):
+ values_diff = abs(expected_data_point.value - data_point.value)
+ elif isinstance(data_point, HistogramDataPoint):
+ values_diff = abs(expected_data_point.sum - data_point.sum)
+ if expected_data_point.count != data_point.count or (
+ est_value_delta == 0
+ and (
+ expected_data_point.min != data_point.min
+ or expected_data_point.max != data_point.max
+ )
+ ):
+ return False
+
+ return (
+ values_diff <= est_value_delta
+ and expected_data_point.attributes == dict(data_point.attributes)
+ )
+
+ def assert_data_point_expected(
+ self,
+ expected_data_point: DataPointT,
+ data_points: Sequence[DataPointT],
+ est_value_delta: Optional[float] = 0,
+ ):
+ is_data_point_exist = False
+ for data_point in data_points:
+ if self.is_data_points_equal(
+ expected_data_point, data_point, est_value_delta
+ ):
+ is_data_point_exist = True
+ break
+
+ self.assertTrue(
+ is_data_point_exist,
+ msg=f"Data point {expected_data_point} does not exist",
+ )
+
+ @staticmethod
+ def create_number_data_point(value, attributes):
+ return NumberDataPoint(
+ value=value,
+ attributes=attributes,
+ start_time_unix_nano=0,
+ time_unix_nano=0,
+ )
+
+ @staticmethod
+ def create_histogram_data_point(
+ sum_data_point, count, max_data_point, min_data_point, attributes
+ ):
+ return HistogramDataPoint(
+ count=count,
+ sum=sum_data_point,
+ min=min_data_point,
+ max=max_data_point,
+ attributes=attributes,
+ start_time_unix_nano=0,
+ time_unix_nano=0,
+ bucket_counts=[],
+ explicit_bounds=[],
+ )
+
+
+class FinishedTestSpans(list):
+ def __init__(self, test, spans):
+ super().__init__(spans)
+ self.test = test
+
+ def by_name(self, name):
+ for span in self:
+ if span.name == name:
+ return span
+ self.test.fail(f"Did not find span with name {name}")
+ return None
+
+ def by_attr(self, key, value):
+ for span in self:
+ if span.attributes.get(key) == value:
+ return span
+ self.test.fail(f"Did not find span with attrs {key}={value}")
+ return None
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/version.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/version.py
new file mode 100644
index 0000000000..ecc7bc1725
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/version.py
@@ -0,0 +1 @@
+__version__ = "0.44b0.dev"
diff --git a/tests/opentelemetry-test-utils/src/opentelemetry/test/wsgitestutil.py b/tests/opentelemetry-test-utils/src/opentelemetry/test/wsgitestutil.py
new file mode 100644
index 0000000000..28a4c2698e
--- /dev/null
+++ b/tests/opentelemetry-test-utils/src/opentelemetry/test/wsgitestutil.py
@@ -0,0 +1,56 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import io
+import wsgiref.util as wsgiref_util
+
+from opentelemetry import trace
+from opentelemetry.test.test_base import TestBase
+
+
+class WsgiTestBase(TestBase):
+ def setUp(self):
+ super().setUp()
+
+ self.write_buffer = io.BytesIO()
+ self.write = self.write_buffer.write
+
+ self.environ = {}
+ wsgiref_util.setup_testing_defaults(self.environ)
+
+ self.status = None
+ self.response_headers = None
+ self.exc_info = None
+
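+    # Mimics the WSGI start_response callable: records the status, headers,
+    # and exc_info reported by the application under test so that assertions
+    # can inspect them after the request completes.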
+ def start_response(self, status, response_headers, exc_info=None):
+ self.status = status
+ self.response_headers = response_headers
+ self.exc_info = exc_info
+ return self.write
+
+ def assertTraceResponseHeaderMatchesSpan(
+ self, headers, span
+ ): # pylint: disable=invalid-name
+ self.assertIn("traceresponse", headers)
+ self.assertEqual(
+ headers["access-control-expose-headers"],
+ "traceresponse",
+ )
+
+ trace_id = trace.format_trace_id(span.get_span_context().trace_id)
+ span_id = trace.format_span_id(span.get_span_context().span_id)
+ self.assertEqual(
+ f"00-{trace_id}-{span_id}-01",
+ headers["traceresponse"],
+ )
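+
+    # A minimal usage sketch (illustrative; `app` stands for any WSGI
+    # application under test and is not part of this module):
+    #
+    #     def test_status_recorded(self):
+    #         app(self.environ, self.start_response)
+    #         self.assertEqual(self.status, "200 OK")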
diff --git a/tests/opentelemetry-test-utils/tests/__init__.py b/tests/opentelemetry-test-utils/tests/__init__.py
new file mode 100644
index 0000000000..b0a6f42841
--- /dev/null
+++ b/tests/opentelemetry-test-utils/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/tests/opentelemetry-test-utils/tests/test_utils.py b/tests/opentelemetry-test-utils/tests/test_utils.py
new file mode 100644
index 0000000000..ce97951f86
--- /dev/null
+++ b/tests/opentelemetry-test-utils/tests/test_utils.py
@@ -0,0 +1,82 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from opentelemetry.test import TestCase
+
+
+class TestAssertNotRaises(TestCase):
+    def test_no_exception(self):
+        try:
+            with self.assertNotRaises(Exception):
+                pass
+        except Exception as error:  # pylint: disable=broad-except
+            self.fail(  # pylint: disable=no-member
+                f"Unexpected exception {error} was raised"
+            )
+
+    def test_no_specified_exception_single(self):
+        try:
+            with self.assertNotRaises(KeyError):
+                1 / 0  # pylint: disable=pointless-statement
+        except Exception as error:  # pylint: disable=broad-except
+            self.fail(  # pylint: disable=no-member
+                f"Unexpected exception {error} was raised"
+            )
+
+    def test_no_specified_exception_multiple(self):
+        try:
+            with self.assertNotRaises(KeyError, IndexError):
+                1 / 0  # pylint: disable=pointless-statement
+        except Exception as error:  # pylint: disable=broad-except
+            self.fail(  # pylint: disable=no-member
+                f"Unexpected exception {error} was raised"
+            )
+
+    def test_exception(self):
+        with self.assertRaises(AssertionError):
+            with self.assertNotRaises(ZeroDivisionError):
+                1 / 0  # pylint: disable=pointless-statement
+
+    def test_missing_exception(self):
+        with self.assertRaises(AssertionError) as error:
+            with self.assertNotRaises(ZeroDivisionError):
+
+                def raise_zero_division_error():
+                    raise ZeroDivisionError()
+
+                raise_zero_division_error()
+
+ error_lines = error.exception.args[0].split("\n")
+
+ self.assertEqual(
+ error_lines[0].strip(), "Unexpected exception was raised:"
+ )
+ self.assertEqual(error_lines[2].strip(), "raise_zero_division_error()")
+ self.assertEqual(error_lines[5].strip(), "raise ZeroDivisionError()")
diff --git a/tests/w3c_tracecontext_validation_server.py b/tests/w3c_tracecontext_validation_server.py
new file mode 100644
index 0000000000..5c47708ee1
--- /dev/null
+++ b/tests/w3c_tracecontext_validation_server.py
@@ -0,0 +1,75 @@
+# Copyright The OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This server is intended to be used with the W3C tracecontext validation
+Service. It implements the APIs needed to be exercised by the test bed.
+"""
+
+import json
+
+import flask
+import requests
+
+from opentelemetry import trace
+from opentelemetry.instrumentation.requests import RequestsInstrumentor
+from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import (
+ ConsoleSpanExporter,
+ SimpleSpanProcessor,
+)
+
+# FIXME This could likely be avoided by integrating this script into the
+# standard test running mechanisms.
+
+# Instrumentations are the glue that binds the OpenTelemetry API to the
+# frameworks and libraries in use, automatically creating spans and
+# propagating context as appropriate.
+trace.set_tracer_provider(TracerProvider())
+RequestsInstrumentor().instrument()
+
+# The SpanExporter receives the spans and sends them to the target location.
+span_processor = SimpleSpanProcessor(ConsoleSpanExporter())
+trace.get_tracer_provider().add_span_processor(span_processor)
+
+app = flask.Flask(__name__)
+app.wsgi_app = OpenTelemetryMiddleware(app.wsgi_app)
+
+
+@app.route("/verify-tracecontext", methods=["POST"])
+def verify_tracecontext():
+ """Upon reception of some payload, sends a request back to the designated
+ url.
+
+ This route is designed to be testable with the w3c tracecontext server /
+ client test.
+ """
+ for action in flask.request.json:
+ requests.post(
+ url=action["url"],
+ data=json.dumps(action["arguments"]),
+ headers={
+ "Accept": "application/json",
+ "Content-Type": "application/json; charset=utf-8",
+ },
+ timeout=5.0,
+ )
+ return "hello"
+
+
+if __name__ == "__main__":
+ try:
+ app.run(debug=True)
+ finally:
+ span_processor.shutdown()
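+
+# A sample payload for /verify-tracecontext (illustrative; the URL is a
+# placeholder). Each action triggers one outbound POST that carries the
+# propagated trace context:
+#
+#     [{"url": "http://127.0.0.1:5000/callback", "arguments": []}]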
diff --git a/tox.ini b/tox.ini
index ccf1cc0c14..6f85c783c9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -5,6 +5,7 @@ skip_missing_interpreters = True
envlist =
; Environments are organized by individual package, allowing
; for specifying supported Python versions per package.
+<<<<<<< HEAD
; opentelemetry-resource-detector-container
py3{7,8,9,10,11}-test-resource-detector-container
@@ -242,10 +243,79 @@ envlist =
docs
generate
+=======
+ py3{7,8,9,10,11}-opentelemetry-api
+ pypy3-opentelemetry-api
+
+ ; Test against both protobuf 3.x and 4.x
+ py3{7,8,9,10,11}-proto{3,4}-opentelemetry-protobuf
+ pypy3-proto{3,4}-opentelemetry-protobuf
+
+ py3{7,8,9,10,11}-opentelemetry-sdk
+ pypy3-opentelemetry-sdk
+
+ py3{7,8,9,10,11}-opentelemetry-semantic-conventions
+ pypy3-opentelemetry-semantic-conventions
+
+ ; docs/getting-started
+ py3{8,9,10,11}-opentelemetry-getting-started
+
+ py3{7,8,9,10,11}-opentelemetry-opentracing-shim
+ pypy3-opentelemetry-opentracing-shim
+
+ py3{7,8,9,10,11}-opentelemetry-opencensus-shim
+ ; opencensus-shim intentionally excluded from pypy3 (grpcio install fails)
+
+ py3{7,8,9,10,11}-opentelemetry-exporter-opencensus
+ ; exporter-opencensus intentionally excluded from pypy3
+
+ py3{7,8,9,10,11}-proto{3,4}-opentelemetry-exporter-otlp-proto-common
+
+ ; opentelemetry-exporter-otlp
+ py3{7,8,9,10,11}-opentelemetry-exporter-otlp-combined
+ ; intentionally excluded from pypy3
+
+ py3{7,8,9,10,11}-proto{3,4}-opentelemetry-exporter-otlp-proto-grpc
+ ; intentionally excluded from pypy3
+
+ py3{7,8,9,10,11}-proto{3,4}-opentelemetry-exporter-otlp-proto-http
+    pypy3-proto{3,4}-opentelemetry-exporter-otlp-proto-http
+
+ py3{7,8,9,10,11}-opentelemetry-exporter-prometheus
+ pypy3-opentelemetry-exporter-prometheus
+
+ ; opentelemetry-exporter-zipkin
+ py3{7,8,9,10,11}-opentelemetry-exporter-zipkin-combined
+ pypy3-opentelemetry-exporter-zipkin-combined
+
+ py3{7,8,9,10,11}-opentelemetry-exporter-zipkin-proto-http
+ pypy3-opentelemetry-exporter-zipkin-proto-http
+
+ py3{7,8,9,10,11}-opentelemetry-exporter-zipkin-json
+ pypy3-opentelemetry-exporter-zipkin-json
+
+ py3{7,8,9,10,11}-opentelemetry-propagator-b3
+ pypy3-opentelemetry-propagator-b3
+
+ py3{7,8,9,10,11}-opentelemetry-propagator-jaeger
+ pypy3-opentelemetry-propagator-jaeger
+
+ py3{7,8,9,10,11}-opentelemetry-test-utils
+ pypy3-opentelemetry-test-utils
+
+ lint
+ spellcheck
+ tracecontext
+ mypy,mypyinstalled
+ docs
+ docker-tests-proto{3,4}
+ public-symbols-check
+>>>>>>> upstream/main
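+
+; Individual environments from the list above can be run directly, e.g.:
+;   tox -e py311-opentelemetry-api
+;   tox -e py38-proto4-opentelemetry-exporter-otlp-proto-grpc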
[testenv]
deps =
-c dev-requirements.txt
+<<<<<<< HEAD
test: pytest
test: pytest-benchmark
coverage: pytest
@@ -359,12 +429,56 @@ changedir =
test-propagator-ot-trace: propagator/opentelemetry-propagator-ot-trace/tests
test-exporter-richconsole: exporter/opentelemetry-exporter-richconsole/tests
test-exporter-prometheus-remote-write: exporter/opentelemetry-exporter-prometheus-remote-write/tests
+=======
+ opentelemetry: pytest
+ opentelemetry: pytest-benchmark
+ opentelemetry: flaky
+ coverage: pytest
+ coverage: pytest-cov
+ mypy,mypyinstalled: mypy
+
+ ; proto 3 and 4 tests install the respective version of protobuf
+ proto3: protobuf~=3.19.0
+ proto4: protobuf~=4.0
+
+
+setenv =
+    ; Override CONTRIB_REPO_SHA via an environment variable when testing branches/commits other than main,
+    ; e.g.: CONTRIB_REPO_SHA=dde62cebffe519c35875af6d06fae053b3be65ec tox -e
+ CONTRIB_REPO_SHA={env:CONTRIB_REPO_SHA:"main"}
+ CONTRIB_REPO="git+https://github.com/open-telemetry/opentelemetry-python-contrib.git@{env:CONTRIB_REPO_SHA}"
+ mypy: MYPYPATH={toxinidir}/opentelemetry-api/src/:{toxinidir}/tests/opentelemetry-test-utils/src/
+
+changedir =
+ api: opentelemetry-api/tests
+ sdk: opentelemetry-sdk/tests
+ protobuf: opentelemetry-proto/tests
+ semantic-conventions: opentelemetry-semantic-conventions/tests
+ getting-started: docs/getting_started/tests
+ opentracing-shim: shim/opentelemetry-opentracing-shim/tests
+ opencensus-shim: shim/opentelemetry-opencensus-shim/tests
+
+ exporter-opencensus: exporter/opentelemetry-exporter-opencensus/tests
+ exporter-otlp-proto-common: exporter/opentelemetry-exporter-otlp-proto-common/tests
+ exporter-otlp-combined: exporter/opentelemetry-exporter-otlp/tests
+ exporter-otlp-proto-grpc: exporter/opentelemetry-exporter-otlp-proto-grpc/tests
+ exporter-otlp-proto-http: exporter/opentelemetry-exporter-otlp-proto-http/tests
+ exporter-prometheus: exporter/opentelemetry-exporter-prometheus/tests
+ exporter-zipkin-combined: exporter/opentelemetry-exporter-zipkin/tests
+ exporter-zipkin-proto-http: exporter/opentelemetry-exporter-zipkin-proto-http/tests
+ exporter-zipkin-json: exporter/opentelemetry-exporter-zipkin-json/tests
+
+ propagator-b3: propagator/opentelemetry-propagator-b3/tests
+ propagator-jaeger: propagator/opentelemetry-propagator-jaeger/tests
+ test-utils: tests/opentelemetry-test-utils/tests
+>>>>>>> upstream/main
commands_pre =
; Install without -e to test the actual installation
py3{7,8,9,10,11}: python -m pip install -U pip setuptools wheel
; Install common packages for all the tests. These are not needed in all the
; cases but it saves a lot of boilerplate in this file.
+<<<<<<< HEAD
test: pip install "opentelemetry-api[test] @ {env:CORE_REPO}#egg=opentelemetry-api&subdirectory=opentelemetry-api"
test: pip install "opentelemetry-semantic-conventions[test] @ {env:CORE_REPO}#egg=opentelemetry-semantic-conventions&subdirectory=opentelemetry-semantic-conventions"
test: pip install "opentelemetry-sdk[test] @ {env:CORE_REPO}#egg=opentelemetry-sdk&subdirectory=opentelemetry-sdk"
@@ -504,6 +618,79 @@ changedir = docs
commands =
sphinx-build -E -a -W -b html -T . _build/html
+=======
+ opentelemetry: pip install {toxinidir}/opentelemetry-api {toxinidir}/opentelemetry-semantic-conventions {toxinidir}/opentelemetry-sdk {toxinidir}/tests/opentelemetry-test-utils
+
+ protobuf: pip install {toxinidir}/opentelemetry-proto
+
+ getting-started: pip install -r requirements.txt
+ getting-started: pip install -e "{env:CONTRIB_REPO}#egg=opentelemetry-util-http&subdirectory=util/opentelemetry-util-http"
+ getting-started: pip install -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation&subdirectory=opentelemetry-instrumentation"
+ getting-started: pip install -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation-requests&subdirectory=instrumentation/opentelemetry-instrumentation-requests"
+ getting-started: pip install -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation-wsgi&subdirectory=instrumentation/opentelemetry-instrumentation-wsgi"
+ getting-started: pip install -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation-flask&subdirectory=instrumentation/opentelemetry-instrumentation-flask"
+
+ opencensus: pip install {toxinidir}/exporter/opentelemetry-exporter-opencensus
+
+ exporter-otlp-proto-common: pip install {toxinidir}/opentelemetry-proto
+ exporter-otlp-proto-common: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common
+
+ exporter-otlp-combined: pip install {toxinidir}/opentelemetry-proto
+ exporter-otlp-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common
+ exporter-otlp-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-grpc
+ exporter-otlp-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-http
+ exporter-otlp-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp
+
+ exporter-otlp-proto-grpc: pip install {toxinidir}/opentelemetry-proto
+ exporter-otlp-proto-grpc: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common
+ exporter-otlp-proto-grpc: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-grpc
+
+ exporter-otlp-proto-http: pip install {toxinidir}/opentelemetry-proto
+ exporter-otlp-proto-http: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common
+ exporter-otlp-proto-http: pip install {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-http[test]
+
+ opentracing-shim: pip install {toxinidir}/opentelemetry-sdk
+ opentracing-shim: pip install {toxinidir}/shim/opentelemetry-opentracing-shim
+
+ opencensus-shim: pip install {toxinidir}/opentelemetry-sdk
+ opencensus-shim: pip install {toxinidir}/shim/opentelemetry-opencensus-shim[test]
+
+ exporter-prometheus: pip install {toxinidir}/exporter/opentelemetry-exporter-prometheus
+
+ exporter-zipkin-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin-json
+ exporter-zipkin-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin-proto-http
+ exporter-zipkin-combined: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin
+
+ exporter-zipkin-proto-http: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin-json
+ exporter-zipkin-proto-http: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin-proto-http
+
+ exporter-zipkin-json: pip install {toxinidir}/exporter/opentelemetry-exporter-zipkin-json
+
+ b3: pip install {toxinidir}/propagator/opentelemetry-propagator-b3
+
+ propagator-jaeger: pip install {toxinidir}/propagator/opentelemetry-propagator-jaeger
+
+; To get an accurate coverage report,
+; we have to install packages in editable mode.
+ coverage: python {toxinidir}/scripts/eachdist.py install --editable
+
+; Using file:// here because otherwise tox invokes just "pip install
+; opentelemetry-api", leading to an error
+ mypyinstalled: pip install file://{toxinidir}/opentelemetry-api/
+
+commands =
+ opentelemetry: pytest {posargs}
+ coverage: {toxinidir}/scripts/coverage.sh
+
+ mypy: mypy --install-types --non-interactive --namespace-packages --explicit-package-bases opentelemetry-api/src/opentelemetry/
+
+; For test code, we don't want to enforce the full mypy strictness
+ mypy: mypy --install-types --non-interactive --namespace-packages --config-file=mypy-relaxed.ini opentelemetry-api/tests/
+
+; Test that mypy can pick up typeinfo from an installed package (otherwise,
+; implicit Any due to unfollowed import would result).
+ mypyinstalled: mypy --install-types --non-interactive --namespace-packages opentelemetry-api/tests/mypysmoke.py --strict
+>>>>>>> upstream/main
[testenv:spellcheck]
basepython: python3
@@ -521,6 +708,7 @@ deps =
-r dev-requirements.txt
commands_pre =
+<<<<<<< HEAD
python -m pip install "{env:CORE_REPO}#egg=opentelemetry-api&subdirectory=opentelemetry-api"
python -m pip install "{env:CORE_REPO}#egg=opentelemetry-semantic-conventions&subdirectory=opentelemetry-semantic-conventions"
python -m pip install "{env:CORE_REPO}#egg=opentelemetry-sdk&subdirectory=opentelemetry-sdk"
@@ -581,10 +769,34 @@ commands_pre =
python -m pip install -e {toxinidir}/propagator/opentelemetry-propagator-aws-xray[test]
python -m pip install -e {toxinidir}/propagator/opentelemetry-propagator-ot-trace[test]
python -m pip install -e {toxinidir}/opentelemetry-distro[test]
+=======
+ python -m pip install -e {toxinidir}/opentelemetry-api[test]
+ python -m pip install -e {toxinidir}/opentelemetry-semantic-conventions[test]
+ python -m pip install -e {toxinidir}/opentelemetry-sdk[test]
+ python -m pip install -e {toxinidir}/opentelemetry-proto[test]
+ python -m pip install -e {toxinidir}/tests/opentelemetry-test-utils[test]
+ python -m pip install -e {toxinidir}/shim/opentelemetry-opentracing-shim[test]
+ python -m pip install -e {toxinidir}/shim/opentelemetry-opencensus-shim[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-opencensus[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-grpc[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-http[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-otlp[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-prometheus[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-zipkin-json[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-zipkin-proto-http[test]
+ python -m pip install -e {toxinidir}/exporter/opentelemetry-exporter-zipkin[test]
+ python -m pip install -e {toxinidir}/propagator/opentelemetry-propagator-b3[test]
+ python -m pip install -e {toxinidir}/propagator/opentelemetry-propagator-jaeger[test]
+ # Pin protobuf version due to lint failing on v3.20.0
+ # https://github.com/protocolbuffers/protobuf/issues/9730
+ python -m pip install protobuf==3.19.4
+>>>>>>> upstream/main
commands =
python scripts/eachdist.py lint --check-only
+<<<<<<< HEAD
[testenv:docker-tests]
basepython: python3
deps =
@@ -611,11 +823,62 @@ deps =
remoulade>=0.50
mysqlclient~=2.1.1
pyyaml==5.3.1
+=======
+[testenv:docs]
+basepython: python3
+recreate = True
+deps =
+ -c {toxinidir}/dev-requirements.txt
+ -r {toxinidir}/docs-requirements.txt
+changedir = docs
+commands =
+ sphinx-build -E -a -W -b html -T . _build/html
+
+[testenv:tracecontext]
+basepython: python3
+deps =
+ # needed for tracecontext
+ aiohttp~=3.6
+ # needed for example trace integration
+ flask~=1.1
+ requests~=2.7
+    # Temporary fix: a breaking change introduced in markupsafe causes
+    # jinja and flask to break, so pin markupsafe until the jinja and
+    # flask deps are updated.
+    # See https://github.com/pallets/markupsafe/issues/282
+ markupsafe==2.0.1
+
+commands_pre =
+ pip install -e {toxinidir}/opentelemetry-api \
+ -e {toxinidir}/opentelemetry-semantic-conventions \
+ -e {toxinidir}/opentelemetry-sdk \
+ -e "{env:CONTRIB_REPO}#egg=opentelemetry-util-http&subdirectory=util/opentelemetry-util-http" \
+ -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation&subdirectory=opentelemetry-instrumentation" \
+ -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation-requests&subdirectory=instrumentation/opentelemetry-instrumentation-requests" \
+ -e "{env:CONTRIB_REPO}#egg=opentelemetry-instrumentation-wsgi&subdirectory=instrumentation/opentelemetry-instrumentation-wsgi"
+
+commands =
+ {toxinidir}/scripts/tracecontext-integration-test.sh
+
+[testenv:docker-tests-proto{3,4}]
+deps =
+ pytest==7.1.3
+ # Pinning PyYAML for issue: https://github.com/yaml/pyyaml/issues/724
+ PyYAML==5.3.1
+ # Pinning docker for issue: https://github.com/docker/compose/issues/11309
+ docker<7
+ docker-compose==1.29.2
+ requests==2.28.2
+
+ ; proto 3 and 4 tests install the respective version of protobuf
+ proto3: protobuf~=3.19.0
+ proto4: protobuf~=4.0
+>>>>>>> upstream/main
changedir =
tests/opentelemetry-docker-tests/tests
commands_pre =
+<<<<<<< HEAD
pip install "{env:CORE_REPO}#egg=opentelemetry-api&subdirectory=opentelemetry-api" \
"{env:CORE_REPO}#egg=opentelemetry-semantic-conventions&subdirectory=opentelemetry-semantic-conventions" \
"{env:CORE_REPO}#egg=opentelemetry-sdk&subdirectory=opentelemetry-sdk" \
@@ -642,10 +905,30 @@ commands_pre =
commands =
pytest {posargs}
+=======
+ pip freeze
+ pip install -e {toxinidir}/opentelemetry-api \
+ -e {toxinidir}/opentelemetry-semantic-conventions \
+ -e {toxinidir}/opentelemetry-sdk \
+ -e {toxinidir}/tests/opentelemetry-test-utils \
+ ; opencensus exporter does not work with protobuf 4
+ proto3: -e {toxinidir}/exporter/opentelemetry-exporter-opencensus \
+ -e {toxinidir}/opentelemetry-proto \
+ -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-common \
+ -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-grpc \
+ -e {toxinidir}/exporter/opentelemetry-exporter-otlp-proto-http \
+ -e {toxinidir}/exporter/opentelemetry-exporter-otlp
+ docker-compose up -d
+commands =
+ proto3: pytest {posargs}
+ ; opencensus exporter does not work with protobuf 4
+ proto4: pytest --ignore opencensus {posargs}
+>>>>>>> upstream/main
commands_post =
docker-compose down -v
+<<<<<<< HEAD
[testenv:generate]
deps =
-r {toxinidir}/gen-requirements.txt
@@ -654,3 +937,12 @@ commands =
{toxinidir}/scripts/generate_instrumentation_bootstrap.py
{toxinidir}/scripts/generate_instrumentation_readme.py
{toxinidir}/scripts/generate_instrumentation_metapackage.py
+=======
+[testenv:public-symbols-check]
+basepython: python3
+recreate = True
+deps =
+ GitPython==3.1.40
+commands =
+ python {toxinidir}/scripts/public_symbols_checker.py
+>>>>>>> upstream/main