This example uses the same sample application that is used in the java/linux example.
The following tools are required to build and deploy the Java application and the Splunk OpenTelemetry Collector:
- Docker
- Kubernetes
- Helm 3
This example requires the Splunk Distribution of the OpenTelemetry Collector to be running and available within the Kubernetes cluster. Follow the instructions in Install the Collector for Kubernetes using Helm to install the collector in your Kubernetes cluster.
If you'd like to capture logs from the Kubernetes cluster, ensure the HEC URL and HEC token are provided when the collector is deployed.
Here's an example command that shows how to deploy the collector in Kubernetes using Helm:
helm install splunk-otel-collector --set="splunkObservability.accessToken=<Access Token>,clusterName=<Cluster Name>,splunkObservability.realm=<Realm>,gateway.enabled=false,splunkPlatform.endpoint=https://<HEC URL>:443/services/collector/event,splunkPlatform.token=<HEC token>,splunkPlatform.index=<Index>,splunkObservability.profilingEnabled=true,environment=<Environment Name>" splunk-otel-collector-chart/splunk-otel-collector
You'll need to substitute your access token, realm, and other information.
Open a command line terminal and navigate to the root of this example's directory.
For example:
cd ~/splunk-opentelemetry-examples/instrumentation/java/linux
To run the application in K8s, we'll need a Docker image for the application. We've already built one, so feel free to skip this section unless you want to use your own image.
You can use the following command to build the Docker image:
docker build --platform="linux/amd64" -t doorgame:1.0 doorgame
Note that the Dockerfile adds the latest version of the Splunk Java agent to the container image and then includes it as part of the Java startup command when the container is launched:
# Adds the latest version of the Splunk Java agent
ADD https://github.com/signalfx/splunk-otel-java/releases/latest/download/splunk-otel-javaagent.jar .
# Modifies the entry point
ENTRYPOINT ["java","-javaagent:splunk-otel-javaagent.jar","-jar","/app/profiling-workshop-all.jar"]
If you'd like to test the Docker image locally you can use:
docker run -p 9090:9090 doorgame:1.0
Then access the application by pointing your browser to http://localhost:9090.
We'll then need to push the Docker image to a repository that you have access to, such as your Docker Hub account. We've already done this for you, so feel free to skip this step unless you'd like to use your own image.
Specifically, we've pushed the image to the GitHub Container Registry using the following commands:
docker tag doorgame:1.0 ghcr.io/splunk/doorgame:1.0
docker push ghcr.io/splunk/doorgame:1.0
Now that we have our Docker image, we can deploy the application to our Kubernetes cluster. We'll do this by using the following kubectl command to deploy the doorgame.yaml manifest file:
kubectl apply -f ./doorgame/doorgame.yaml
The Docker image already includes the splunk-otel-javaagent.jar file and adds it to the Java startup command. The doorgame.yaml manifest file adds to this configuration by setting the following environment variables, which configure how the Java agent gathers and exports data to the collector running within the cluster:
env:
  - name: PORT
    value: "9090"
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
  - name: OTEL_SERVICE_NAME
    value: "doorgame"
  - name: OTEL_PROPAGATORS
    value: "tracecontext,baggage"
  - name: SPLUNK_PROFILER_ENABLED
    value: "true"
  - name: SPLUNK_PROFILER_MEMORY_ENABLED
    value: "true"
To test the application, we'll need to get the Cluster IP:
kubectl describe svc doorgame | grep IP:
Then we can access the application by pointing our browser to http://<IP Address>:81.
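That port mapping comes from the Service defined in doorgame.yaml, which exposes port 81 and forwards traffic to the container's port 9090. Here's a minimal sketch of such a Service (the service type is an assumption and may differ in the actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: doorgame
spec:
  type: LoadBalancer        # assumption: could also be NodePort, depending on the cluster
  selector:
    app: doorgame
  ports:
    - port: 81              # the port used in the browser
      targetPort: 9090      # the port the application listens on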
If you're testing with minikube, use the following command to connect to the service:
minikube service doorgame
The application should look like the following:
After a minute or so, you should start to see traces for the Java application appearing in Splunk Observability Cloud:
Note that the trace has been decorated with Kubernetes attributes, such as k8s.pod.name and k8s.pod.uid. This allows us to retain context when we navigate from APM to infrastructure data within Splunk Observability Cloud.
Metrics are collected by splunk-otel-javaagent.jar automatically. For example, the jvm.memory.used metric shows us the amount of memory used in the JVM by type of memory:
The Splunk Distribution of OpenTelemetry Java automatically adds trace context to logs. However, it doesn't add this context to the actual log file (unless you explicitly configure the logging framework to do so). Instead, the trace context is added behind the scenes to the log events exported to the OpenTelemetry Collector.
For example, if we add the debug exporter to the logs pipeline of the collector, we can see that the trace_id and span_id have been added to the following log event for our application:
splunk-otel-collector-1 | ScopeLogs #0
splunk-otel-collector-1 | ScopeLogs SchemaURL:
splunk-otel-collector-1 | InstrumentationScope com.splunk.profiling.workshop.DoorGame
splunk-otel-collector-1 | LogRecord #0
splunk-otel-collector-1 | ObservedTimestamp: 2024-10-09 22:24:20.878047 +0000 UTC
splunk-otel-collector-1 | Timestamp: 2024-10-09 22:24:20.87802 +0000 UTC
splunk-otel-collector-1 | SeverityText: INFO
splunk-otel-collector-1 | SeverityNumber: Info(9)
splunk-otel-collector-1 | Body: Str(Starting a new game)
splunk-otel-collector-1 | Trace ID: 5d6747dc9a1f69a5879c076ac0943e05
splunk-otel-collector-1 | Span ID: d596e1684d358fe0
splunk-otel-collector-1 | Flags: 1
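If you'd like to reproduce this output, the debug exporter can be added to the logs pipeline of the collector configuration. Here's a minimal sketch of such a configuration fragment; the receiver and processor names are placeholders, so keep whatever your existing logs pipeline already uses and simply add debug alongside the existing exporters:

exporters:
  debug:
    verbosity: detailed      # prints full log records, including the Trace ID and Span ID

service:
  pipelines:
    logs:
      receivers: [otlp]      # placeholder: keep your existing receivers
      processors: [batch]    # placeholder: keep your existing processors
      exporters: [debug]     # add debug alongside the exporters already in the pipeline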
The OpenTelemetry Collector can be configured to export log data to the Splunk platform using the Splunk HEC exporter. The logs can then be made available to Splunk Observability Cloud using Log Observer Connect. This provides full correlation between the spans generated by the Java instrumentation and the corresponding metrics and logs.
Here's an example of what that looks like. We can see that the trace includes a Related Content link at the bottom right:
Clicking on this link brings us to Log Observer Connect, which filters on log entries related to this specific trace:
The default configuration may result in duplicate log events being sent to the Splunk platform. The first set of log events comes from the filelog receiver, which reads the logs of all Kubernetes pods. The second set comes from the Java agent, whose logs are sent to the collector via OTLP.
There are two ways to avoid duplicate logs:
The first option is to set the OTEL_LOGS_EXPORTER environment variable to "none" to prevent the Java agent from exporting logs (the equivalent JVM property is -Dotel.logs.exporter).
This option is simple; however, we'd lose the trace context that the Java agent automatically adds when it exports logs. To address this, we could follow the steps in Configure your logging library to add the trace context explicitly to the logging configuration used by the application.
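For example, disabling the agent's log exporter could be done by adding one more variable to the env block in doorgame.yaml shown earlier. Here's a minimal sketch of just the addition:

env:
  # ... existing environment variables shown earlier ...
  - name: OTEL_LOGS_EXPORTER
    value: "none"            # stops the Java agent from exporting log records via OTLP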
The second option is to add an annotation to the Kubernetes deployment manifest, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doorgame
spec:
  selector:
    matchLabels:
      app: doorgame
  template:
    metadata:
      labels:
        app: doorgame
      annotations:
        splunk.com/exclude: "true"
This ensures that the collector will not export logs for any pods with this annotation. To apply the changes, run:
kubectl apply -f ./doorgame/doorgame.yaml
With this option, we'd also need to update the collector configuration by providing a custom values.yaml, like the example here.
This configuration ensures that the splunk.com/exclude annotation is only applied to the logs that come from the filelog and fluentforward receivers, so that any logs that come from the Java agent via the otlp receiver are not excluded.
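The exact override depends on the chart version, but conceptually it splits log collection into two pipelines so that the exclude filter applies only to logs gathered from the pods' files. Here's a rough sketch; the pipeline, processor, and exporter names are placeholders rather than the chart's actual defaults, so refer to the linked values.yaml for the real configuration:

agent:
  config:
    service:
      pipelines:
        logs:                # filelog/fluentforward logs: exclude filter stays in place
          receivers: [filelog, fluentforward]
          processors: [memory_limiter, k8sattributes, filter/exclude, batch]
          exporters: [splunk_hec/platform_logs]
        logs/otlp:           # Java agent logs arriving via OTLP: no exclude filter
          receivers: [otlp]
          processors: [memory_limiter, k8sattributes, batch]
          exporters: [splunk_hec/platform_logs]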
We would then update the collector configuration with the following command:
helm upgrade splunk-otel-collector -f ../k8s/values.yaml splunk-otel-collector-chart/splunk-otel-collector