An implementation of Etsy's statsd in Go, based on original code from @kisielk.
The project provides both a server called "gostatsd", which works much like Etsy's version, and a library for developing customized servers.
Backends are pluggable and only need to support the backend interface.
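Concretely, a backend is a type that satisfies that interface. The sketch below is illustrative only: the method set shown (`Name`, `SendMetricsAsync`, `SendEvent`) and the `gostatsd.MetricMap`, `gostatsd.SendCallback`, and `gostatsd.Event` types are assumptions about the interface's shape; consult the package documentation for the authoritative definition.

```go
package nullbackend

import (
	"context"

	"github.com/atlassian/gostatsd"
)

// discard is a toy backend that accepts and drops everything.
type discard struct{}

// Name identifies the backend; the server matches it against the
// names passed via --backends.
func (discard) Name() string { return "null" }

// SendMetricsAsync receives a flushed batch of aggregated metrics.
// A real backend would encode and transmit them, then report the
// outcome through the callback.
func (discard) SendMetricsAsync(ctx context.Context, metrics *gostatsd.MetricMap, cb gostatsd.SendCallback) {
	cb(nil) // nil error slice: everything "sent" successfully
}

// SendEvent forwards a single event to the backend.
func (discard) SendEvent(ctx context.Context, e *gostatsd.Event) error {
	return nil
}
```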
Being written in Go, it is able to use all cores, which makes it easy to scale up the server based on load. The server can also be run in a highly-available configuration and scaled out; see Load balancing and scaling out.
Gostatsd currently targets Go 1.10.2. There are no known hard dependencies in the code between 1.9 and 1.10.2, but some may be introduced in the future.
From the `gostatsd` directory run `make build`. The binary will be built in `build/bin/<arch>/gostatsd`.
You will need to install the build dependencies by running `make setup` in the `gostatsd` directory. This must be done before the first build, and again if the dependencies change. If you are unable to build `gostatsd`, please try running `make setup` again before reporting a bug.
`gostatsd --help` gives a complete description of available options and their defaults. You can use `make run` to run the server with just the `stdout` backend to display info on screen.
You can also run it through Docker by running `make run-docker`, which will use `docker-compose` to run `gostatsd` with a graphite backend and a grafana dashboard.
While not generally tested on Windows, it should work. Maximum throughput is likely to be better on a Linux system, however.
Backends and cloud providers are configured using a `toml`, `json`, or `yaml` configuration file passed via the `--config-path` flag. For all configuration options, see the source code of the backends you are interested in. A configuration file might look like this:
```toml
[graphite]
address = "192.168.99.100:2003"

[datadog]
api_key = "my-secret-key" # Datadog API key required.

[statsdaemon]
address = "docker.local:8125"
disable_tags = false

[aws]
max_retries = 4

[newrelic]
address = "http://localhost:8001/v1/data"
event-type = "GoStatsD"
# see full configuration options further below
```
The New Relic backend sends an HTTP payload to the New Relic Infrastructure Agent via its built-in HTTP server. Sending via the built-in HTTP server provides additional features, such as automatically applying any extra metadata the host may have (AWS tags, instance type, host information, labels, etc.) to the event.
The payload structure required to be accepted by the agent can be viewed here.
To enable the HTTP server, modify `/etc/newrelic.yml` to include the following, and restart the agent (Step 1.2):
```yaml
http_server_enabled: true
http_server_host: 127.0.0.1 # (default host)
http_server_port: 8001 # (default port)
```
Additional options are available to rename attributes if required.
```toml
[newrelic]
tag-prefix = ""
metric-name = "name"
metric-type = "type"
per-second = "per_second"
value = "value"
timer-min = "min"
timer-max = "max"
timer-count = "samples_count"
timer-mean = "samples_mean"
timer-median = "samples_median"
timer-stddev = "samples_std_dev"
timer-sum = "samples_sum"
timer-sumsquare = "samples_sum_squares"
```
By default, timer metrics will result in aggregated metrics of the form (exact name varies by backend):
```
<base>.Count
<base>.CountPerSecond
<base>.Mean
<base>.Median
<base>.Lower
<base>.Upper
<base>.StdDev
<base>.Sum
<base>.SumSquares
```
In addition, the following aggregated metrics will be emitted for each configured percentile:
```
<base>.Count_XX
<base>.Mean_XX
<base>.Sum_XX
<base>.SumSquares_XX
<base>.Upper_XX   - for positive only
<base>.Lower_-XX  - for negative only
```
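For example, with a 90th percentile configured, a timer would additionally emit `<base>.Count_90`, `<base>.Mean_90`, `<base>.Upper_90`, and so on.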
These can be controlled through the `disabled-sub-metrics` configuration section:
```toml
[disabled-sub-metrics]
# Regular metrics
count=false
count-per-second=false
mean=false
median=false
lower=false
upper=false
stddev=false
sum=false
sum-squares=false

# Percentile metrics
count-pct=false
mean-pct=false
sum-pct=false
sum-squares-pct=false
lower-pct=false
upper-pct=false
```
By default (for compatibility), they are all false and the metrics will be emitted.
The server listens for UDP packets on the address given by the `--metrics-addr` flag, aggregates them, then sends them to the backend servers given by the `--backends` flag (a space-separated list of backend names).
Currently supported backends are:
- graphite
- datadog
- statsdaemon
- stdout
- cloudwatch
- newrelic
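For example, `gostatsd --metrics-addr=':8125' --backends='graphite stdout'` would listen on UDP port 8125 and flush aggregated metrics to both the graphite and stdout backends (exact quoting may vary with your shell).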
The format of each metric is:
```
<bucket name>:<value>|<type>\n
```
- `<bucket name>` is a string like `abc.def.g`, just like a graphite bucket name
- `<value>` is a string representation of a floating point number
- `<type>` is one of `c`, `g`, or `ms` for "counter", "gauge", and "timer" respectively.
A single packet can contain multiple metrics, each ending with a newline.
Optionally, `gostatsd` supports sample rates (for simple counters, and for timer counters) and tags:
- `<bucket name>:<value>|c|@<sample rate>\n` where `sample rate` is a float between 0 and 1
- `<bucket name>:<value>|c|@<sample rate>|#<tags>\n` where `tags` is a comma separated list of tags
- `<bucket name>:<value>|<type>|#<tags>\n` where `tags` is a comma separated list of tags
The tag format is either `simple` or `key:value`.
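The same line protocol can be spoken from any language. Below is a minimal Go sketch using only the standard library; the host, port, metric name, and tags are illustrative:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Assumes gostatsd is listening on the default UDP port 8125.
	conn, err := net.Dial("udp", "localhost:8125")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// A counter incremented by 10 at a 50% sample rate, with one
	// simple tag and one key:value tag.
	fmt.Fprint(conn, "abc.def.g:10|c|@0.5|#foo,env:dev\n")
}
```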
A simple way to test your installation or send metrics from a script is to use `echo` and the netcat utility `nc`:
```
echo 'abc.def.g:10|c' | nc -w1 -u localhost 8125
```
Many metrics for the internal processes are emitted. See METRICS.md for details. Go expvar is also exposed if the `--profile` flag is used.
By default `gostatsd` will batch read multiple packets to optimise read performance. The amount of memory allocated for these read buffers is determined by the config options:
```
max-readers * receive-batch-size * 64KB (max packet size)
```
The metric `avg_packets_in_batch` can be used to track the average number of datagrams received per batch, and the `--receive-batch-size` flag used to tune it. There may be some benefit to tuning the `--max-readers` flag as well.
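For example, 8 readers with a batch size of 50 (illustrative values, not necessarily the defaults) would reserve 8 × 50 × 64KB = 25,600KB, or 25MB, of read buffers.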
In your source code:
```go
import "github.com/atlassian/gostatsd/pkg/statsd"
```
Documentation can be found via `go doc github.com/atlassian/gostatsd/pkg/statsd` or at https://godoc.org/github.com/atlassian/gostatsd/pkg/statsd.
Pull requests, issues and comments welcome. For pull requests:
- Add tests for new features and bug fixes
- Follow the existing style
- Separate unrelated changes into multiple pull requests
See the existing issues for things to start contributing.
For bigger changes, make sure you start a discussion first by creating an issue and explaining the intended change.
Atlassian requires contributors to sign a Contributor License Agreement, known as a CLA. This serves as a record stating that the contributor is entitled to contribute the code/documentation/translation to the project and is willing to have it used in distributions and derivative works (or is willing to transfer ownership).
Prior to accepting your contributions we ask that you please follow the appropriate link below to digitally sign the CLA. The Corporate CLA is for those who are contributing as a member of an organization and the individual CLA is for those contributing as an individual.
Copyright (c) 2012 Kamil Kisiel. Copyright (c) 2016-2017 Atlassian Pty Ltd and others.
Licensed under the MIT license. See LICENSE file.