- NewRelic backend added by Kav91, see README.md for options and details
- Added support for sampling timers to be compatible with original statsd.
- Fixed `--statser-type` not being applied
- Added the ability to filter tags and metrics, see FILTERING.md for details
- New Cloudwatch backend contributed by JorgenEvens (see the config sketch below)
  - Backend name is `cloudwatch`
  - Contains the single option `namespace`, defaulting to `StatsD`
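
  For illustration, a minimal sketch of enabling this backend in a TOML config file (TOML per the configuration entry near the end of this changelog; the `[cloudwatch]` section name is an assumption based on the backend name, only `namespace` is named by this entry):

  ```toml
  # Hypothetical sketch: enable the cloudwatch backend and override its namespace.
  backends = ['cloudwatch']

  [cloudwatch]
  namespace = 'MyService'  # defaults to "StatsD" when omitted
  ```
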
- Duplicate tags are filtered out so they are aggregated correctly.
- Datadog user-agent has changed to `gostatsd` by default. Can now be configured.
- New Datadog option: `user-agent`, configures the user agent supplied to Datadog. Use `python-requests/2.6.0 CPython/2.7.10` for old behavior.
- Roll back change to configuration, due to spf13/viper#380. Documentation is still valid.
- Fix a bug in the cache provider where transient failures were replacing good cached data
- Started passing around a logger, not used everywhere yet
- Documentation fixes
- Added `enable-http2` flag for Datadog backend to control HTTP/2 support, defaults to `false`
- Build with Go 1.10.2
- Fixed a bug making the service not work on Windows.
- Add new flag `--statser-type` to make internal metric destination configurable. Defaults to `internal`, also supports `logging` and `null`
- Fixes rate limiter on bad lines. A value <= 0 disables it entirely.
- Parses histogram metrics as timers
- Log bad lines with rate limit
- Add new flag `--bad-lines-per-minute`, controls the rate limit on logging lines which fail to parse. Defaults to `0`. Supports floats. (See the config sketch below.)
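
  For illustration only, these flags could also be set as config keys, assuming (as is typical for viper-based configuration, per the "Read configuration from environment, flags and config" entry later in this changelog) that keys mirror the flag names:

  ```toml
  # Hypothetical sketch: key names assumed to mirror the flag names.
  statser-type = 'logging'      # internal metric destination: internal (default), logging or null
  bad-lines-per-minute = 0.5    # rate limit for logging unparseable lines; floats supported, default 0
  ```
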
- Memory/GC optimisation with buffers being reused in the Datadog backend
- Metrics in the cloud provider cache should now be correct
- BREAKING: The way "per second" rates are calculated has changed from "value / configured flush interval" to "value / actual flush interval".
- Uses the default EC2 credentials
- Timer sub-metrics now have a configuration option for opt-out. See README.md for details.
- Build -race on ubuntu:16.04 instead of 16.10
- Performance work, round 3
- Performance work, round 2
- Add new flag `--estimated-tags`, pre-allocates the Tags array. Defaults to `4`.
- Fix index out of range error and tag corruption in AWS CP
- Performance work
- More metrics rework. Internal metrics are now in-phase with flushing.
- Better distribution of metrics between aggregators when received from multiple hosts.
- New Datadog option: `max_requests`, the maximum number of metric HTTP requests that can be made by the Datadog backend.
- BREAKING: Additional and renamed metrics in the flusher, see METRICS.md
- BREAKING: Heartbeat changes:
  - `--heartbeat-interval` changed to `--heartbeat-enabled`.
  - Heartbeat is in-phase with the flush.
  - Heartbeat sends a value of 1, so a sum aggregation can be applied.
- New internal metrics around the cloud provider cache and AWS cloud provider, see METRICS.md for details.
- Add new flag `--conn-per-reader` adding support for a separate connection per reader (requires system support for reusing addresses)
- Refactor MetricDispatcher into BackendHandler satisfying separate MetricHandler and EventHandler interfaces
- Add TagHandler for static tags
- BREAKING: Independent scaling of datagram reading and parsing
  - Separate MetricReceiver into DatagramReceiver and DatagramParser
  - Allow independent scaling of parser workers with new flag `--max-parsers`. Defaults to number of CPU cores.
  - Change default of `--max-readers` flag to min(8, number of CPU cores)
- GC optimization with buffer reuse
- New docker image suffix, `version-syms`, includes symbol table
- New flag `--receive-batch-size` to set datagram batch size. Defaults to `50`.
- Batch reading should improve performance by using `recvmmsg(2)`. This has additional memory considerations documented in README.md, and can be tuned by tracking the `avg_packets_in_batch` metric and adjusting as necessary. (See the config sketch below.)
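
  A sketch of how the datagram read/parse tuning knobs from the surrounding entries might be set together (key names assumed to mirror the flag names; values are illustrative, not recommendations):

  ```toml
  # Hypothetical sketch of the datagram read/parse tuning options.
  max-readers = 4          # defaults to min(8, number of CPU cores)
  max-parsers = 8          # defaults to the number of CPU cores
  receive-batch-size = 50  # datagram batch size used with recvmmsg(2); default 50
  conn-per-reader = true   # separate connection per reader; requires address-reuse support
  ```
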
- New flag `--heartbeat-interval` sends heartbeat metrics on an interval, tagged by version and commit. Defaults to `0`. Set to `0` to disable.
- BREAKING: use space instead of comma to specify multiple values for the following parameters: `backends`, `percent-threshold`, `default-tags` and `internal-tags`.
- BREAKING: Removed Datadog `dual_stack` option in favor of explicit network selection.
- New Datadog option: `network` allows control of the network protocol used, typical values are `tcp`, `tcp4`, or `tcp6`. Defaults to `tcp`. See Go's `net.Dial` documentation for further information. (See the config sketch below.)
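
  A sketch of how the Datadog options named in this changelog could look in a config file (the `[datadog]` section name and any other required settings, such as credentials, are assumptions; see README.md for the authoritative list):

  ```toml
  # Hypothetical sketch of Datadog backend options named in this changelog.
  [datadog]
  network = 'tcp4'          # tcp (default), tcp4 or tcp6
  compress_payload = true   # default true
  max_requests = 10         # cap on metric HTTP requests; illustrative value
  user-agent = 'gostatsd'   # default; use the python-requests string for the old behavior
  ```
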
- New Datadog option: `dual_stack` allows control of RFC-6555 "Happy Eyeballs" for IPv6. Defaults to `false`.
- No functional changes over previous version. Release tag to trigger build process.
- Build with Go 1.9
- Add support for compression in Datadog payload.
- New Datadog option: `compress_payload` allows compression of Datadog payload. Defaults to `true`.
- Add staged shutdown
- Update logrus import path
- New flag `--internal-tags` configures tags on internal metrics (default none)
- New flag `--internal-namespace` configures namespace on internal metrics (default "statsd")
- BREAKING: Significant internal metric changes, including new names. See METRICS.md for details
- New flag `--ignore-host` prevents capturing of source IP address. Hostname can be provided by the client via a `host:` tag.
- Handle EC2 `InvalidInstanceID.NotFound` error gracefully
- Build with Go 1.8
- Make cloud handler cache configurable
- Batch AWS Describe Instances call
- Fix a deadlock in Cloud Handler
- Minor internals refactoring
- Do not log an error if source instance was not found
- BREAKING: Renamed `aws.http_timeout` to `aws.client_timeout` for consistency
- BREAKING: Renamed `datadog.timeout` to `datadog.client_timeout` for consistency
- Tweaked some timeouts on HTTP clients
- BREAKING: Renamed parameter `maxCloudRequests` to `max-cloud-requests` for consistency
- BREAKING: Renamed parameter `burstCloudRequests` to `burst-cloud-requests` for consistency
- Fix a bunch of linting issues
- Run tests concurrently
- Configure various timeouts on HTTP clients
- Update dependencies
- Big internals refactoring and cleanup
- Fix bug in Graphite backend introduced in 0.15.0 (#75)
- Fix bug where max queue size parameter was not applied properly
- Use context in more places
- Stricter TLS configuration
- Support TCP transport and write timeouts in statsd backend
- Reuse UDP/TCP sockets to reduce number of DNS lookups in statsd and graphite backends
- Reuse memory buffers in more cases
- Update dependencies
- Go 1.7
- Config option to disable sending tags to statsdaemon backend
- Fix NPE if cloud provider is not specified
- Minor internal cleanups
- Some additional tweaks to flushing code
- Minor refactorings
- Fix bug in rate calculation for Datadog
- Fix bug introduced in 0.14.6 in Datadog backend when invalid hostname was sent for metrics
- Set tags for own metrics (#55)
- Add expvar support
- Minor internals refactoring and optimization
- Limit max concurrent events (#24)
- Memory consumption optimizations
- Improved and reworked tags support; Unicode characters are preserved now.
- Minor internals refactoring and optimization
- Send start and stop events (#21)
- Linux binary is now built inside of Docker container rather than on the host
- Fix data race in Datadog backend (#44)
- Minor internals refactoring
- Fix batching support in Datadog backend
- Better Graphite support (#35)
- Minor internals refactoring
- Update to Alpine 3.4
- Cap request size in Datadog backend (#27)
- Set timeouts on Dials and tcp sends (#23)
- Reuse HTTP connections in Datadog backend
- Async rate limited cloud provider lookups (#22, #3)
- Internals refactoring
- Add configurable CPU profiler endpoint
- Increase default Datadog timeouts to reduce number of errors in the logs
- Log intermediate errors in Datadog backend
- Consistently set timeouts for AWS SDK service clients
- Update all dependencies
- Fix Datadog backend retry error #18
- Various internal improvements
- Fix goroutine start bug in dispatcher - versions 0.12.6, 0.12.7 do not work properly
- Datadog events support
- Remove deprecated -f flag passed to Docker tag command
- Rename num_stats back to numStats to be compatible with original statsd
- Add a null backend for benchmarking
- Internals refactoring
- Implement negative lookup cache (#8)
- Read configuration from environment, flags and config file
- Do not multiply number of metric workers
- Do not replace dash and underscore in metric names and tags
- Fix handling of a specific NaN case
- Minor refactorings for linter
- Use pointer to metric instead of passing by value
- Optimise (5-10x) performance of line parser
- Revert dropping messages; block instead
- Use different values for the number of workers that read from the socket, process messages, and process metrics
- Minor fixes and performance improvements
- Calculate per second counters since last flush time instead of interval
- Process internal stats as standard metrics
- Add benchmarks
- Improve performance for buffering data out of the socket
- Drop messages and metrics instead of blocking when overloaded
- Normalize tags consistently but don't force lower case for metric names
- Add support for cloud plugins to retrieve host information e.g. instance id, aws tags, etc.
- Add load testing tool
- Performance improvements for receiver and aggregator
- Use goroutines to read net.PacketConn instead of a buffered channel
- Graphite: replace dots with underscores in metric names (tags)
- Limit concurrency by using buffered channels for incoming messages
- Discard empty tags
- Datadog: add retries on post errors
- Add support for default tags
- statsd backend: ensure the UDP datagram max size is not exceeded when sending metrics
- Datadog: normalise tags to always be of the form "key:value"
- Reset counters, gauges, timers, etc. after an expiration delay
- Datadog: add interval to metric payload
- Datadog: use source ip address as hostname
- Remove extraneous dot in metric names for stdout and graphite backends when there are no tags
- Add more and improve internal statsd stats
- Datadog: set dogstatsd version and user-agent headers
- Datadog: use `rate` type for per second metrics
- Fix issue with tags overwriting metrics without tags
- Fix undesired overwriting of metrics
- Datadog: rollback change to metric names for preserving backward compatibility
- Add more timer aggregations, e.g. mean, standard deviation, etc.
- Improve `graphite` backend reliability by re-opening the connection on each metrics send
- Fix value of metrics displayed in web UI
- Improve web console UI look and feel
- Add statsd backend
- Add support for global metrics namespace
- Datadog: remove api url from config
- Add support for "Set" metric type
- Datadog: send number of metrics received
- Use appropriate log level for errors
- Stop logging the URL on each flush in the Datadog backend
- Use Alpine base image for Docker instead of scratch to avoid CA root certificate errors
- Add datadog backend
- Fix reset of metrics
- Implement tags handling: use tags in metric names
- Implement support for pluggable backends
- Add basic stdout backend
- Configure backends via toml, yaml or json configuration files
- Add support for tags and sample rate
- Initial release