# avaje-metrics
Please read the main documentation at: http://avaje-metrics.github.io
- Provides Timer, Counter, and Gauge based metrics
- Built-in standard metrics for the JVM and CGroup/K8s
- Much lighter weight than the other metrics libraries we compare ourselves to
- Intentionally does NOT use Histograms; instead uses a lighter weight Timer providing count, total, max, and mean
- Intentionally does NOT provide exponentially weighted moving averages (as they are too laggy)
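The lighter weight Timer statistics above can be sketched as a minimal aggregate. This is an illustration only, assuming a simple running-total design — it is not avaje-metrics' actual implementation:

```java
// Minimal sketch (illustration only, not the library's implementation) of
// lighter weight Timer statistics: count, total, max, and mean instead of
// a full histogram.
class TimerStats {
  private long count;
  private long totalMicros;
  private long maxMicros;

  /** Record one timed event, in microseconds. */
  void add(long micros) {
    count++;
    totalMicros += micros;
    if (micros > maxMicros) maxMicros = micros;
  }

  long count() { return count; }
  long total() { return totalMicros; }
  long max()   { return maxMicros; }
  long mean()  { return count == 0 ? 0 : totalMicros / count; }

  public static void main(String[] args) {
    TimerStats stats = new TimerStats();
    stats.add(100);
    stats.add(300);
    stats.add(200);
    // prints: count=3 total=600 max=300 mean=200
    System.out.println("count=" + stats.count() + " total=" + stats.total()
        + " max=" + stats.max() + " mean=" + stats.mean());
  }
}
```

Keeping only these four running values means recording an event is a handful of primitive operations with no per-event allocation, which is the trade-off described above.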
## Maven dependency
```xml
<dependency>
  <groupId>io.avaje</groupId>
  <artifactId>avaje-metrics</artifactId>
  <version>9.1</version>
</dependency>
```
Request timing does not add much to execution time per se, but it can produce a lot of output (reported in a background thread) and creates objects, so it adds some extra GC cost. In production you would expect to limit both the number of metrics you collect and the number of request timings you collect.
## Example Metrics output (typically reported every 60 seconds)

Below is a sample of the metric log output. The metrics are periodically collected and output to a file or sent to a repository.
```console
...
```
> Per request timing is a little more expensive to collect and can produce a lot of output. As such, it is expected that you only turn it on when needed - for example, for the next 5 invocations of CustomerResource.asBean() collect per request timings.
Per request timing can be set for specific timing metrics - for example, collect per request timing on the next 5 invocations of the CustomerResource.asBean() method.
Per request timing output shows the nested calls and where the time went for that single request. The p column shows the percentage of total execution time - for example, 81% of execution time was taken in Muse.iDoTheRealWorkAroundHere. Typically when reading this output you ignore/remove/collapse anything that has a percentage of 0.

CustomerResource.asBean took 612 milliseconds to execute. If you look at Muse.iDoTheRealWorkAroundHere it took 81% of the total execution time (500 milliseconds, 500204 microseconds).
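As a quick sanity check, the p column can be reproduced from the raw numbers quoted above (the class and variable names here are hypothetical, just for the arithmetic):

```java
// Reproduce the p column: Muse.iDoTheRealWorkAroundHere accounted for
// 500204 microseconds of a 612 millisecond (612000 microsecond) request.
public class PercentCheck {
  public static void main(String[] args) {
    long totalMicros = 612_000; // CustomerResource.asBean total
    long selfMicros = 500_204;  // Muse.iDoTheRealWorkAroundHere
    // Integer (truncating) division matches the whole-number percentages shown.
    long percent = (100 * selfMicros) / totalMicros;
    System.out.println(percent + "%"); // prints: 81%
  }
}
```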