JSON API Metrics - Documentation versus observed metrics

In the docs (Production Setup — Daml SDK 1.18.1 documentation), I can see that the JSON API should report meters with m1, m5, m15 and mean rates. However, after deploying the JSON API I can only see the “total” metrics, for example http_request_throughput_total and allocation_party_throughput_total. I’m wondering how I can see the m1/m5/m15/mean metrics and what the “total” metric means.

As a side note - I can see docs on JSON API metrics for SDK versions 1.18.1 and 1.18.0 but not for other SDK versions. I am using SDK version 1.17.1, and I’m not sure whether the metrics have changed from SDK 1.17 to 1.18.

The page about the production setup was created for 1.18.0 and the metrics documentation was moved there. Before that, the page was exclusively about metrics (here for version 1.17.1).

What are you using to visualize the metrics? If you use Prometheus to scrape metrics:

  1. the names can be slightly mangled (e.g. I believe dots are not allowed and are usually replaced with underscores; see the sketch after this list)
  2. I noticed in the past that, with both Graphite and Prometheus, there is a noticeable delay (at the very least minutes, but probably more) between when metrics collection starts and when the data is ultimately available for visualization
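I don’t know which bridge the JSON API uses for its Prometheus endpoint, but assuming it goes through the standard Prometheus Java client’s Dropwizard bridge (an assumption on my part, purely for illustration), a small sketch like this shows the mangling: dotted Dropwizard names come out with underscores, and a meter only surfaces through its count:

```java
// Sketch only: assumes the Prometheus Java client's Dropwizard bridge
// (io.prometheus:simpleclient_dropwizard). I don't know whether the JSON API
// actually uses this bridge internally.
import com.codahale.metrics.MetricRegistry;
import io.prometheus.client.Collector;
import io.prometheus.client.dropwizard.DropwizardExports;

public class MeterExportCheck {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        // A dotted Dropwizard name, similar in shape to the documented ones.
        registry.meter("daml.http_json_api.http_request_throughput").mark(3);

        // Collect the Prometheus samples the bridge would expose.
        for (Collector.MetricFamilySamples family : new DropwizardExports(registry).collect()) {
            System.out.println(family.name); // dots replaced by underscores
            for (Collector.MetricFamilySamples.Sample s : family.samples) {
                System.out.println("  " + s.name + " = " + s.value);
            }
        }
        // If I remember the bridge's behaviour correctly, a meter shows up only
        // through its count (a *_total sample); the m1/m5/m15/mean rates are not
        // part of what it exports.
    }
}
```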

I’ve used Grafana for the visualization and the different metrics appeared as documented, with the following names (see also the sketch after this list):

  • daml.http_json_api.http_request_throughput.m1
  • daml.http_json_api.http_request_throughput.m5
  • daml.http_json_api.http_request_throughput.m15
  • etc.
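For reference, those dotted names are the typical shape of a Dropwizard registry shipped to Graphite. Here is a standalone sketch of how a single meter becomes several dotted series with the stock GraphiteReporter; the host, port and interval are made up, and this is not the JSON API’s actual reporter configuration:

```java
// Sketch only: shows how a Dropwizard GraphiteReporter reports a single meter
// as several dotted time series. Host, port and interval are made-up values.
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class GraphiteMeterNames {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        registry.meter("daml.http_json_api.http_request_throughput").mark();

        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);
        // Each meter is reported as several dotted series under its name: the
        // count plus the 1-, 5- and 15-minute and mean rates (the exact suffix
        // spelling depends on the reporter version).
        reporter.start(10, TimeUnit.SECONDS);
    }
}
```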

We use Dropwizard metrics, and I have noticed in the past that there are sometimes aliases (e.g. value and number for cached gauges), so I suspect that “total” is an alias for the meter’s count.

Here you can find more information about Dropwizard metrics meters.
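To make the meter semantics concrete, here is a minimal standalone Dropwizard sketch (nothing JSON-API-specific; the metric name is just borrowed from above) showing that a single meter carries a monotonically increasing count, which I suspect is what surfaces as “total”, plus the mean and 1/5/15-minute exponentially weighted rates:

```java
// Standalone Dropwizard metrics sketch; not taken from the JSON API code base.
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class MeterDemo {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();
        Meter requests = registry.meter("http_request_throughput");

        // Simulate a few requests.
        for (int i = 0; i < 100; i++) {
            requests.mark();
            Thread.sleep(10);
        }

        // The meter itself exposes the count (the running total) and the rates.
        System.out.println("count     = " + requests.getCount());
        System.out.println("mean rate = " + requests.getMeanRate());
        System.out.println("m1 rate   = " + requests.getOneMinuteRate());
        System.out.println("m5 rate   = " + requests.getFiveMinuteRate());
        System.out.println("m15 rate  = " + requests.getFifteenMinuteRate());

        // A reporter (console, Graphite, Prometheus bridge, ...) decides which
        // of these values it publishes and under which names.
        ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .build()
                .report();
    }
}
```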

One clarification about the delay I mentioned above: it only appeared when I was starting from scratch, bringing up a new cluster with new instances of each tool (including Prometheus). Once metrics recording started, the data was up to date with what was coming from the system (to the best of my knowledge, of course) and, if I remember correctly, it was even backfilled to make up for the time during which numbers were not available.

We expose the metrics via the Prometheus endpoint that the JSON API provides; these are then ingested into Logstash and finally visualized in Kibana. The names are changed (they have underscores in them), but I am still not getting the m1, m5 or m15 values.