<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Metrics-generator on Grafana Labs</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/</link><description>Recent content in Metrics-generator on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/tempo/v2.2.x/metrics-generator/index.xml" rel="self" type="application/rss+xml"/><item><title>Active series</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/active-series/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/active-series/</guid><content><![CDATA[&lt;h1 id=&#34;active-series&#34;&gt;Active series&lt;/h1&gt;
&lt;p&gt;An active series is a time series that receives new data points or samples. When you stop writing new data points to a time series, shortly afterwards it is no longer considered active.&lt;/p&gt;
&lt;p&gt;Metrics generated by Tempo&amp;rsquo;s metrics generator can provide both RED (Rate/Error/Duration) metrics and interdependency graphs between services in a trace (the Service Graph functionality in Grafana).
These capabilities rely on a set of generated span metrics and service metrics.&lt;/p&gt;
&lt;p&gt;Spans ingested by Tempo can create many metric series. However, this doesn&amp;rsquo;t mean that a new active series is created every time a span is ingested.&lt;/p&gt;
&lt;p&gt;The number of active series generated depends on the label pairs generated from span data that are associated with the metrics, similar to other Prometheus-formatted data.&lt;/p&gt;
&lt;p&gt;For additional information, refer to the &lt;a href=&#34;/docs/grafana-cloud/billing-and-usage/active-series-and-dpm/#active-series&#34;&gt;Active series and DPM documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;active-series-calculation&#34;&gt;Active series calculation&lt;/h2&gt;
&lt;p&gt;Active series for a metric increase when a new value for a label key is introduced. For example, the &lt;code&gt;span_kind&lt;/code&gt; label has a total of five possible values, and the &lt;code&gt;status_code&lt;/code&gt; label has a total of three possible values.&lt;/p&gt;
&lt;p&gt;At first glance, you might assume that at least 15 (5*3) active series will be generated for each span, but this isn&amp;rsquo;t the case.&lt;/p&gt;
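&lt;p&gt;The reason is that an active series exists only for label combinations that are actually observed. A minimal sketch of that bookkeeping (illustrative only, not Tempo&amp;rsquo;s implementation):&lt;/p&gt;

```go
package main

import "fmt"

// labelSet is one unique combination of label values for a metric.
type labelSet struct {
	service, spanName, spanKind, statusCode string
}

// countSeries simulates the bookkeeping a metrics generator performs:
// every span increments the counter for its label combination, and a new
// active series appears only when a previously unseen combination arrives.
func countSeries(spans []labelSet) map[labelSet]int {
	series := map[labelSet]int{}
	for _, s := range spans {
		series[s]++
	}
	return series
}

func main() {
	// 1000 spans that all carry identical label values...
	spans := make([]labelSet, 1000)
	for i := range spans {
		spans[i] = labelSet{"Service 1", "span1", "SPAN_KIND_INTERNAL", "STATUS_CODE_OK"}
	}
	// ...produce exactly one active series, with a counter value of 1000.
	fmt.Println(len(countSeries(spans))) // 1
}
```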
&lt;p&gt;Let&amp;rsquo;s consider a span that&amp;rsquo;s emitted from some piece of code in a service:&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/static/img/docs/tempo/SingleSpan.jpeg&#34;
  alt=&#34;Single span visualization&#34; width=&#34;371&#34;
     height=&#34;350&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s a single service with a single span.
If the code inside the span never leaves the service, then the &lt;code&gt;span_kind&lt;/code&gt; label generated by the metrics generator will be &lt;code&gt;SPAN_KIND_INTERNAL&lt;/code&gt; and never deviate. It&amp;rsquo;ll never be one of the other four possible values.&lt;/p&gt;
&lt;p&gt;Similarly, if the code inside the span never errors, it&amp;rsquo;ll only have the &lt;code&gt;STATUS_CODE_OK&lt;/code&gt; state for the &lt;code&gt;status_code&lt;/code&gt; label.
This means that the metrics generator will only generate a single active series, where the service name will be &lt;em&gt;Service 1&lt;/em&gt; and the span name will be &lt;em&gt;span1&lt;/em&gt;.
If we looked at the Prometheus data for the &lt;code&gt;traces_spanmetrics_calls_total&lt;/code&gt; metric, we&amp;rsquo;d see a single active series:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;service&lt;/th&gt;
              &lt;th&gt;span_name&lt;/th&gt;
              &lt;th&gt;span_kind&lt;/th&gt;
              &lt;th&gt;status_code&lt;/th&gt;
              &lt;th&gt;Metric value&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;1&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;It also doesn&amp;rsquo;t matter how many times that span occurs in a trace; for example, a span might be generated within a loop.
Whether the code runs once, 10 times, 100 times, or 1000 times, only a single active series is produced, with a counter increased 1, 10, 100, or 1000 times:&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/static/img/docs/tempo/SingleSpanLoop.jpeg&#34;
  alt=&#34;Single span with loop&#34; width=&#34;371&#34;
     height=&#34;350&#34;/&gt;&lt;/p&gt;
&lt;p&gt;If you looked at the Prometheus data, you&amp;rsquo;d see an instant value for &lt;code&gt;traces_spanmetrics_calls_total&lt;/code&gt; similar to the table. Again, one active series for the metric:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;service&lt;/th&gt;
              &lt;th&gt;span_name&lt;/th&gt;
              &lt;th&gt;span_kind&lt;/th&gt;
              &lt;th&gt;status_code&lt;/th&gt;
              &lt;th&gt;Metric value&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;120&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;However, let&amp;rsquo;s now assume that, as the code loops, there are occasionally errors.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/static/img/docs/tempo/SinglespanLoopError.jpeg&#34;
  alt=&#34;Single span with loop and errors&#34; width=&#34;371&#34;
     height=&#34;350&#34;/&gt;&lt;/p&gt;
&lt;p&gt;There are now two potential outcomes for a span when the code loops: one where everything successfully completes and one where there is an error.
This means that when the span completes &lt;code&gt;status_code&lt;/code&gt; is now either &lt;code&gt;STATUS_CODE_OK&lt;/code&gt; or &lt;code&gt;STATUS_CODE_ERROR&lt;/code&gt;.
Because of that, the &lt;code&gt;status_code&lt;/code&gt; label can take one of two values on the metric, and we now have two active series: one for the &lt;code&gt;OK&lt;/code&gt; status and one for the error.&lt;/p&gt;
&lt;p&gt;Again, we could loop once, 10 times, 100, or more times, but there will only ever be two active series.&lt;/p&gt;
&lt;p&gt;If we now looked at Prometheus instant values for &lt;code&gt;traces_spanmetrics_calls_total&lt;/code&gt;, we&amp;rsquo;d see the following table:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;service&lt;/th&gt;
              &lt;th&gt;span_name&lt;/th&gt;
              &lt;th&gt;span_kind&lt;/th&gt;
              &lt;th&gt;status_code&lt;/th&gt;
              &lt;th&gt;Metric value&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;96&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;24&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;What happens if you call out to another service, though? Let&amp;rsquo;s add an option where, based on some arbitrary data, we sometimes make a downstream call to another service, but otherwise continue to run loops in our own service:&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/static/img/docs/tempo/SingleSpanLoopErrorAnotherService.jpeg&#34;
  alt=&#34;Multiple spans with loops and errors&#34; width=&#34;720&#34;
     height=&#34;303&#34;/&gt;&lt;/p&gt;
&lt;p&gt;In this scenario, &lt;code&gt;span1&lt;/code&gt;&amp;rsquo;s &lt;code&gt;span_kind&lt;/code&gt; label would now be one of either &lt;code&gt;SPAN_KIND_INTERNAL&lt;/code&gt; or &lt;code&gt;SPAN_KIND_CLIENT&lt;/code&gt; (as it has acted as a client calling a downstream server).
If a call to the downstream service could also potentially fail, then for &lt;code&gt;SPAN_KIND_CLIENT&lt;/code&gt;, the &lt;code&gt;status_code&lt;/code&gt; could be either &lt;code&gt;STATUS_CODE_ERROR&lt;/code&gt; or &lt;code&gt;STATUS_CODE_OK&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;At this point, &lt;code&gt;traces_spanmetrics_calls_total&lt;/code&gt; would have four different variations in labels:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;service&lt;/th&gt;
              &lt;th&gt;span_name&lt;/th&gt;
              &lt;th&gt;span_kind&lt;/th&gt;
              &lt;th&gt;status_code&lt;/th&gt;
              &lt;th&gt;Metric value&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;34&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;6&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_CLIENT&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;23&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_CLIENT&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;3&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;Because of the variation in values, we now have four active series for our metric instead of one. But, as far as Service 1 is concerned, there are still only four active series, because there isn&amp;rsquo;t any other variation in the label values. You can run 1 trace, 10 traces, or 100 traces (each with however many loops of spans there are) and only four active series will ever be produced.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve actually only told half the story in our last diagram. &lt;em&gt;Service 1&lt;/em&gt; called a second service, &lt;em&gt;Service 2&lt;/em&gt;, which continues the trace by adding a new span, &lt;code&gt;span2&lt;/code&gt;.
Suppose Service 2 contains a loop with one span generated by the upstream call from Service 1, plus a number of internally driven spans that can also error. We&amp;rsquo;d then end up with the following possible values for the &lt;code&gt;traces_spanmetrics_calls_total&lt;/code&gt; metric:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;service&lt;/th&gt;
              &lt;th&gt;span_name&lt;/th&gt;
              &lt;th&gt;span_kind&lt;/th&gt;
              &lt;th&gt;status_code&lt;/th&gt;
              &lt;th&gt;Metric value&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;89&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;13&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_CLIENT&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;44&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 1&lt;/td&gt;
              &lt;td&gt;span1&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_CLIENT&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;9&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 2&lt;/td&gt;
              &lt;td&gt;span2&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_SERVER&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;30&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 2&lt;/td&gt;
              &lt;td&gt;span2&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_SERVER&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;14&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 2&lt;/td&gt;
              &lt;td&gt;span2&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_OK&lt;/td&gt;
              &lt;td&gt;99&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Service 2&lt;/td&gt;
              &lt;td&gt;span2&lt;/td&gt;
              &lt;td&gt;SPAN_KIND_INTERNAL&lt;/td&gt;
              &lt;td&gt;STATUS_CODE_ERROR&lt;/td&gt;
              &lt;td&gt;23&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;At this point, all our traces are composed of two potential span names, each of which produces two different &lt;code&gt;span_kind&lt;/code&gt; values and two different &lt;code&gt;status_code&lt;/code&gt; values. So we have eight active series for the metric.&lt;/p&gt;
&lt;p&gt;The number of active series Tempo produces when ingesting spans is determined by the variability of label values for each potential span condition, not by the number of traces or spans seen.&lt;/p&gt;
&lt;h2 id=&#34;custom-span-attributes&#34;&gt;Custom span attributes&lt;/h2&gt;
&lt;p&gt;There&amp;rsquo;s another consideration for active series: extra label key/value pairs that can be added onto metrics from a span&amp;rsquo;s attributes.
The Tempo metrics generator lets you turn arbitrary span attributes into label pairs on metrics.
When considering the number of active series generated, you also need to determine how many possible values there are for the span attribute being turned into a label.&lt;/p&gt;
&lt;p&gt;For example, if you turned the &lt;code&gt;http.method&lt;/code&gt; span attribute into a metric label pair, a typical REST service uses five possible values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;HEAD&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PUT&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DELETE&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If this label pair is added to every span metric, that&amp;rsquo;s up to another 5 &lt;em&gt;potential&lt;/em&gt; active series for each metric (in all likelihood a worst-case scenario, as very few spans will be called with all five REST methods).
Instead of the 8 active series in the last table above, we&amp;rsquo;d have 40 (8 * 5).&lt;/p&gt;
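&lt;p&gt;The multiplications above (5*3, 2*2*2, and 8*5) are all the same calculation: the worst-case series count for a metric is the product of each label&amp;rsquo;s cardinality. A small illustrative sketch:&lt;/p&gt;

```go
package main

import "fmt"

// potentialSeries returns the worst-case number of active series for a
// single metric: the product of the number of distinct values each label
// can take. Real span data usually produces far fewer series, because only
// label combinations that actually occur create a series.
func potentialSeries(valuesPerLabel ...int) int {
	total := 1
	for _, n := range valuesPerLabel {
		total *= n
	}
	return total
}

func main() {
	// 2 span names x 2 span kinds x 2 status codes (the table above):
	fmt.Println(potentialSeries(2, 2, 2)) // 8
	// Adding an http.method label with 5 observed values:
	fmt.Println(potentialSeries(2, 2, 2, 5)) // 40
}
```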
]]></content><description>&lt;h1 id="active-series">Active series&lt;/h1>
&lt;p>An active series is a time series that receives new data points or samples. When you stop writing new data points to a time series, shortly afterwards it is no longer considered active.&lt;/p></description></item><item><title>Cardinality</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/cardinality/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/cardinality/</guid><content><![CDATA[&lt;h1 id=&#34;cardinality&#34;&gt;Cardinality&lt;/h1&gt;
&lt;p&gt;Cardinality refers to the total combination of key/value pairs, such as labels and label values for a given metric series or log stream, and how many unique combinations they generate.
For more information on cardinality, see the &lt;a href=&#34;/blog/2022/02/15/what-are-cardinality-spikes-and-why-do-they-matter/&#34;&gt;What are cardinality spikes and why do they matter?&lt;/a&gt; blog post.&lt;/p&gt;
&lt;p&gt;Because writes to a time-series database (TSDB) are in series, high cardinality does not make a big difference to performance at ingest.
However, cardinality can have a major impact on querying: the higher the cardinality, the more items must be iterated over.&lt;/p&gt;
&lt;h2 id=&#34;traces-collection-and-metrics&#34;&gt;Traces collection and metrics&lt;/h2&gt;
&lt;p&gt;Tempo’s server-side metrics generation adds functionality to the collection of traces by creating Prometheus-based metrics that track values such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Total span call counts&lt;/li&gt;
&lt;li&gt;Span latency histograms&lt;/li&gt;
&lt;li&gt;Total span size count&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The metrics-generator creates metrics which define the relationship between services via edges and nodes.
Each of these metrics are queryable using a set of Prometheus labels (key/value pairs).&lt;/p&gt;
&lt;p&gt;Each new value for a label increases the number of active series associated with a metric. (To learn more about active series, read the &lt;a href=&#34;../active-series/&#34;&gt;Trace active series&lt;/a&gt; documentation.)&lt;/p&gt;
&lt;p&gt;This is also known as an increase in cardinality. The number of active series generated for a metric is directly proportional to the number of labels that exist for that metric, alongside the number of values each label takes.&lt;/p&gt;
&lt;p&gt;In an unmodified instance of the metrics generator, a small number of labels are added automatically.
Because labels like &lt;code&gt;span_kind&lt;/code&gt; and &lt;code&gt;status_code&lt;/code&gt; only have a few valid values, the largest variable for the number of active series produced for each metric depends on the number of service names and span names associated with trace spans.&lt;/p&gt;
&lt;p&gt;The metrics-generator can also be configured to add extra labels on metrics, using span attribute key/value pairs which are mapped directly to these labels. See the &lt;a href=&#34;../../configuration/#metrics-generator&#34;&gt;custom span attribute documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Be careful when configuring custom attributes: the greater the number of values seen for a specific attribute, the greater the number of active series produced. For more information about active series, refer to the &lt;a href=&#34;../active-series/&#34;&gt;active series documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s say that you add a custom attribute that includes unique customer IDs as a metrics label. If you have 100 customers, this could potentially multiply the number of active series generated by up to 100 (for example, going from 25,000 active series to 2.5M).
Always consider which attributes will actually be useful as labels for querying metrics, as well as how much they will increase the cardinality of those metrics.&lt;/p&gt;
&lt;h2 id=&#34;dry-running-the-metrics-generator&#34;&gt;Dry-running the metrics-generator&lt;/h2&gt;
&lt;p&gt;Often the most reliable approach is to run the metrics-generator in a dry-run mode.
In dry-run mode, metrics are generated but not collected, so nothing is written to metrics storage.
The override &lt;code&gt;metrics_generator_disable_collection&lt;/code&gt; is defined for this use case.&lt;/p&gt;
&lt;p&gt;To get an estimate, run the metrics-generator normally and set the override to &lt;code&gt;true&lt;/code&gt;.
Then, check &lt;code&gt;tempo_metrics_generator_registry_active_series&lt;/code&gt; to get an estimate of the active series for that setup.&lt;/p&gt;
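&lt;p&gt;As a sketch of what this might look like (the tenant name is hypothetical, and the exact overrides file layout depends on your deployment; check the configuration documentation for your version):&lt;/p&gt;

```yaml
# Per-tenant overrides file (tenant name is an example only)
overrides:
  "my-tenant":
    # Run the metrics-generator in dry-run mode: metrics are generated
    # and counted, but never written to metrics storage.
    metrics_generator_disable_collection: true
```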
]]></content><description>&lt;h1 id="cardinality">Cardinality&lt;/h1>
&lt;p>Cardinality refers to the total combination of key/value pairs, such as labels and label values for a given metric series or log stream, and how many unique combinations they generate.
For more information on cardinality, see the &lt;a href="/blog/2022/02/15/what-are-cardinality-spikes-and-why-do-they-matter/">What are cardinality spikes and why do they matter?&lt;/a> blog post.&lt;/p></description></item><item><title>Span metrics</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/span_metrics/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/span_metrics/</guid><content><![CDATA[&lt;h1 id=&#34;span-metrics&#34;&gt;Span metrics&lt;/h1&gt;
&lt;p&gt;The span metrics processor generates metrics from ingested tracing data, including request, error, and duration (RED) metrics.&lt;/p&gt;
&lt;p&gt;Span metrics generate two metrics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A counter that computes requests&lt;/li&gt;
&lt;li&gt;A histogram that tracks the distribution of durations of all requests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Span metrics are of particular interest if your system is not monitored with metrics
but has distributed tracing implemented:
you get out-of-the-box metrics from your tracing pipeline.&lt;/p&gt;
&lt;p&gt;Even if you already have metrics, span metrics can provide in-depth monitoring of your system.
The generated metrics give application-level insight into your monitoring,
as far as tracing is propagated through your applications.&lt;/p&gt;
&lt;p&gt;Last but not least, span metrics lower the entry barrier for using &lt;a href=&#34;/docs/grafana/latest/basics/exemplars/&#34;&gt;exemplars&lt;/a&gt;.
An exemplar is a specific trace representative of a measurement taken in a given time interval.
Since traces and metrics co-exist in the metrics-generator,
exemplars can be automatically added, providing additional value to these metrics.&lt;/p&gt;
&lt;h2 id=&#34;how-to-run&#34;&gt;How to run&lt;/h2&gt;
&lt;p&gt;To enable span metrics in Tempo or Grafana Enterprise Traces (GET), enable the metrics generator and add an overrides section which enables the &lt;code&gt;span-metrics&lt;/code&gt; processor. See &lt;a href=&#34;../../configuration/#metrics-generator&#34;&gt;here for configuration details&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;how-it-works&#34;&gt;How it works&lt;/h2&gt;
&lt;p&gt;The span metrics processor works by inspecting every received span and computing the total count and the duration of spans for every unique combination of dimensions.
Dimensions can be the service name, the operation, the span kind, the status code, and any attribute present in the span.&lt;/p&gt;
&lt;p&gt;This processor is designed to mirror the OpenTelemetry Collector&amp;rsquo;s &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/spanmetricsprocessor&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;processor&lt;/a&gt; of the same name.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;To learn more about cardinality and how to perform a dry run of the metrics generator, see the &lt;a href=&#34;../cardinality/&#34;&gt;Cardinality documentation&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h3 id=&#34;metrics&#34;&gt;Metrics&lt;/h3&gt;
&lt;p&gt;The following metrics are exported:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Type&lt;/th&gt;
              &lt;th&gt;Labels&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_spanmetrics_latency&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;Dimensions&lt;/td&gt;
              &lt;td&gt;Duration of the span&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_spanmetrics_calls_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;Dimensions&lt;/td&gt;
&lt;td&gt;Total count of spans&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_spanmetrics_size_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;Dimensions&lt;/td&gt;
              &lt;td&gt;Total size of spans ingested&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;

&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;In Tempo 1.4 and 1.4.1, the histogram metric was called &lt;code&gt;traces_spanmetrics_duration_seconds&lt;/code&gt;. This was changed later to be consistent with the metrics generated by the Grafana Agent and the OpenTelemetry Collector.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;By default, the metrics processor adds the following labels to each metric: &lt;code&gt;service&lt;/code&gt;, &lt;code&gt;span_name&lt;/code&gt;, &lt;code&gt;span_kind&lt;/code&gt;, &lt;code&gt;status_code&lt;/code&gt;, &lt;code&gt;status_message&lt;/code&gt;, &lt;code&gt;job&lt;/code&gt;, and &lt;code&gt;instance&lt;/code&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;service&lt;/code&gt; - The name of the service that generated the span&lt;/li&gt;
&lt;li&gt;&lt;code&gt;span_name&lt;/code&gt; - The unique name of the span&lt;/li&gt;
&lt;li&gt;&lt;code&gt;span_kind&lt;/code&gt; - The type of span; this can be one of five values:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_SERVER&lt;/code&gt; - The span was generated by a call from another service&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_CLIENT&lt;/code&gt; - The span made a call to another service&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_INTERNAL&lt;/code&gt; - The span does not have interaction outside of the service it was generated in&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_PUBLISHER&lt;/code&gt; - The span created data that was pushed onto a bus or message broker&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_CONSUMER&lt;/code&gt; - The span consumed data that was on a bus or messaging system&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;status_code&lt;/code&gt; - The result of the span; this can be one of three values:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;STATUS_CODE_UNSET&lt;/code&gt; - Result of the span was unset/unknown&lt;/li&gt;
&lt;li&gt;&lt;code&gt;STATUS_CODE_OK&lt;/code&gt; - The span operation completed successfully&lt;/li&gt;
&lt;li&gt;&lt;code&gt;STATUS_CODE_ERROR&lt;/code&gt; - The span operation completed with an error&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;status_message&lt;/code&gt; (optionally enabled) - The message that details the reason for the &lt;code&gt;status_code&lt;/code&gt; label&lt;/li&gt;
&lt;li&gt;&lt;code&gt;job&lt;/code&gt; - The name of the job, a combination of namespace and service; only added if &lt;code&gt;metrics_generator_processor_span_metrics_enable_target_info: true&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;instance&lt;/code&gt; - The instance ID; only added if &lt;code&gt;metrics_generator_processor_span_metrics_enable_target_info: true&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Additional user defined labels can be created using the &lt;a href=&#34;../../configuration/#metrics-generator&#34;&gt;&lt;code&gt;dimensions&lt;/code&gt; configuration option&lt;/a&gt;.
When a configured dimension collides with one of the default labels (e.g. &lt;code&gt;status_code&lt;/code&gt;), the label for the respective dimension is prefixed with double underscore (i.e. &lt;code&gt;__status_code&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Custom labeling of dimensions is also supported using the &lt;a href=&#34;../../configuration/#metrics-generator&#34;&gt;&lt;code&gt;dimension_mapping&lt;/code&gt; configuration option&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An optional metric called &lt;code&gt;traces_target_info&lt;/code&gt; using all resource level attributes as dimensions can be enabled in the &lt;a href=&#34;../../configuration/#metrics-generator&#34;&gt;&lt;code&gt;enable_target_info&lt;/code&gt; configuration option&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you use a ratio-based sampler, you can use the custom sampler below to not lose metric information. However, you also need to set &lt;code&gt;metrics_generator.processor.span_metrics.span_multiplier_key&lt;/code&gt; to &lt;code&gt;&amp;quot;X-SampleRatio&amp;quot;&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
&lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-go&#34;&gt;package tracer

import (
	&amp;#34;go.opentelemetry.io/otel/attribute&amp;#34;
	tracesdk &amp;#34;go.opentelemetry.io/otel/sdk/trace&amp;#34;
)

type RatioBasedSampler struct {
	innerSampler        tracesdk.Sampler
	sampleRateAttribute attribute.KeyValue
}

func NewRatioBasedSampler(fraction float64) RatioBasedSampler {
	innerSampler := tracesdk.TraceIDRatioBased(fraction)
	return RatioBasedSampler{
		innerSampler:        innerSampler,
		sampleRateAttribute: attribute.Float64(&amp;#34;X-SampleRatio&amp;#34;, fraction),
	}
}

func (ds RatioBasedSampler) ShouldSample(parameters tracesdk.SamplingParameters) tracesdk.SamplingResult {
	sampler := ds.innerSampler
	result := sampler.ShouldSample(parameters)
	if result.Decision == tracesdk.RecordAndSample {
		result.Attributes = append(result.Attributes, ds.sampleRateAttribute)
	}
	return result
}

func (ds RatioBasedSampler) Description() string {
	return &amp;#34;Ratio Based Sampler which gives information about sampling ratio&amp;#34;
}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;filtering&#34;&gt;Filtering&lt;/h3&gt;
&lt;p&gt;In some cases, you may want to reduce the number of metrics produced by the &lt;code&gt;spanmetrics&lt;/code&gt; processor.
You can configure the processor with an &lt;code&gt;include&lt;/code&gt; filter that specifies criteria a span must match to be included.
After the include filter, you can use an &lt;code&gt;exclude&lt;/code&gt; filter to reject a subset of the spans that the include policy matched.&lt;/p&gt;
&lt;p&gt;Currently, filtering is supported only on resource and span attributes with the following value types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;bool&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;double&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;int&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;string&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Additionally, you can filter on these intrinsic span attributes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;name&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;status&lt;/code&gt; (code)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kind&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following intrinsic kinds are available for filtering:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_SERVER&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_INTERNAL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_CLIENT&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_PRODUCER&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;SPAN_KIND_CONSUMER&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Intrinsic keys can be acted on directly when implementing a filter policy. For example:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;---
metrics_generator:
  processor:
    span_metrics:
      filter_policies:
        - include:
            match_type: strict
            attributes:
              - key: kind
                value: SPAN_KIND_SERVER&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In this example, spans whose &lt;code&gt;kind&lt;/code&gt; is &amp;ldquo;server&amp;rdquo; are included for metrics export.&lt;/p&gt;
&lt;p&gt;When selecting spans based on non-intrinsic attributes, you must specify the scope of the attribute, similar to how it is specified in TraceQL.
For example, if the &lt;code&gt;resource&lt;/code&gt; contains a &lt;code&gt;location&lt;/code&gt; attribute that you want to use in a filter policy, the reference must be specified as &lt;code&gt;resource.location&lt;/code&gt;.
This requires you to know and specify the scope in which an attribute is found, and avoids the ambiguity of conflicting values at differing scopes. The following example illustrates this.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;---
metrics_generator:
  processor:
    span_metrics:
      filter_policies:
        - include:
            match_type: strict
            attributes:
              - key: resource.location
                value: earth&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The examples above use a &lt;code&gt;match_type&lt;/code&gt; of &lt;code&gt;strict&lt;/code&gt;, which performs a direct comparison of values.
You can use &lt;code&gt;regex&lt;/code&gt;, an additional option for &lt;code&gt;match_type&lt;/code&gt;, to match against a regular expression.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;---
metrics_generator:
  processor:
    span_metrics:
      filter_policies:
        - include:
            match_type: regex
            attributes:
              - key: resource.location
                value: eu-.*
        - exclude:
            match_type: regex
            attributes:
              - key: resource.tier
                value: dev-.*&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;In the above, the &lt;code&gt;include&lt;/code&gt; statement first selects all spans whose &lt;code&gt;resource.location&lt;/code&gt; begins with &lt;code&gt;eu-&lt;/code&gt;, and the &lt;code&gt;exclude&lt;/code&gt; statement then rejects those whose &lt;code&gt;resource.tier&lt;/code&gt; begins with &lt;code&gt;dev-&lt;/code&gt;.
Combining filters in this way provides a flexible approach that ensures only the metrics you care about are generated.&lt;/p&gt;
&lt;h2 id=&#34;example&#34;&gt;Example&lt;/h2&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../span-metrics-example.png&#34; alt=&#34;Span metrics overview&#34;&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="span-metrics">Span metrics&lt;/h1>
&lt;p>The span metrics processor generates metrics from ingested tracing data, including request, error, and duration (RED) metrics.&lt;/p>
&lt;p>Span metrics generate two metrics:&lt;/p>
&lt;ul>
&lt;li>A counter that computes requests&lt;/li>
&lt;li>A histogram that tracks the distribution of durations of all requests&lt;/li>
&lt;/ul>
&lt;p>Span metrics are of particular interest if your system is not monitored with metrics,
but it has distributed tracing implemented.
You get out-of-the-box metrics from your tracing pipeline.&lt;/p></description></item><item><title>Service graphs</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/service_graphs/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/service_graphs/</guid><content><![CDATA[&lt;h1 id=&#34;service-graphs&#34;&gt;Service graphs&lt;/h1&gt;
&lt;p&gt;A service graph is a visual representation of the interrelationships between various services.
Service graphs help you to understand the structure of a distributed system,
and the connections and dependencies between its components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Infer the topology of a distributed system.&lt;/strong&gt;
As distributed systems grow, they become more complex.
Service graphs help you to understand the structure of the system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provide a high-level overview of the health of your system.&lt;/strong&gt;
Service graphs display error rates, latencies, as well as other relevant data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provide an historic view of a system’s topology.&lt;/strong&gt;
Distributed systems change very frequently,
and service graphs offer a way of seeing how these systems have evolved over time.&lt;/li&gt;
&lt;/ul&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../grafana-service-graphs-panel.png&#34; alt=&#34;Service graphs example&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;how-they-work&#34;&gt;How they work&lt;/h2&gt;
&lt;p&gt;The metrics-generator processes traces and generates service graphs in the form of Prometheus metrics.&lt;/p&gt;
&lt;p&gt;Service graphs work by inspecting traces and looking for spans with a parent-child relationship that represent a request.
The processor uses the &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/README.md&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;OpenTelemetry semantic conventions&lt;/a&gt; to detect many kinds of requests.
It currently supports the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A direct request between two services, where the outgoing and the incoming span must have &lt;a href=&#34;https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#spankind&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;code&gt;span.kind&lt;/code&gt;&lt;/a&gt; set to &lt;code&gt;client&lt;/code&gt; and &lt;code&gt;server&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;li&gt;A request across a messaging system, where the outgoing and the incoming span must have &lt;code&gt;span.kind&lt;/code&gt; set to &lt;code&gt;producer&lt;/code&gt; and &lt;code&gt;consumer&lt;/code&gt;, respectively.&lt;/li&gt;
&lt;li&gt;A database request; in this case the processor looks for spans containing attributes &lt;code&gt;span.kind&lt;/code&gt;=&lt;code&gt;client&lt;/code&gt; as well as &lt;code&gt;db.name&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every span that can be paired up to form a request is kept in an in-memory store until its corresponding pair span is received or the maximum waiting time has passed.
When either of these conditions is met, the request is recorded and removed from the local store.&lt;/p&gt;
&lt;p&gt;Each emitted metric series has &lt;code&gt;client&lt;/code&gt; and &lt;code&gt;server&lt;/code&gt; labels corresponding to the service making the request and the service receiving it.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;  tempo_service_graph_request_total{client=&amp;#34;app&amp;#34;, server=&amp;#34;db&amp;#34;, connection_type=&amp;#34;database&amp;#34;} 20&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h3 id=&#34;virtual-nodes&#34;&gt;Virtual nodes&lt;/h3&gt;
&lt;p&gt;Virtual nodes are nodes that form part of the lifecycle of a trace,
but spans for them are not being collected because they&amp;rsquo;re outside the user&amp;rsquo;s reach (for example, an external service for payment processing) or are not instrumented (for example, a frontend application).&lt;/p&gt;
&lt;p&gt;Virtual nodes can be detected in two different ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The root span has &lt;code&gt;span.kind&lt;/code&gt; set to &lt;code&gt;server&lt;/code&gt;. This indicates that the request was initiated by an external system that&amp;rsquo;s not instrumented, like a frontend application or an engineer via &lt;code&gt;curl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;client&lt;/code&gt; span does not have its matching &lt;code&gt;server&lt;/code&gt; span, but has a peer attribute present. In this case, Tempo assumes that a call was made to an external service, for which it won&amp;rsquo;t receive spans.
&lt;ul&gt;
&lt;li&gt;The default peer attributes are &lt;code&gt;peer.service&lt;/code&gt;, &lt;code&gt;db.name&lt;/code&gt; and &lt;code&gt;db.system&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The order of the attributes is important: the first one present is used as the virtual node name.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
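&lt;p&gt;The two detection rules can be sketched as follows. This is an illustrative Go sketch, not Tempo&amp;rsquo;s code: the &lt;code&gt;span&lt;/code&gt; struct and the &lt;code&gt;user&lt;/code&gt; node name are assumptions, while the peer attribute list and its ordering follow the documentation above.&lt;/p&gt;

```go
package main

import "fmt"

// span is a simplified view of the fields the two detection rules use;
// the struct itself is hypothetical, not Tempo's internal representation.
type span struct {
	kind       string // "client", "server", ...
	isRoot     bool
	hasMatch   bool // whether the matching server span was found in time
	attributes map[string]string
}

// Default peer attributes, checked in order; the first one present
// becomes the virtual node's name.
var peerAttributes = []string{"peer.service", "db.name", "db.system"}

// virtualNode returns the inferred virtual-node name and whether one applies.
func virtualNode(sp span) (string, bool) {
	// Rule 1: a root span of kind server implies an uninstrumented caller.
	if sp.isRoot && sp.kind == "server" {
		return "user", true // "user" is a stand-in name for the unknown client
	}
	// Rule 2: an unpaired client span with a peer attribute implies an
	// external callee for which no spans will arrive.
	if sp.kind == "client" && !sp.hasMatch {
		for _, attr := range peerAttributes {
			if v, ok := sp.attributes[attr]; ok {
				return v, true
			}
		}
	}
	return "", false
}

func main() {
	sp := span{kind: "client", attributes: map[string]string{"db.name": "orders"}}
	if name, ok := virtualNode(sp); ok {
		fmt.Println("virtual node:", name)
	}
}
```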
&lt;h3 id=&#34;metrics&#34;&gt;Metrics&lt;/h3&gt;
&lt;p&gt;The following metrics are exported:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Type&lt;/th&gt;
              &lt;th&gt;Labels&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_request_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of requests between two nodes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_request_failed_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of failed requests between two nodes&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_request_server_seconds&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Time for a request between two nodes as seen from the server&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_request_client_seconds&lt;/td&gt;
              &lt;td&gt;Histogram&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Time for a request between two nodes as seen from the client&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_unpaired_spans_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of unpaired spans&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;traces_service_graph_dropped_spans_total&lt;/td&gt;
              &lt;td&gt;Counter&lt;/td&gt;
              &lt;td&gt;client, server, connection_type&lt;/td&gt;
              &lt;td&gt;Total count of dropped spans&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;Duration is measured from both the client and the server side.&lt;/p&gt;
&lt;p&gt;Possible values for &lt;code&gt;connection_type&lt;/code&gt;: unset, &lt;code&gt;messaging_system&lt;/code&gt;, or &lt;code&gt;database&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Additional labels can be included using the &lt;code&gt;dimensions&lt;/code&gt; configuration option.&lt;/p&gt;
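&lt;p&gt;As a sketch of what that might look like (check the metrics-generator configuration reference for the exact layout; the attribute names here are illustrative):&lt;/p&gt;

```yaml
metrics_generator:
  processor:
    service_graphs:
      # Span/resource attributes to add as extra labels on the
      # service graph metrics listed above.
      dimensions:
        - http.method
        - team
```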
&lt;p&gt;Since the service graph processor has to process both sides of an edge,
it needs to see all spans of a trace to function properly.
If the spans of a trace are spread out over multiple instances, they are not paired up reliably.&lt;/p&gt;
]]></content><description>&lt;h1 id="service-graphs">Service graphs&lt;/h1>
&lt;p>A service graph is a visual representation of the interrelationships between various services.
Service graphs help you to understand the structure of a distributed system,
and the connections and dependencies between its components:&lt;/p></description></item><item><title>Service graph view</title><link>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/service-graph-view/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/tempo/v2.2.x/metrics-generator/service-graph-view/</guid><content><![CDATA[&lt;h1 id=&#34;service-graph-view&#34;&gt;Service graph view&lt;/h1&gt;
&lt;p&gt;Grafana&amp;rsquo;s service graph view uses metrics generated by the metrics-generator (or the Grafana Agent) to display span request rates, error rates, and durations, as well as service graphs.
Once the requirements are set up, this pre-configured view is immediately available.&lt;/p&gt;
&lt;p&gt;Using the service graph view, you can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Discover spans which are consistently erroring and the rates at which they occur&lt;/li&gt;
&lt;li&gt;Get an overview of the overall rate of span calls throughout your services&lt;/li&gt;
&lt;li&gt;Determine how long the slowest queries in your service take to complete&lt;/li&gt;
&lt;li&gt;Examine all traces that contain spans of particular interest based on rate, error and duration values (RED signals)&lt;/li&gt;
&lt;/ul&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../../getting-started/assets/apm-overview.png&#34; alt=&#34;Service graph view&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;requirements&#34;&gt;Requirements&lt;/h2&gt;
&lt;p&gt;You have to enable span metrics and service graph generation in the backend so that metrics are generated as traces are ingested.&lt;/p&gt;
&lt;p&gt;To use the service graph view, you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tempo or Grafana Cloud Traces with either 1) the metrics generator enabled and configured or 2) the Grafana Agent enabled and configured to send data to a Prometheus-compatible metrics store&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../service_graphs/enable-service-graphs/&#34;&gt;Service graphs&lt;/a&gt;, which are enabled by default in Grafana&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../span_metrics/#how-to-run&#34;&gt;Span metrics&lt;/a&gt; enabled in your Tempo data source configuration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The service graph view can be derived from metrics generated by either Tempo&amp;rsquo;s metrics-generator or by the Grafana Agent.&lt;/p&gt;
&lt;p&gt;For information on how to configure these features, refer to the &lt;a href=&#34;/docs/grafana/latest/datasources/tempo/&#34;&gt;Grafana Tempo data sources documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;what-does-the-service-graph-view-show&#34;&gt;What does the service graph view show?&lt;/h2&gt;
&lt;p&gt;Using this view, you can see the top five spans with a type of server (listed in the &lt;code&gt;Name&lt;/code&gt; column).
You can refine any of this data using the filters.
Selecting any of the data points lets you see more specific data.&lt;/p&gt;
&lt;p&gt;The service graph view provides a span metrics visualization (table, screen section 2) and service graph (screen section 3). In addition, you can use the filters (screen section 1) to customize the data displayed.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-overview-numbered.png&#34; alt=&#34;View with numbered sections&#34;&gt;&lt;/p&gt;
&lt;p&gt;Any information in the table that has an underline can be selected to show more detailed information.
You can also select any node in the service graph to display additional information.
In the dashboard shown below, the &lt;code&gt;Ingester.QueryStream&lt;/code&gt; span has a request rate of &lt;code&gt;144220.22&lt;/code&gt; requests per second.
The &lt;code&gt;/cortex.Ingester/Query&lt;/code&gt; span has the highest request rate.&lt;/p&gt;
&lt;h3 id=&#34;error-rate-example&#34;&gt;Error rate example&lt;/h3&gt;
&lt;p&gt;Let’s say we want to learn more about why &lt;code&gt;cortex.Ingester&lt;/code&gt; has the highest error rates.
Selecting the second row of the Error rate column displays details about the span metrics in a new window on the right side.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-error-rate-example.png&#34; alt=&#34;Error rate example&#34;&gt;&lt;/p&gt;
&lt;p&gt;The metrics query used to generate the data appears in the &lt;strong&gt;Metrics browser&lt;/strong&gt; field.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-error-example-editor.png&#34; alt=&#34;Error example query editor&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;span-metrics-table&#34;&gt;Span metrics table&lt;/h2&gt;
&lt;p&gt;The span metrics, shown in the table, are generated by the metrics-generator or the Grafana Agent.
These metrics are created from ingested tracing data, including RED metrics.&lt;/p&gt;
&lt;p&gt;Span metrics generate two metrics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A counter that computes requests&lt;/li&gt;
&lt;li&gt;A histogram that tracks the distribution of durations of all requests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For information about span metrics and how they are calculated, refer to the &lt;a href=&#34;../span_metrics/&#34;&gt;Span metrics documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-span-metrics.png&#34; alt=&#34;Span metrics table&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;table-contents&#34;&gt;Table contents&lt;/h3&gt;
&lt;p&gt;The span metrics table contains seven columns with five column headings. Selecting a heading sorts the data by ascending or descending values.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;&lt;strong&gt;Column&lt;/strong&gt;&lt;/th&gt;
              &lt;th&gt;&lt;strong&gt;Explanation&lt;/strong&gt;&lt;/th&gt;
              &lt;th&gt;&lt;strong&gt;PromQL query for span&lt;/strong&gt;&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;Use the span name. OTel semantic conventions generally expect the span name to be a low-cardinality indicator of the HTTP route or database function being performed.&lt;/td&gt;
              &lt;td&gt;N/A&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Rate&lt;/td&gt;
&lt;td&gt;LCD gauge (horizontal bar graph). Instances per second of the span. Clicking this field jumps to the appropriate metrics.&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;sum(rate(  traces_spanmetrics_calls_total{ span_name=&amp;quot;&amp;quot;, &amp;lt;filters&amp;gt; }[$__range]))&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Error Rate&lt;/td&gt;
              &lt;td&gt;Number and LCD gauge (horizontal bar graph). Clicking this field shows more detailed metrics.&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;sum(rate(  traces_spanmetrics_calls_total{ span_name=&amp;quot;&amp;quot;,   span_status=&amp;quot;STATUS_CODE_ERROR&amp;quot;, &amp;lt;filters&amp;gt; }[$__range]))&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Duration&lt;/td&gt;
              &lt;td&gt;p90 duration: 90% of all occurrences of this span complete within this time. Clicking this field shows the appropriate metrics.&lt;/td&gt;
&lt;td&gt;&lt;code&gt;histogram_quantile(.9, sum(rate(  traces_spanmetrics_duration_seconds_bucket{ span_name=&amp;quot;&amp;quot;, &amp;lt;filters&amp;gt; }[$__range])) by (le))&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Links&lt;/td&gt;
              &lt;td&gt;Provide links to example traces given the span name and other applied filters. Link to a search for all spans with the same name from the same Tempo data source.&lt;/td&gt;
              &lt;td&gt;N/A&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;service-graphs&#34;&gt;Service graphs&lt;/h2&gt;
&lt;p&gt;A service graph (node graph) is a visual representation of the interrelationships between various services.
Service graphs help to understand the structure of a distributed system, and the connections and dependencies between its components.&lt;/p&gt;
&lt;p&gt;Service graphs infer the topology of a distributed system, provide a high-level overview of the health of your system, and offer a historic view of a system’s topology.
Service graphs show error rates and latencies, among other relevant data.
The service graph layout can be the default or grid.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-service-graph-web.png&#34; alt=&#34;Service graph with a connected node layout&#34;&gt;&lt;/p&gt;
&lt;p&gt;The grid layout changes the service graph to a series of rows and columns.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-service-graph-rows.png&#34; alt=&#34;Service graph with grid layout&#34;&gt;&lt;/p&gt;
&lt;p&gt;If you are using the metrics-generator, then it processes traces and generates service graphs in the form of time series metrics like:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;tempo_service_graph_request_total{client=&amp;#34;app&amp;#34;, server=&amp;#34;db&amp;#34;} 20&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For information about service graphs and how they are calculated, refer to the &lt;a href=&#34;../service_graphs/&#34;&gt;Service Graphs documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;use-filters-to-reveal-details&#34;&gt;Use filters to reveal details&lt;/h2&gt;
&lt;p&gt;The service graph view uses service graphs and span metrics to provide a gateway to your tracing information.
This dashboard is derived from a fixed set of metrics queries.
These underlying queries cannot be changed.
However, you can choose which traces are included in the metrics query by filtering.&lt;/p&gt;
&lt;p&gt;You can explore data by clicking on selectable items or by using filters.&lt;/p&gt;
&lt;h3 id=&#34;selecting-items-or-nodes-for-more-detail&#34;&gt;Selecting items or nodes for more detail&lt;/h3&gt;
&lt;p&gt;Clicking on selectable items, such as underlined text in the table or nodes on the service graph, lets you reveal specific details based upon your selection.&lt;/p&gt;
&lt;p&gt;In the table, you can select items in the &lt;strong&gt;Rate&lt;/strong&gt;, &lt;strong&gt;Error Rate&lt;/strong&gt;, &lt;strong&gt;Duration (p90)&lt;/strong&gt;, and &lt;strong&gt;Links&lt;/strong&gt; columns. Choosing one of these items provides details about the span metrics.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-rate-drilldown.png&#34; alt=&#34;Table with rate drill-down&#34;&gt;&lt;/p&gt;
&lt;p&gt;You can view request rate, request histogram, failed request rate, and traces for any node in the service graph.
To view more information, select the node in the service graph and then choose an option from the popup.
For details on navigating the service graph, refer to the &lt;a href=&#34;/docs/grafana/latest/visualizations/node-graph/&#34;&gt;Node graph panel&lt;/a&gt; documentation.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-service-graph-drilldown.png&#34; alt=&#34;Service graph with drill-down&#34;&gt;&lt;/p&gt;
&lt;h3 id=&#34;filter-with-metric-queries&#34;&gt;Filter with metric queries&lt;/h3&gt;
&lt;p&gt;Using the filters at the top of the screen, you can narrow the data set based upon span attributes (key-value pairs or labels).
The filters build a query to refine what is shown in the service graph and span metrics.
You can add one or more label filters.&lt;/p&gt;
&lt;p&gt;To use the filters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;At the top of the Service Graph, select the text box after &lt;strong&gt;Filter&lt;/strong&gt; to display a list of available labels. In this case, &lt;strong&gt;server&lt;/strong&gt; is selected. &lt;br /&gt;&lt;img src=&#34;../apm-query-filter-label.png&#34; alt=&#34;Service graph with grid layout&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select or search for a value for the label. In this case, the value of &lt;strong&gt;server&lt;/strong&gt; is equal to &lt;strong&gt;tempo-ingester&lt;/strong&gt;. The default operator is equals (=). &lt;br /&gt; &lt;img src=&#34;../apm-query-filter-value1.png&#34; alt=&#34;Select value for label&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optional: Change the operator by selecting &lt;strong&gt;=&lt;/strong&gt; and choosing a new option from the drop-down. &lt;br /&gt;&lt;img src=&#34;../apm-filter-operator.png&#34; alt=&#34;Filter operators&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optional: Add additional key-value pairs to refine the data set. Any subsequent label filters use AND, which requires all key-value pairs to be present for a match.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Run query&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Filters can be removed by selecting the filter drop-down and choosing &lt;strong&gt;– remove filter –&lt;/strong&gt;.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-remove-filter.png&#34; alt=&#34;Remove filters&#34;&gt;&lt;/p&gt;
&lt;p&gt;In the example below, each field or label represents a key-value pair. Number 1 selects &lt;code&gt;service&lt;/code&gt; as the label, whose value is &lt;code&gt;Go-http-client&lt;/code&gt; (2). The second key-value pair uses &lt;code&gt;client&lt;/code&gt; as the label, whose value is &lt;code&gt;02e807&lt;/code&gt;.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-filter-example-numbered.png&#34; alt=&#34;Filter example with numbers&#34;&gt;&lt;/p&gt;
&lt;p&gt;If your metrics queries are too specific, they may not return any results.&lt;/p&gt;
&lt;p&gt;Updating the filter to be less specific returns a result. In this case, the results show only span metrics data associated with the &lt;code&gt;span_name&lt;/code&gt; label with a value of &lt;code&gt;/base.Ruler/Rules&lt;/code&gt;. No service graph data was available.&lt;/p&gt;
&lt;p align=&#34;center&#34;&gt;&lt;img src=&#34;../apm-filter-example2.png&#34; alt=&#34;Filter example with one results&#34;&gt;&lt;/p&gt;
]]></content><description>&lt;h1 id="service-graph-view">Service graph view&lt;/h1>
&lt;p>Grafana&amp;rsquo;s service graph view utilizes metrics generated by the metrics-generator (or Grafana Agent) to display span request rates, error rates, and durations, as well as service graphs.
Once the requirements are set up, this pre-configured view is immediately available.&lt;/p></description></item></channel></rss>