<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Traces Drilldown on Grafana Labs</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/</link><description>Recent content in Traces Drilldown on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v12.4/visualizations/simplified-exploration/traces/index.xml" rel="self" type="application/rss+xml"/><item><title>Access or install Traces Drilldown</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/access/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/access/</guid><content><![CDATA[&lt;h1 id=&#34;access-or-install-traces-drilldown&#34;&gt;Access or install Traces Drilldown&lt;/h1&gt;
&lt;p&gt;You can access Grafana Traces Drilldown in either of these ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;#set-up-in-grafana-cloud&#34;&gt;Grafana Cloud&lt;/a&gt;: The easiest method, since no setup or installation is required.&lt;/li&gt;
&lt;li&gt;Self-managed &lt;a href=&#34;#set-up-in-self-managed-grafana&#34;&gt;Grafana&lt;/a&gt; open source or Enterprise: You must install the Traces Drilldown plugin.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Traces Drilldown requires Grafana Tempo 2.6 or later with 
    &lt;a href=&#34;/docs/tempo/latest/operations/traceql-metrics/&#34;&gt;TraceQL metrics configured&lt;/a&gt;.&lt;/p&gt;
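&lt;p&gt;As a reference point, enabling TraceQL metrics in Tempo typically means turning on the &lt;code&gt;local-blocks&lt;/code&gt; metrics-generator processor. The following sketch shows the general shape of that configuration; the storage path and option values are illustrative, so refer to the linked TraceQL metrics documentation for the authoritative settings:&lt;/p&gt;

```yaml
# Illustrative Tempo configuration fragment (not a complete config).
# The local-blocks processor is what backs TraceQL metrics queries.
metrics_generator:
  traces_storage:
    path: /var/tempo/generator/traces   # example path
  processor:
    local_blocks:
      filter_server_spans: false

overrides:
  defaults:
    metrics_generator:
      processors: [local-blocks]        # enable the processor per tenant
```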
&lt;h2 id=&#34;set-up-in-grafana-cloud&#34;&gt;Set up in Grafana Cloud&lt;/h2&gt;
&lt;p&gt;To use Traces Drilldown with Grafana Cloud, you need the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Grafana Cloud account&lt;/li&gt;
&lt;li&gt;Grafana stack in Grafana Cloud receiving tracing data from your stack&amp;rsquo;s default &lt;a href=&#34;/docs/grafana-cloud/send-data/traces/&#34;&gt;Hosted Traces&lt;/a&gt; data source or a &lt;a href=&#34;/docs/grafana-cloud/connect-externally-hosted/data-sources/tempo/configure-tempo-data-source/&#34;&gt;Tempo data source&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;set-up-in-self-managed-grafana&#34;&gt;Set up in self-managed Grafana&lt;/h2&gt;
&lt;p&gt;To use Traces Drilldown with self-managed Grafana open source or Grafana Enterprise, you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your own Grafana instance running 11.6 or later&lt;/li&gt;
&lt;li&gt;Tempo 2.6 or later with 
    &lt;a href=&#34;/docs/tempo/latest/operations/traceql-metrics/&#34;&gt;TraceQL metrics configured&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Configured &lt;a href=&#34;/docs/grafana/latest/datasources/tempo/configure-tempo-data-source/&#34;&gt;Tempo data source&lt;/a&gt; receiving tracing data&lt;/li&gt;
&lt;/ul&gt;
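&lt;p&gt;If you manage data sources as code, the Tempo data source can also be provisioned from a YAML file instead of through the UI. This is a minimal sketch; the data source name and URL are assumptions for a typical local Tempo deployment:&lt;/p&gt;

```yaml
# provisioning/datasources/tempo.yaml (illustrative)
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200   # example Tempo endpoint
```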
&lt;p&gt;Next, &lt;a href=&#34;#access-traces-drilldown&#34;&gt;access Traces Drilldown&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;install-the-traces-drilldown-plugin&#34;&gt;Install the Traces Drilldown plugin&lt;/h3&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Grafana v12 and later includes all Drilldown apps, including Traces Drilldown. No separate installation is required. Go to &lt;a href=&#34;#access-traces-drilldown&#34;&gt;Access Traces Drilldown&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;Traces Drilldown is distributed as a Grafana plugin.
You can find it in the official &lt;a href=&#34;/grafana/plugins/grafana-exploretraces-app/&#34;&gt;Grafana Plugin Directory&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;install-in-your-grafana-instance&#34;&gt;Install in your Grafana instance&lt;/h3&gt;
&lt;p&gt;You can install Traces Drilldown in your Grafana instance using &lt;code&gt;grafana cli&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-shell&#34;&gt;grafana cli plugins install grafana-exploretraces-app&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Alternatively, follow these steps to install Traces Drilldown in Grafana:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In Grafana, go to &lt;strong&gt;Administration&lt;/strong&gt; &amp;gt; &lt;strong&gt;Plugins and data&lt;/strong&gt; &amp;gt; &lt;strong&gt;Plugins&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Search for &amp;ldquo;Traces Drilldown&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;Select Traces Drilldown.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Install&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The plugin is automatically activated after installation.&lt;/p&gt;
&lt;h3 id=&#34;install-in-a-docker-container&#34;&gt;Install in a Docker container&lt;/h3&gt;
&lt;p&gt;To install the app in a Docker container, configure the following environment variable:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-shell&#34;&gt;GF_INSTALL_PLUGINS=grafana-exploretraces-app&lt;/code&gt;&lt;/pre&gt;
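&lt;p&gt;For example, the environment variable can be set in a &lt;code&gt;docker run&lt;/code&gt; command or in a Compose file. The image tag and port mapping below are illustrative:&lt;/p&gt;

```yaml
# docker-compose.yaml (illustrative)
services:
  grafana:
    image: grafana/grafana:latest   # pin a specific version in practice
    ports:
      - "3000:3000"
    environment:
      - GF_INSTALL_PLUGINS=grafana-exploretraces-app
```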
&lt;h2 id=&#34;access-traces-drilldown&#34;&gt;Access Traces Drilldown&lt;/h2&gt;
&lt;p&gt;To access Traces Drilldown, follow these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open your Grafana stack in a web browser.&lt;/li&gt;
&lt;li&gt;In the main menu, select &lt;strong&gt;Drilldown&lt;/strong&gt; &amp;gt; &lt;strong&gt;Traces&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;next-steps&#34;&gt;Next steps&lt;/h2&gt;
&lt;p&gt;To learn how to use Traces Drilldown to explore your tracing data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;../concepts/&#34;&gt;Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../get-started/&#34;&gt;Get started with Traces Drilldown&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../determine-use-case/&#34;&gt;Determine your use case&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../investigate/&#34;&gt;Investigate trends and spikes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana-cloud/telemetry-signals/use-signals-together/&#34;&gt;Use signals together&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;/docs/grafana-cloud/telemetry-signals/workflows/&#34;&gt;Telemetry signal workflows&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="access-or-install-traces-drilldown">Access or install Traces Drilldown&lt;/h1>
&lt;p>You can access Grafana Traces Drilldown using any of these:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="#set-up-in-grafana-cloud">Grafana Cloud&lt;/a>: The easiest method, since no setup or installation is required.&lt;/li>
&lt;li>Self-managed &lt;a href="#set-up-in-self-managed-grafana">Grafana&lt;/a> open source or Enterprise: You must install the Traces Drilldown plugin.&lt;/li>
&lt;/ul>
&lt;p>Traces Drilldown requires Grafana Tempo 2.6 or later with
&lt;a href="/docs/tempo/latest/operations/traceql-metrics/">TraceQL metrics configured&lt;/a>.&lt;/p></description></item><item><title>Concepts</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/concepts/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/concepts/</guid><content><![CDATA[&lt;h1 id=&#34;concepts&#34;&gt;Concepts&lt;/h1&gt;
&lt;p&gt;Distributed traces provide a way to monitor applications by tracking requests across services.
Traces record the details of a request to help you understand why an issue is happening or has happened.&lt;/p&gt;
&lt;p&gt;Tracing is best used for analyzing the performance of your system, identifying bottlenecks, monitoring latency, and providing a complete picture of how requests are processed.&lt;/p&gt;
&lt;p&gt;To use the Grafana Traces Drilldown app, you should understand these concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;#rate-error-and-duration-metrics&#34;&gt;Rate, error, and duration metrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;#traces-and-spans&#34;&gt;Traces and spans&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;rate-error-and-duration-metrics&#34;&gt;Rate, error, and duration metrics&lt;/h2&gt;
&lt;p&gt;The Traces Drilldown app lets you explore rate, error, and duration (RED) metrics generated from your traces by Tempo.&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Useful for investigating&lt;/th&gt;
          &lt;th&gt;Metric&lt;/th&gt;
          &lt;th&gt;Meaning&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Unusual spikes in activity&lt;/td&gt;
          &lt;td&gt;Rate&lt;/td&gt;
          &lt;td&gt;Number of requests per second&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Overall issues in your tracing ecosystem&lt;/td&gt;
          &lt;td&gt;Errors&lt;/td&gt;
          &lt;td&gt;Number of those requests that are failing&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Response times and latency issues&lt;/td&gt;
          &lt;td&gt;Duration&lt;/td&gt;
          &lt;td&gt;Amount of time those requests take, represented as a histogram&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For more information about the RED method, refer to &lt;a href=&#34;/blog/2018/08/02/the-red-method-how-to-instrument-your-services/&#34;&gt;The RED Method: how to instrument your services&lt;/a&gt;.&lt;/p&gt;
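&lt;p&gt;For orientation, each RED metric in the table corresponds to a TraceQL metrics query that Traces Drilldown builds for you. The queries below are illustrative sketches of that mapping, not the app&amp;rsquo;s exact internals:&lt;/p&gt;

```traceql
# Rate: requests per second, grouped by service (illustrative)
{ } | rate() by (resource.service.name)

# Errors: rate of failing spans only
{ status = error } | rate() by (resource.service.name)

# Duration: latency quantiles, the basis of a histogram-style view
{ } | quantile_over_time(duration, .9, .95, .99)
```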
&lt;h2 id=&#34;traces-and-spans&#34;&gt;Traces and spans&lt;/h2&gt;
&lt;p&gt;A trace represents the journey of a request or an action as it moves through all the nodes of a distributed system, especially containerized applications or microservices architectures.
This makes traces the ideal observability signal for discovering bottlenecks and interconnection issues.&lt;/p&gt;
&lt;p&gt;Traces are composed of one or more spans.
A span is a unit of work within a trace that has a start time relative to the beginning of the trace, a duration, and an operation name for the unit of work.
It usually has a reference to a parent span in a trace, unless it&amp;rsquo;s the first span, also known as the root span.
It frequently includes key/value attributes that are relevant to the span itself, for example, the HTTP method used in the request, as well as other metadata such as the service name, sub-span events, or links to other spans.&lt;/p&gt;
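&lt;p&gt;To make the terminology concrete, the following sketch shows the typical fields of a single span. The field names loosely follow OpenTelemetry conventions, and all values are invented for illustration:&lt;/p&gt;

```json
{
  "traceId": "3f2a9c0d7e5b41aa",
  "spanId": "91b2c3d4e5f60718",
  "parentSpanId": null,
  "name": "GET /api/orders",
  "startTimeUnixNano": 1700000000000000000,
  "durationNanos": 42000000,
  "attributes": {
    "service.name": "checkout",
    "http.method": "GET"
  }
}
```

&lt;p&gt;Because &lt;code&gt;parentSpanId&lt;/code&gt; is empty here, this span is the root span of its trace.&lt;/p&gt;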
&lt;p&gt;For more information, refer to &lt;a href=&#34;/docs/grafana-cloud/telemetry-signals/&#34;&gt;Understand your data&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="concepts">Concepts&lt;/h1>
&lt;p>Distributed traces provide a way to monitor applications by tracking requests across services.
Traces record the details of a request to help understand why an issue is or was happening.&lt;/p></description></item><item><title>Get started with Traces Drilldown</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/get-started/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/get-started/</guid><content><![CDATA[&lt;h1 id=&#34;get-started-with-traces-drilldown&#34;&gt;Get started with Traces Drilldown&lt;/h1&gt;
&lt;p&gt;You can use traces to identify errors in your apps and services and then to optimize and streamline them.&lt;/p&gt;
&lt;p&gt;When working with traces, start with the big picture.
Investigate using primary signals, RED metrics, filters, and structural or trace list tabs to explore your data.
To learn more, refer to &lt;a href=&#34;../concepts/&#34;&gt;Concepts&lt;/a&gt;.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Expand your observability journey and learn about &lt;a href=&#34;../../&#34;&gt;the Drilldown apps suite&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;before-you-begin&#34;&gt;Before you begin&lt;/h2&gt;
&lt;p&gt;To use Grafana Traces Drilldown with Grafana Cloud, you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Grafana Cloud account&lt;/li&gt;
&lt;li&gt;A Grafana stack in Grafana Cloud with a configured Tempo data source&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To use Traces Drilldown with self-managed Grafana, you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your own Grafana v11.6 or later instance with a configured Tempo data source&lt;/li&gt;
&lt;li&gt;The Traces Drilldown plugin installed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more details, refer to &lt;a href=&#34;../access/&#34;&gt;Access Traces Drilldown&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;explore-your-tracing-data&#34;&gt;Explore your tracing data&lt;/h2&gt;
&lt;p&gt;Most investigations follow these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select the primary signal.&lt;/li&gt;
&lt;li&gt;Choose the metric you want to use: rates, errors, or duration.&lt;/li&gt;
&lt;li&gt;Define filters to refine the view of your data.&lt;/li&gt;
&lt;li&gt;Use the structural or trace list to drill down into the issue.&lt;/li&gt;
&lt;/ol&gt;
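&lt;p&gt;Under the hood, the first three steps amount to building a TraceQL selection. As a rough sketch (the service name is a placeholder), choosing the &lt;strong&gt;Errors&lt;/strong&gt; metric and filtering to one service corresponds to a query like:&lt;/p&gt;

```traceql
# Illustrative: error rate over time for a single service
{ resource.service.name = "checkout" && status = error } | rate()
```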




&lt;p&gt;To try Traces Drilldown with practical examples, explore it on &lt;a href=&#34;https://play.grafana.org/a/grafana-exploretraces-app/explore&#34;&gt;the Grafana Play site&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Prefer a step-by-step walkthrough? Refer to the &lt;a href=&#34;./example-investigation/&#34;&gt;Investigation walkthrough&lt;/a&gt; to follow along on play.grafana.org.&lt;/p&gt;
&lt;h2 id=&#34;example-investigate-source-of-errors&#34;&gt;Example: Investigate source of errors&lt;/h2&gt;
&lt;p&gt;This example demonstrates investigation techniques and patterns you can use when investigating errors. It shows how to use advanced features like the &lt;strong&gt;Comparison&lt;/strong&gt; tab and &lt;strong&gt;Inspect&lt;/strong&gt; to find root causes.&lt;/p&gt;
&lt;p&gt;For a hands-on walkthrough you can follow step by step, refer to the &lt;a href=&#34;./example-investigation/&#34;&gt;Investigation walkthrough&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For example, you want to uncover the source of errors in your spans. You need to compare the errors in the traces to locate the problem trace. Here&amp;rsquo;s how this works.&lt;/p&gt;
&lt;h3 id=&#34;choose-the-level-of-data-and-a-metric&#34;&gt;Choose the level of data and a metric&lt;/h3&gt;
&lt;p&gt;To identify the trouble spot, you want to use raw tracing data instead of only the root span, which is the first span of every trace.
Select &lt;strong&gt;All spans&lt;/strong&gt; in the Filters, then choose the &lt;strong&gt;Errors&lt;/strong&gt; metric.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-allspans-errors-red-v1.2.png&#34;
  alt=&#34;Select All spans to view all raw span data and Errors as your metric&#34; width=&#34;1223&#34;
     height=&#34;842&#34;/&gt;&lt;/p&gt;
&lt;h3 id=&#34;handle-errors&#34;&gt;Handle errors&lt;/h3&gt;
&lt;p&gt;If you&amp;rsquo;re seeing errors in your traces, here are three common misunderstandings to avoid.&lt;/p&gt;
&lt;p&gt;First, not all red spans are application failures. A span marked &amp;lsquo;error&amp;rsquo; might indicate a timeout or an expected validation failure. Check the error message and type before assuming something&amp;rsquo;s broken.&lt;/p&gt;
&lt;p&gt;Second, errors cascade. When one service fails, downstream spans inherit that error status. Look for the root span with the error to find the actual source, not only the last service in the chain.&lt;/p&gt;
&lt;p&gt;Finally, remember that error count isn&amp;rsquo;t the same as error rate. Ten errors might seem alarming, but if you handled ten thousand requests, that&amp;rsquo;s only 0.1%. Always consider the context.&lt;/p&gt;
&lt;p&gt;Check the span attributes and error details. They&amp;rsquo;ll tell you what really happened.&lt;/p&gt;
&lt;h3 id=&#34;correlate-attributes&#34;&gt;Correlate attributes&lt;/h3&gt;
&lt;p&gt;Use the &lt;strong&gt;Comparison&lt;/strong&gt; tab to correlate attribute values with errors. The results are ordered with the largest differences first, so you can immediately see which attribute values are associated with the errors.
The &lt;strong&gt;Comparison&lt;/strong&gt; tab analyzes the difference between two sets of traces:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Green bars (Baseline): Normal/healthy trace behavior&lt;/li&gt;
&lt;li&gt;Red bars (Selection): Current selection with status = error filter&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The view compares your selection (red) to the baseline (green) and ranks attributes by the largest difference.
This indicates a significant spike in &lt;code&gt;HTTP 500&lt;/code&gt; (Internal Server Error) responses during your selected time range.
The visualization highlights that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;500 errors aren&amp;rsquo;t normal for this system; they don&amp;rsquo;t appear in the baseline comparison&lt;/li&gt;
&lt;li&gt;There were 500 traces containing HTTP 500 status codes during the error period&lt;/li&gt;
&lt;li&gt;This represents a 100% deviation from normal behavior&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Click &lt;strong&gt;Add to filters&lt;/strong&gt; to narrow the investigation to these values, or choose &lt;strong&gt;Inspect&lt;/strong&gt; to explore the full distribution.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-errors-comparison-http-status-code-v1.2.png&#34;
  alt=&#34;Errors are immediately visible by the large red bars&#34; width=&#34;1208&#34;
     height=&#34;841&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Hovering over any of the bars shows a tooltip with information about the value and the percentage of the total.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-errors-hover-tooltip.png&#34;
  alt=&#34;Tooltip showing the value and the percentage of the total&#34; width=&#34;488&#34;
     height=&#34;328&#34;/&gt;&lt;/p&gt;
&lt;h3 id=&#34;inspect-the-problem&#34;&gt;Inspect the problem&lt;/h3&gt;
&lt;p&gt;Select &lt;strong&gt;Inspect&lt;/strong&gt; on a card to drill into the distribution for that attribute.
In this example, selecting &lt;strong&gt;Inspect&lt;/strong&gt; on &lt;code&gt;span.http.status_code&lt;/code&gt; shows the distribution by value. Using this view shows the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Normal state: All requests completed successfully (&lt;code&gt;200&lt;/code&gt;/&lt;code&gt;201&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Error state: Significant portion return &lt;code&gt;500&lt;/code&gt; errors&lt;/li&gt;
&lt;li&gt;Root cause: something caused the internal server errors during the selected time frame&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use &lt;strong&gt;Add to filters&lt;/strong&gt; on the &lt;code&gt;500&lt;/code&gt; card to keep only error spans and continue the investigation.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-errors-comparison-http-status-attr-selected-v1.2.png&#34;
  alt=&#34;Inspect the HTTP 500 errors&#34; width=&#34;1222&#34;
     height=&#34;847&#34;/&gt;&lt;/p&gt;
&lt;h3 id=&#34;use-root-cause-errors&#34;&gt;Use Root cause errors&lt;/h3&gt;
&lt;p&gt;Select &lt;strong&gt;Root cause errors&lt;/strong&gt; for an aggregated view of all of the traces that have errors in them.
This screen provides critical insights into where and how the &lt;code&gt;HTTP 500&lt;/code&gt; error occurred in your distributed system.&lt;/p&gt;
&lt;p&gt;Using this view, you can see that the Frontend &amp;gt; Recommendations services have problems. Specifically, the &lt;code&gt;/api/pizza&lt;/code&gt; endpoint chain is failing.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-root-cause-errors-v1.2.png&#34;
  alt=&#34;Root cause errors tab&#34; width=&#34;1219&#34;
     height=&#34;713&#34;/&gt;&lt;/p&gt;
&lt;p&gt;To view additional details, click the link icon and select &lt;strong&gt;View linked span&lt;/strong&gt; to open the trace drawer.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-root-cause-trace-drawer-v1.2.png&#34;
  alt=&#34;View linked spans to see details of errors&#34; width=&#34;974&#34;
     height=&#34;884&#34;/&gt;&lt;/p&gt;
&lt;p&gt;Error spans have a red icon next to them. Select the down arrow next to the span with an error to see details.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-root-cause-trace-expanded-v1.2.png&#34;
  alt=&#34;Select the down arrow next to the span with an error to see details&#34; width=&#34;689&#34;
     height=&#34;771&#34;/&gt;&lt;/p&gt;
&lt;h2 id=&#34;what-you-learned&#34;&gt;What you learned&lt;/h2&gt;
&lt;p&gt;This example demonstrated how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;All spans&lt;/strong&gt; to find errors deeper in the call chain&lt;/li&gt;
&lt;li&gt;Understand common error investigation misunderstandings&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;Comparison&lt;/strong&gt; tab to correlate attributes with errors&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Inspect&lt;/strong&gt; to drill into attribute distributions&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Root cause errors&lt;/strong&gt; to see error chain structures&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a step-by-step walkthrough you can follow along with, refer to the &lt;a href=&#34;./example-investigation/&#34;&gt;Investigation walkthrough&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="get-started-with-traces-drilldown">Get started with Traces Drilldown&lt;/h1>
&lt;p>You can use traces to identify errors in your apps and services and then to optimize and streamline them.&lt;/p>
&lt;p>When working with traces, start with the big picture.
Investigate using primary signals, RED metrics, filters, and structural or trace list tabs to explore your data.
To learn more, refer to &lt;a href="../concepts/">Concepts&lt;/a>.&lt;/p></description></item><item><title>Determine your use case</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/determine-use-case/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/determine-use-case/</guid><content><![CDATA[&lt;h1 id=&#34;determine-your-use-case&#34;&gt;Determine your use case&lt;/h1&gt;
&lt;p&gt;Before you start investigating, identify your use case to choose the right approach and metric type.&lt;/p&gt;
&lt;p&gt;Your use case determines which RED metric you start with and how you navigate through your tracing data. You might know exactly what&amp;rsquo;s wrong, or you might need to explore to find issues.&lt;/p&gt;
&lt;h2 id=&#34;why-this-concept-matters&#34;&gt;Why this concept matters&lt;/h2&gt;
&lt;p&gt;Identifying your use case helps you start your investigation efficiently. It guides you to the right RED metric and workflow, saving time and helping you find root causes faster.&lt;/p&gt;
&lt;p&gt;Grafana Traces Drilldown supports three main types of investigations: error investigation, performance analysis, and activity monitoring. Each use case has a different starting point and workflow.&lt;/p&gt;
&lt;h2 id=&#34;how-it-works&#34;&gt;How it works&lt;/h2&gt;
&lt;p&gt;Each use case maps to a specific RED metric and investigation workflow. Your investigation goal determines which metric you start with and which tabs and views are most useful.&lt;/p&gt;
&lt;p&gt;Error investigation uses the &lt;strong&gt;Errors&lt;/strong&gt; metric to find failed requests and their root causes. Performance analysis uses the &lt;strong&gt;Duration&lt;/strong&gt; metric to identify slow operations and latency bottlenecks. Activity monitoring uses the &lt;strong&gt;Rate&lt;/strong&gt; metric to understand service communication patterns and request flows.&lt;/p&gt;
&lt;p&gt;Traces Drilldown adapts its interface based on your selected metric. When you choose &lt;strong&gt;Errors&lt;/strong&gt;, you see error-specific tabs like &lt;strong&gt;Exceptions&lt;/strong&gt; and &lt;strong&gt;Root cause errors&lt;/strong&gt;. When you choose &lt;strong&gt;Duration&lt;/strong&gt;, you see latency-focused tabs like &lt;strong&gt;Root cause latency&lt;/strong&gt; and &lt;strong&gt;Slow traces&lt;/strong&gt;. When you choose &lt;strong&gt;Rate&lt;/strong&gt;, you see &lt;strong&gt;Service structure&lt;/strong&gt; to visualize service communication.&lt;/p&gt;
&lt;h2 id=&#34;use-case-1-investigate-errors&#34;&gt;Use case 1: Investigate errors&lt;/h2&gt;
&lt;p&gt;Use this when you know requests are failing or you&amp;rsquo;ve seen error alerts.&lt;/p&gt;
&lt;p&gt;You might have noticed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Error alerts from your monitoring system&lt;/li&gt;
&lt;li&gt;Failed requests in your application logs&lt;/li&gt;
&lt;li&gt;User reports of errors or failed operations&lt;/li&gt;
&lt;li&gt;Spikes in error rates on dashboards&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;how-to-start&#34;&gt;How to start&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Errors&lt;/strong&gt; as your metric type.&lt;/li&gt;
&lt;li&gt;Start with &lt;strong&gt;Root spans&lt;/strong&gt; to see service-level error patterns.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;Comparison&lt;/strong&gt; tab to identify which attributes correlate with errors.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;Breakdown&lt;/strong&gt; tab to see which services or operations have the most errors.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;Exceptions&lt;/strong&gt; tab to find common error messages.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Root cause errors&lt;/strong&gt; to see the error chain structure.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;When to switch to All spans&lt;/strong&gt;: If you need to find errors deeper in the call chain, like database errors or downstream service failures that don&amp;rsquo;t appear at the root level, switch to &lt;strong&gt;All spans&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id=&#34;example-scenarios&#34;&gt;Example scenarios&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;You know a service is failing but not why&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Errors&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Filter by the service name.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Comparison&lt;/strong&gt; to see which attributes differ between successful and failed requests.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Root cause errors&lt;/strong&gt; to see the error chain structure.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You see error alerts but don&amp;rsquo;t know the source&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Errors&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see which services have the most errors.&lt;/li&gt;
&lt;li&gt;Drill into the problematic service using filters.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Comparison&lt;/strong&gt; to identify what&amp;rsquo;s different about the failing requests.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You need to find internal errors&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Start with &lt;strong&gt;Errors&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt; to see service-level patterns.&lt;/li&gt;
&lt;li&gt;If errors don&amp;rsquo;t appear at the root level, switch to &lt;strong&gt;All spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This reveals database errors, downstream service failures, or internal operation errors.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Exceptions&lt;/strong&gt; to find common error messages.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;use-case-2-analyze-performance&#34;&gt;Use case 2: Analyze performance&lt;/h2&gt;
&lt;p&gt;Use this when you want to identify slow operations, latency bottlenecks, or optimize response times.&lt;/p&gt;
&lt;p&gt;You might be investigating:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Slow response times reported by users&lt;/li&gt;
&lt;li&gt;High latency alerts&lt;/li&gt;
&lt;li&gt;Performance degradation over time&lt;/li&gt;
&lt;li&gt;A need to optimize specific operations&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;how-to-start-1&#34;&gt;How to start&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Duration&lt;/strong&gt; as your metric type.&lt;/li&gt;
&lt;li&gt;Start with &lt;strong&gt;Root spans&lt;/strong&gt; for end-to-end request latency.&lt;/li&gt;
&lt;li&gt;Use the duration heatmap to identify latency patterns.&lt;/li&gt;
&lt;li&gt;Select percentiles (p90, p95, p99) based on your SLA requirements.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Root cause latency&lt;/strong&gt; to see which operations are slowest.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Slow traces&lt;/strong&gt; to examine individual slow requests.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see duration by different attributes like service, environment, or region.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;When to switch to All spans&lt;/strong&gt;: If you need to find slow internal operations like database queries or background jobs that don&amp;rsquo;t appear at the root level, switch to &lt;strong&gt;All spans&lt;/strong&gt;.&lt;/p&gt;
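&lt;p&gt;To reproduce a similar percentile view directly against a Tempo data source, you can use TraceQL&amp;rsquo;s &lt;code&gt;quantile_over_time&lt;/code&gt; metrics function. This is a sketch, assuming root spans are the ones with a negative &lt;code&gt;nestedSetParent&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-traceql&#34;&gt;{ nestedSetParent &amp;lt; 0 } | quantile_over_time(duration, .9, .95, .99)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The three quantile arguments correspond to selecting the p90, p95, and p99 percentiles in the &lt;strong&gt;Duration&lt;/strong&gt; view.&lt;/p&gt;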
&lt;h3 id=&#34;example-scenarios-1&#34;&gt;Example scenarios&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Users report slow responses&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Duration&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Look at the heatmap for latency spikes.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Root cause latency&lt;/strong&gt; to see which service operations are causing delays.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Slow traces&lt;/strong&gt; to examine individual slow requests.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You want to optimize a specific endpoint&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Duration&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add filters for the endpoint.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see duration by different attributes like service, environment, or region.&lt;/li&gt;
&lt;li&gt;Select appropriate percentiles (p90, p95, p99) based on your optimization goals.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You need to find slow database queries&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Duration&lt;/strong&gt; metric and &lt;strong&gt;All spans&lt;/strong&gt; (database queries appear as child spans).&lt;/li&gt;
&lt;li&gt;Filter by database-related attributes.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see which queries are slowest.&lt;/li&gt;
&lt;li&gt;Examine the slowest spans in &lt;strong&gt;Slow traces&lt;/strong&gt; to identify problematic queries.&lt;/li&gt;
&lt;/ol&gt;
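&lt;p&gt;A comparable hand-written TraceQL query filters all span data down to database spans. Here &lt;code&gt;db.system&lt;/code&gt; follows the OpenTelemetry semantic conventions and &lt;code&gt;postgresql&lt;/code&gt; is purely illustrative; substitute the attributes your instrumentation actually emits:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-traceql&#34;&gt;{ span.db.system = &#34;postgresql&#34; } | quantile_over_time(duration, .9) by (name)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Grouping by the span &lt;code&gt;name&lt;/code&gt; intrinsic separates individual database operations so the slowest ones stand out.&lt;/p&gt;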
&lt;h2 id=&#34;use-case-3-monitor-activity&#34;&gt;Use case 3: Monitor activity&lt;/h2&gt;
&lt;p&gt;Use this when you want to understand service communication patterns, request flows, or overall system activity.&lt;/p&gt;
&lt;p&gt;You might want to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand how services communicate&lt;/li&gt;
&lt;li&gt;Monitor request rates and patterns&lt;/li&gt;
&lt;li&gt;Identify unusual activity spikes&lt;/li&gt;
&lt;li&gt;Map service dependencies&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;how-to-start-2&#34;&gt;How to start&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Rate&lt;/strong&gt; as your metric type.&lt;/li&gt;
&lt;li&gt;Start with &lt;strong&gt;Root spans&lt;/strong&gt; for service-level request rates.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Service structure&lt;/strong&gt; to visualize service-to-service communication.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see request rates by different attributes.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Comparison&lt;/strong&gt; to identify unusual patterns compared to baseline.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Traces&lt;/strong&gt; tab to examine individual requests.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;When to switch to All spans&lt;/strong&gt;: If you need to see internal operations or child spans within traces, switch to &lt;strong&gt;All spans&lt;/strong&gt;. Most activity monitoring use cases work well with &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/p&gt;
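&lt;p&gt;The &lt;strong&gt;Rate&lt;/strong&gt; metric maps to TraceQL&amp;rsquo;s &lt;code&gt;rate()&lt;/code&gt; function. A minimal hand-written sketch, assuming root spans have a negative &lt;code&gt;nestedSetParent&lt;/code&gt; and your services set &lt;code&gt;resource.service.name&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-traceql&#34;&gt;{ nestedSetParent &amp;lt; 0 } | rate() by (resource.service.name)
&lt;/code&gt;&lt;/pre&gt;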
&lt;h3 id=&#34;example-scenarios-2&#34;&gt;Example scenarios&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;You want to understand service dependencies&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Rate&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Service structure&lt;/strong&gt; to see how services call each other.&lt;/li&gt;
&lt;li&gt;Identify the communication patterns and dependencies.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Traces&lt;/strong&gt; to examine individual request flows.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You notice unusual activity spikes&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Rate&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; to see which services or operations have increased rates.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Comparison&lt;/strong&gt; to compare against normal baseline behavior.&lt;/li&gt;
&lt;li&gt;Switch to &lt;strong&gt;Errors&lt;/strong&gt; or &lt;strong&gt;Duration&lt;/strong&gt; if the spike indicates problems.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;You&amp;rsquo;re doing capacity planning&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Rate&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Breakdown&lt;/strong&gt; by service, environment, or region.&lt;/li&gt;
&lt;li&gt;Understand request distribution patterns.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Service structure&lt;/strong&gt; to see communication volumes between services.&lt;/li&gt;
&lt;/ol&gt;
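&lt;p&gt;For capacity planning, a breakdown by environment maps to grouping a rate query by a deployment attribute. This sketch assumes your resources set the OpenTelemetry &lt;code&gt;deployment.environment&lt;/code&gt; convention; use whatever attribute distinguishes your environments:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-traceql&#34;&gt;{ nestedSetParent &amp;lt; 0 } | rate() by (resource.deployment.environment)
&lt;/code&gt;&lt;/pre&gt;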
&lt;h2 id=&#34;choose-your-starting-point&#34;&gt;Choose your starting point&lt;/h2&gt;
&lt;p&gt;Your starting point depends on what you already know:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;You know what&amp;rsquo;s wrong&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Errors present → Start with &lt;strong&gt;Errors&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Performance issues → Start with &lt;strong&gt;Duration&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Specific service affected → Add a filter for that service first, then select the appropriate metric.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;You need to explore&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Start with &lt;strong&gt;Rate&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt; to get an overview.&lt;/li&gt;
&lt;li&gt;Look for unusual patterns in the graphs.&lt;/li&gt;
&lt;li&gt;Switch to &lt;strong&gt;Errors&lt;/strong&gt; or &lt;strong&gt;Duration&lt;/strong&gt; based on what you find.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;You&amp;rsquo;re doing proactive analysis&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Start with &lt;strong&gt;Rate&lt;/strong&gt; metric and &lt;strong&gt;Root spans&lt;/strong&gt; to understand normal patterns.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Comparison&lt;/strong&gt; to identify deviations from baseline.&lt;/li&gt;
&lt;li&gt;Switch to &lt;strong&gt;Errors&lt;/strong&gt; or &lt;strong&gt;Duration&lt;/strong&gt; when you find areas of concern.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;related-concepts&#34;&gt;Related concepts&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;../concepts/#rate-error-and-duration-metrics&#34;&gt;RED metrics&lt;/a&gt; - Understanding Rate, Errors, and Duration&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../concepts/#traces-and-spans&#34;&gt;Traces and spans&lt;/a&gt; - How traces and spans work in distributed systems&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;related-tasks&#34;&gt;Related tasks&lt;/h2&gt;
&lt;p&gt;After you&amp;rsquo;ve determined your use case:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;../investigate/choose-red-metric/&#34;&gt;Choose a RED metric&lt;/a&gt; to match your investigation goal.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../investigate/choose-span-data/&#34;&gt;Choose root or full span data&lt;/a&gt; based on the depth you need.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../investigate/analyze-tracing-data/&#34;&gt;Analyze tracing data&lt;/a&gt; using the appropriate tabs for your metric type.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;../investigate/add-filters/&#34;&gt;Add filters&lt;/a&gt; to refine your investigation as you discover patterns.&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="determine-your-use-case">Determine your use case&lt;/h1>
&lt;p>Before you start investigating, identify your use case to choose the right approach and metric type.&lt;/p>
&lt;p>Your use case determines which RED metric you start with and how you navigate through your tracing data. You might know exactly what&amp;rsquo;s wrong, or you might need to explore to find issues.&lt;/p></description></item><item><title>Investigate trends and spikes</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/investigate/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/investigate/</guid><content><![CDATA[&lt;h1 id=&#34;investigate-trends-and-spikes&#34;&gt;Investigate trends and spikes&lt;/h1&gt;
&lt;p&gt;Grafana Traces Drilldown provides powerful tools that help you identify and analyze problems in your applications and services.&lt;/p&gt;
&lt;p&gt;Use the following steps to investigate issues in your tracing data.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;./choose-span-data/&#34;&gt;Select &lt;strong&gt;Root spans&lt;/strong&gt; or &lt;strong&gt;All spans&lt;/strong&gt;&lt;/a&gt; to look at either the first span in a trace (the root span) or all span data.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;./choose-red-metric/&#34;&gt;Choose the metric&lt;/a&gt; you want to use: rates, errors, or duration.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;./analyze-tracing-data/&#34;&gt;Analyze data&lt;/a&gt; using &lt;strong&gt;Breakdown&lt;/strong&gt;, &lt;strong&gt;Comparison&lt;/strong&gt;, &lt;strong&gt;Service structure&lt;/strong&gt; (Rate), &lt;strong&gt;Root cause errors&lt;/strong&gt; and &lt;strong&gt;Exceptions&lt;/strong&gt; (Errors), &lt;strong&gt;Root cause latency&lt;/strong&gt; (Duration), and &lt;strong&gt;Traces&lt;/strong&gt; tabs.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;./add-filters/&#34;&gt;Add filters&lt;/a&gt; to refine the view of your data.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;./save-load-queries/&#34;&gt;Save and load queries&lt;/a&gt; to preserve and reuse filter configurations.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can use these steps in any order and move between them as many times as needed.
Depending on what you find, you may start with root spans, delve into error data, and then select &lt;strong&gt;All spans&lt;/strong&gt; to access all of the tracing data.&lt;/p&gt;
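&lt;p&gt;Filters you add in the UI compose into the span selection of the underlying TraceQL query. For example, narrowing an error investigation on root spans to a single service (the service name &lt;code&gt;checkout&lt;/code&gt; is purely illustrative, as is the assumption that root spans have a negative &lt;code&gt;nestedSetParent&lt;/code&gt;) corresponds roughly to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-traceql&#34;&gt;{ nestedSetParent &amp;lt; 0 &amp;amp;&amp;amp; resource.service.name = &#34;checkout&#34; &amp;amp;&amp;amp; status = error }
&lt;/code&gt;&lt;/pre&gt;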




  &lt;div class=&#34;d-sm-flex flex-direction-row-reverse bg-gray-1 br-12 p-2 my-1&#34;&gt;
    &lt;img class=&#34;mb-1 lazyload&#34; data-src=&#34;/media/docs/icons/docs-play.svg&#34; width=&#34;228&#34; height=&#34;182&#34; alt=&#34;Give it a try using Grafana Play&#34;&gt;
    &lt;div&gt;
      &lt;div class=&#34;h4 pt-0 pb-half fw-500&#34;&gt;Give it a try using Grafana Play&lt;/div&gt;
      &lt;p class=&#34;pr-1 pb-half&#34;&gt;With Grafana Play, you can explore Traces Drilldown and learn from practical examples to accelerate your development.
Try this feature on &lt;a href=&#34;https://play.grafana.org/a/grafana-exploretraces-app/explore&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;the Grafana Play site&lt;/a&gt;.&lt;/p&gt;
      &lt;div class=&#34;mx-auto&#34;&gt;
        &lt;a class=&#34;btn btn--primary btn--large arrow fw-600 br-8 w-175&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34; href=&#34;https://play.grafana.org/a/grafana-exploretraces-app/explore&#34;&gt;Try it&lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;

]]></content><description>&lt;h1 id="investigate-trends-and-spikes">Investigate trends and spikes&lt;/h1>
&lt;p>Grafana Traces Drilldown provides powerful tools that help you identify and analyze problems in your applications and services.&lt;/p>
&lt;p>Use the following steps to investigate issues in your tracing data.&lt;/p></description></item><item><title>Traces Drilldown UI reference</title><link>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/ui-reference/</link><pubDate>Fri, 03 Apr 2026 19:43:06 +0000</pubDate><guid>https://grafana.com/docs/grafana/v12.4/visualizations/simplified-exploration/traces/ui-reference/</guid><content><![CDATA[&lt;h1 id=&#34;traces-drilldown-ui-reference&#34;&gt;Traces Drilldown UI reference&lt;/h1&gt;
&lt;p&gt;Grafana Traces Drilldown helps you focus your tracing data exploration.
Some sections change based on the metric you choose.
For details on workflows, refer to &lt;a href=&#34;../investigate/analyze-tracing-data/&#34;&gt;Analyze tracing data&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img
  class=&#34;lazyload d-inline-block&#34;
  data-src=&#34;/media/docs/explore-traces/traces-drilldown-screen-parts-numbered-v1.2.png&#34;
  alt=&#34;Numbered sections of the Traces Drilldown app&#34; width=&#34;1275&#34;
     height=&#34;911&#34;/&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data source selection&lt;/strong&gt;:
At the top left, you select the data source for your traces. In this example, the data source is set to &lt;code&gt;grafanacloud-traces&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Filters&lt;/strong&gt;:
The filter bar helps you refine the data displayed.
You can select the type of trace data, either &lt;strong&gt;Root spans&lt;/strong&gt; or &lt;strong&gt;All spans&lt;/strong&gt;. You can also add specific label values to narrow the scope of your investigation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Select metric type&lt;/strong&gt;:
Choose between &lt;strong&gt;Rate&lt;/strong&gt;, &lt;strong&gt;Errors&lt;/strong&gt;, or &lt;strong&gt;Duration&lt;/strong&gt; metrics. In this example, the &lt;strong&gt;Rate&lt;/strong&gt; metric is selected, showing the number of spans per second.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Rate&lt;/strong&gt; graph (top left) shows the rate of spans over time.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Errors&lt;/strong&gt; graph (top right) displays the error rate over time, with red bars indicating errors.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Duration&lt;/strong&gt; heatmap (bottom right) visualizes the distribution of span durations and can help identify latency patterns.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Investigation-focused tabs&lt;/strong&gt;:
Each metric type has its own set of tabs that help you explore your tracing data. These tabs differ depending on the metric type you&amp;rsquo;ve selected.
For example, when you use &lt;strong&gt;Rate&lt;/strong&gt;, the investigation tabs show &lt;strong&gt;Breakdown&lt;/strong&gt;, &lt;strong&gt;Service structure&lt;/strong&gt;, &lt;strong&gt;Comparison&lt;/strong&gt;, and &lt;strong&gt;Traces&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Exceptions&lt;/strong&gt; (&lt;strong&gt;Errors&lt;/strong&gt; only): Groups exception messages, showing a count, a trend sparkline, the emitting service, and when each exception was last seen.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Percentiles&lt;/strong&gt; (&lt;strong&gt;Duration&lt;/strong&gt; only): Choose &lt;code&gt;p50&lt;/code&gt;, &lt;code&gt;p75&lt;/code&gt;, &lt;code&gt;p90&lt;/code&gt;, &lt;code&gt;p95&lt;/code&gt;, or &lt;code&gt;p99&lt;/code&gt; for Duration views. The default is &lt;code&gt;p90&lt;/code&gt;. If you clear all selections, &lt;code&gt;p90&lt;/code&gt; applies automatically.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add to filters&lt;/strong&gt;:
Each attribute group includes an &lt;strong&gt;Add to filters&lt;/strong&gt; option, so you can add your selections to the current investigation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Time range selector&lt;/strong&gt;:
At the top right, you can adjust the time range for displayed data using the time picker. In this example, the time range is set to the last 24 hours. Refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/dashboards/use-dashboards/#set-dashboard-time-range&#34;&gt;Set dashboard time range&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;You can also open a specific trace by ID by entering the trace ID into the &lt;strong&gt;Trace ID&lt;/strong&gt; input and pressing Enter. Refer to &lt;a href=&#34;../investigate/analyze-tracing-data/#open-a-trace-by-id&#34;&gt;Open a trace by ID&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;Use the &lt;strong&gt;Save&lt;/strong&gt; (save icon) and &lt;strong&gt;Load&lt;/strong&gt; (folder-open icon) buttons in the header to save your current filters as a named query or load a previously saved one.
The &lt;strong&gt;Save&lt;/strong&gt; button appears when at least one filter is applied.
Refer to &lt;a href=&#34;../investigate/save-load-queries/&#34;&gt;Save and load queries&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Attributes sidebar&lt;/strong&gt;:
Use the &lt;strong&gt;Attributes&lt;/strong&gt; sidebar to select and manage attributes across views. Search attributes with regular expressions. Press &lt;strong&gt;Escape&lt;/strong&gt; or click &lt;strong&gt;Clear&lt;/strong&gt; to reset the search.&lt;/p&gt;
&lt;p&gt;Click the star icon to add or remove a favorite. Drag and drop favorites to reorder them. Switch between scopes: &lt;strong&gt;Favorites&lt;/strong&gt;, &lt;strong&gt;All&lt;/strong&gt;, &lt;strong&gt;Resource&lt;/strong&gt;, &lt;strong&gt;Span&lt;/strong&gt;. A filter icon marks attributes already applied in the &lt;strong&gt;Filters&lt;/strong&gt; bar.&lt;/p&gt;
&lt;p&gt;In &lt;strong&gt;Breakdown&lt;/strong&gt; and &lt;strong&gt;Comparison&lt;/strong&gt; views, selecting an attribute sets the current &lt;strong&gt;Group by&lt;/strong&gt; attribute. In &lt;strong&gt;Trace list&lt;/strong&gt; view, select multiple attributes to add or remove table columns. The app saves favorites in your browser.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;query-result-streaming&#34;&gt;Query result streaming&lt;/h2&gt;
&lt;p&gt;When you first open Traces Drilldown, you might notice a green dot in the upper-right corner of any of the metrics graphs.&lt;/p&gt;
&lt;p&gt;This green dot indicates that Traces Drilldown is displaying data that&amp;rsquo;s still being received, or streamed.
Streaming lets you view partial query results before the entire query completes.&lt;/p&gt;
&lt;h2 id=&#34;open-in-explore-app&#34;&gt;Open in Explore app&lt;/h2&gt;
&lt;p&gt;To open a trace in the Explore app, click the &lt;strong&gt;Open in Explore&lt;/strong&gt; button.
In Explore, you can use its full query and analysis capabilities to examine the trace.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re using Explore, you can open a trace in Traces Drilldown by clicking the &lt;strong&gt;Open in Traces Drilldown&lt;/strong&gt; button.&lt;/p&gt;
]]></content><description>&lt;h1 id="traces-drilldown-ui-reference">Traces Drilldown UI reference&lt;/h1>
&lt;p>Grafana Traces Drilldown helps you focus your tracing data exploration.
Some sections change based on the metric you choose.
For details on workflows, refer to &lt;a href="../investigate/analyze-tracing-data/">Analyze tracing data&lt;/a>.&lt;/p></description></item></channel></rss>