<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Alerting fundamentals on Grafana Labs</title><link>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/</link><description>Recent content in Alerting fundamentals on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/index.xml" rel="self" type="application/rss+xml"/><item><title>Alerting on numeric data</title><link>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts/</link><pubDate>Sat, 04 Apr 2026 12:26:57 +0000</pubDate><guid>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts/</guid><content><![CDATA[&lt;h1 id=&#34;alerting-on-numeric-data&#34;&gt;Alerting on numeric data&lt;/h1&gt;
&lt;p&gt;This topic describes how Grafana managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;#alert-evaluation&#34;&gt;Alert evaluation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;#alerting-on-numeric-data&#34;&gt;Alerting on numeric data&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;alert-evaluation&#34;&gt;Alert evaluation&lt;/h2&gt;
&lt;p&gt;Grafana managed alerts query the following backend data sources that have alerting enabled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;built-in data sources or those developed and maintained by Grafana: &lt;code&gt;Graphite&lt;/code&gt;, &lt;code&gt;Prometheus&lt;/code&gt;, &lt;code&gt;Loki&lt;/code&gt;, &lt;code&gt;InfluxDB&lt;/code&gt;, &lt;code&gt;Elasticsearch&lt;/code&gt;,
&lt;code&gt;Google Cloud Monitoring&lt;/code&gt;, &lt;code&gt;Cloudwatch&lt;/code&gt;, &lt;code&gt;Azure Monitor&lt;/code&gt;, &lt;code&gt;MySQL&lt;/code&gt;, &lt;code&gt;PostgreSQL&lt;/code&gt;, &lt;code&gt;MSSQL&lt;/code&gt;, &lt;code&gt;OpenTSDB&lt;/code&gt;, and &lt;code&gt;Oracle&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;community developed backend data sources with alerting enabled (&lt;code&gt;backend&lt;/code&gt; and &lt;code&gt;alerting&lt;/code&gt; properties are set in the &lt;a href=&#34;/developers/plugin-tools/reference-plugin-json&#34;&gt;plugin.json&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
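&lt;p&gt;For a community plugin, both properties are set in the plugin&amp;rsquo;s &lt;code&gt;plugin.json&lt;/code&gt;. A minimal sketch; the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; values here are placeholders:&lt;/p&gt;

```json
{
  "id": "example-datasource",
  "name": "Example Data Source",
  "type": "datasource",
  "backend": true,
  "alerting": true
}
```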
&lt;h3 id=&#34;metrics-from-the-alerting-engine&#34;&gt;Metrics from the alerting engine&lt;/h3&gt;
&lt;p&gt;The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes &lt;a href=&#34;../../../../administration/view-server/internal-metrics/&#34;&gt;internal metrics&lt;/a&gt;. See also, &lt;a href=&#34;../../alerting-rules/rule-list/&#34;&gt;View alert rules and their current state&lt;/a&gt;.&lt;/p&gt;
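&lt;p&gt;For example, if you scrape these metrics with Prometheus, a query along the following lines breaks down the current alerts by state (this sketch assumes the gauge exposes the state as a &lt;code&gt;state&lt;/code&gt; label):&lt;/p&gt;

```promql
sum by (state) (grafana_alerting_alerts)
```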
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric Name&lt;/th&gt;
              &lt;th&gt;Type&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_alerts&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;gauge&lt;/td&gt;
              &lt;td&gt;How many alerts by state&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_request_duration&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;histogram&lt;/td&gt;
              &lt;td&gt;Histogram of requests to the Alerting API&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_active_configurations&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;gauge&lt;/td&gt;
              &lt;td&gt;The number of active, non-default Alertmanager configurations for Grafana managed alerts&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_rule_evaluations_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;counter&lt;/td&gt;
              &lt;td&gt;The total number of rule evaluations&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_rule_evaluation_failures_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;counter&lt;/td&gt;
              &lt;td&gt;The total number of rule evaluation failures&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_rule_evaluation_duration&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;summary&lt;/td&gt;
              &lt;td&gt;The duration for a rule to execute&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alerting_rule_group_rules&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;gauge&lt;/td&gt;
              &lt;td&gt;The number of rules&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;alerting-on-numeric-data-1&#34;&gt;Alerting on numeric data&lt;/h2&gt;
&lt;p&gt;With certain data sources, numeric data that is not a time series can be alerted on directly, or passed into Server Side Expressions (SSE). This pushes more of the processing into the data source, which can improve efficiency and also simplify alert rules.
When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number; instead, labeled numbers are returned to Grafana directly.&lt;/p&gt;
&lt;h3 id=&#34;tabular-data&#34;&gt;Tabular Data&lt;/h3&gt;
&lt;p&gt;This feature is supported with backend data sources that query tabular data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.&lt;/li&gt;
&lt;li&gt;The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A query used with Grafana managed alerts or SSE is considered numeric with these data sources if:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &amp;ldquo;Format AS&amp;rdquo; option is set to &amp;ldquo;Table&amp;rdquo; in the data source query.&lt;/li&gt;
&lt;li&gt;The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row must be uniquely identified by its labels.&lt;/p&gt;
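&lt;p&gt;As an illustrative sketch (not Grafana&amp;rsquo;s actual implementation), the mapping from rows to labeled numbers can be modeled like this:&lt;/p&gt;

```python
# Hypothetical sketch of how a tabular response (string columns plus one
# numeric column) maps to labeled numbers, as described above.
def rows_to_labeled_numbers(columns, rows, numeric_column):
    """String columns become labels; the numeric column becomes the value."""
    results = {}
    for row in rows:
        record = dict(zip(columns, row))
        value = record.pop(numeric_column)
        # Each row must be uniquely identified by its labels.
        labels = tuple(sorted(record.items()))
        if labels in results:
            raise ValueError(f"duplicate label set: {labels}")
        results[labels] = value
    return results

table = rows_to_labeled_numbers(
    ["Host", "Disk", "PercentFree"],
    [("web1", "/etc", 3), ("web2", "/var", 4), ("web3", "/var", 8)],
    "PercentFree",
)
```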
&lt;h3 id=&#34;example&#34;&gt;Example&lt;/h3&gt;
&lt;p&gt;For a MySQL table called &amp;ldquo;DiskSpace&amp;rdquo;:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Time&lt;/th&gt;
              &lt;th&gt;Host&lt;/th&gt;
              &lt;th&gt;Disk&lt;/th&gt;
              &lt;th&gt;PercentFree&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;2021-June-7&lt;/td&gt;
              &lt;td&gt;web1&lt;/td&gt;
              &lt;td&gt;/etc&lt;/td&gt;
              &lt;td&gt;3&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;2021-June-7&lt;/td&gt;
              &lt;td&gt;web2&lt;/td&gt;
              &lt;td&gt;/var&lt;/td&gt;
              &lt;td&gt;4&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;2021-June-7&lt;/td&gt;
              &lt;td&gt;web3&lt;/td&gt;
              &lt;td&gt;/var&lt;/td&gt;
              &lt;td&gt;8&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&amp;hellip;&lt;/td&gt;
              &lt;td&gt;&amp;hellip;&lt;/td&gt;
              &lt;td&gt;&amp;hellip;&lt;/td&gt;
              &lt;td&gt;&amp;hellip;&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host and Disk when there is less than 5% free space:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT Host, Disk, CASE WHEN PercentFree &amp;lt; 5.0 THEN PercentFree ELSE 0 END AS PercentFree FROM (
  SELECT
      Host,
      Disk,
      Avg(PercentFree) AS PercentFree
  FROM DiskSpace
  WHERE __timeFilter(Time)
  GROUP BY
    Host,
    Disk
) AS DiskAverages&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This query returns the following Table response to Grafana:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Host&lt;/th&gt;
              &lt;th&gt;Disk&lt;/th&gt;
              &lt;th&gt;PercentFree&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;web1&lt;/td&gt;
              &lt;td&gt;/etc&lt;/td&gt;
              &lt;td&gt;3&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;web2&lt;/td&gt;
              &lt;td&gt;/var&lt;/td&gt;
              &lt;td&gt;4&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;web3&lt;/td&gt;
              &lt;td&gt;/var&lt;/td&gt;
              &lt;td&gt;0&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;When this query is used as the &lt;strong&gt;condition&lt;/strong&gt; in an alert rule, rows with a non-zero value are considered to be alerting. As a result, three alert instances are produced:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Labels&lt;/th&gt;
              &lt;th&gt;Status&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;{Host=web1,disk=/etc}&lt;/td&gt;
              &lt;td&gt;Alerting&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;{Host=web2,disk=/var}&lt;/td&gt;
              &lt;td&gt;Alerting&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;{Host=web3,disk=/var}&lt;/td&gt;
              &lt;td&gt;Normal&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;]]></content><description>&lt;h1 id="alerting-on-numeric-data">Alerting on numeric data&lt;/h1>
&lt;p>This topic describes how Grafana managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data.&lt;/p></description></item><item><title>Alertmanager</title><link>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/alertmanager/</link><pubDate>Sat, 04 Apr 2026 12:26:57 +0000</pubDate><guid>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/alertmanager/</guid><content><![CDATA[&lt;h1 id=&#34;alertmanager&#34;&gt;Alertmanager&lt;/h1&gt;
&lt;p&gt;The Alertmanager helps both group and manage alert rules, adding a layer of orchestration on top of the alerting engines. To learn more, see &lt;a href=&#34;https://prometheus.io/docs/alerting/latest/alertmanager/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus Alertmanager documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Grafana includes built-in support for Prometheus Alertmanager. By default, notifications for Grafana managed alerts are handled by the embedded Alertmanager that is part of core Grafana. You can configure the Alertmanager&amp;rsquo;s contact points, notification policies, silences, and templates from the alerting UI by selecting the &lt;code&gt;Grafana&lt;/code&gt; option from the Alertmanager drop-down.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before v8.2, the configuration of the embedded Alertmanager was shared across organizations. If you are on an older Grafana version, we recommend that you use Grafana alerts only if you have one organization. Otherwise, your contact points are visible to all organizations.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Grafana alerting added support for external Alertmanager configuration. When you add an &lt;a href=&#34;../../../../datasources/alertmanager/&#34;&gt;Alertmanager data source&lt;/a&gt;, the Alertmanager drop-down shows a list of available external Alertmanager data sources. Select a data source to create and manage alerting for standalone Cortex or Loki data sources.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper w-100p &#34;
    style=&#34;max-width: 250px;&#34;
  &gt;&lt;img
      src=&#34;/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif&#34;
      alt=&#34;Select Alertmanager&#34; width=&#34;1148&#34; height=&#34;662&#34; title=&#34;Select Alertmanager&#34;/&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;Select Alertmanager&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;You can configure one or several external Alertmanagers to receive alerts from Grafana. Once configured, both the embedded Alertmanager &lt;strong&gt;and&lt;/strong&gt; any configured external Alertmanagers will receive &lt;em&gt;all&lt;/em&gt; alerts.&lt;/p&gt;
&lt;p&gt;You can do the setup in the &amp;ldquo;Admin&amp;rdquo; tab within the Grafana v8 Alerts UI.&lt;/p&gt;
&lt;h3 id=&#34;add-a-new-external-alertmanager&#34;&gt;Add a new external Alertmanager&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;In the Grafana menu, click the Alerting (bell) icon to open the Alerting page listing existing alerts.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Admin&lt;/strong&gt; and then scroll down to the External Alertmanager section.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add Alertmanager&lt;/strong&gt; and a modal opens.&lt;/li&gt;
&lt;li&gt;Add the URL and the port for the external Alertmanager. You do not need to specify the path suffix, for example, &lt;code&gt;/api/v(1|2)/alerts&lt;/code&gt;. Grafana automatically adds this.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The external URL is listed in the table with a pending status. Once Grafana discovers and verifies the Alertmanager, the status changes to active. No requests are made to the external Alertmanager at this point; the verification only signals that alerts are ready to be sent.&lt;/p&gt;
&lt;h3 id=&#34;edit-an-external-alertmanager&#34;&gt;Edit an external Alertmanager&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Click the pen symbol to the right of the Alertmanager row in the table.&lt;/li&gt;
&lt;li&gt;When the edit modal opens, you can view all the URLs that were added.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The edited URL will be pending until Grafana verifies it again.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper w-100p &#34;
    style=&#34;max-width: 650px;&#34;
  &gt;&lt;img
      src=&#34;/static/img/docs/alerting/unified/ext-alertmanager-active.png&#34;
      alt=&#34;External Alertmanagers&#34; width=&#34;1271&#34; height=&#34;250&#34; title=&#34;External Alertmanagers&#34;/&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;External Alertmanagers&lt;/figcaption&gt;&lt;/figure&gt;
]]></content><description>&lt;h1 id="alertmanager">Alertmanager&lt;/h1>
&lt;p>The Alertmanager helps both group and manage alert rules, adding a layer of orchestration on top of the alerting engines. To learn more, see &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/" target="_blank" rel="noopener noreferrer">Prometheus Alertmanager documentation&lt;/a>.&lt;/p></description></item><item><title>State and health of alerting rules</title><link>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/state-and-health/</link><pubDate>Sat, 04 Apr 2026 12:26:57 +0000</pubDate><guid>https://grafana.com/docs/grafana/v8.4/alerting/unified-alerting/fundamentals/state-and-health/</guid><content><![CDATA[&lt;h1 id=&#34;state-and-health-of-alerting-rules&#34;&gt;State and health of alerting rules&lt;/h1&gt;
&lt;p&gt;The state and health of alerting rules help you understand several key status indicators about your alerts. There are three key components: alert state, alerting rule state, and alerting rule health. Although related, each component conveys subtly different information.&lt;/p&gt;
&lt;h2 id=&#34;alerting-rule-state&#34;&gt;Alerting rule state&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Normal&lt;/strong&gt;: None of the time series returned by the evaluation engine is in a Pending or Firing state.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pending&lt;/strong&gt;: At least one time series returned by the evaluation engine is Pending.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Firing&lt;/strong&gt;: At least one time series returned by the evaluation engine is Firing.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;alert-state&#34;&gt;Alert state&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Normal&lt;/strong&gt;: Condition for the alerting rule is &lt;strong&gt;false&lt;/strong&gt; for every time series returned by the evaluation engine.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alerting&lt;/strong&gt;: Condition of the alerting rule is &lt;strong&gt;true&lt;/strong&gt; for at least one time series returned by the evaluation engine. The duration for which the condition must be true before an alert fires, if set, has been met or exceeded.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pending&lt;/strong&gt;: Condition of the alerting rule is &lt;strong&gt;true&lt;/strong&gt; for at least one time series returned by the evaluation engine. The duration for which the condition must be true before an alert fires, if set, &lt;strong&gt;has not&lt;/strong&gt; been met.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NoData&lt;/strong&gt;: The alerting rule has not returned a time series, all values for the time series are null, or all values for the time series are zero.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error&lt;/strong&gt;: Error when attempting to evaluate an alerting rule.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;alerting-rule-health&#34;&gt;Alerting rule health&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ok&lt;/strong&gt;: No error when evaluating an alerting rule.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error&lt;/strong&gt;: Error when evaluating an alerting rule.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NoData&lt;/strong&gt;: The absence of data in at least one time series returned during a rule evaluation.&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="state-and-health-of-alerting-rules">State and health of alerting rules&lt;/h1>
&lt;p>The state and health of alerting rules help you understand several key status indicators about your alerts. There are three key components: alert state, alerting rule state, and alerting rule health. Although related, each component conveys subtly different information.&lt;/p></description></item></channel></rss>