<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Additional configuration on Grafana Labs</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/</link><description>Recent content in Additional configuration on Grafana Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="/docs/grafana/v12.4/alerting/set-up/index.xml" rel="self" type="application/rss+xml"/><item><title>Configure roles and permissions</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-roles/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-roles/</guid><content><![CDATA[&lt;h1 id=&#34;configure-roles-and-permissions&#34;&gt;Configure roles and permissions&lt;/h1&gt;
&lt;p&gt;This guide explains how to configure roles and permissions for Grafana Alerting in Grafana OSS. You&amp;rsquo;ll learn how to manage access using roles, folder permissions, and contact point permissions.&lt;/p&gt;
&lt;p&gt;A user is any individual who can log in to Grafana. Each user is associated with a role that includes permissions. Permissions determine the tasks a user can perform in the system. For example, the Admin role includes permissions for an administrator to create and delete users.&lt;/p&gt;
&lt;p&gt;For more information, refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/administration/roles-and-permissions/#organization-roles&#34;&gt;Organization roles&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;manage-access-using-roles&#34;&gt;Manage access using roles&lt;/h2&gt;
&lt;p&gt;Grafana OSS has three roles: Admin, Editor, and Viewer.&lt;/p&gt;
&lt;p&gt;The following table describes the access each role provides for Grafana Alerting.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Role&lt;/th&gt;
              &lt;th&gt;Access&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;Viewer&lt;/td&gt;
              &lt;td&gt;Read access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences).&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Editor&lt;/td&gt;
              &lt;td&gt;Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Admin&lt;/td&gt;
              &lt;td&gt;Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning, plus the ability to assign roles.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h2 id=&#34;assign-roles&#34;&gt;Assign roles&lt;/h2&gt;
&lt;p&gt;To assign roles, an admin needs to complete the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Administration&lt;/strong&gt; &amp;gt; &lt;strong&gt;Users and access&lt;/strong&gt; &amp;gt; &lt;strong&gt;Users, Teams, or Service Accounts&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Search for the user, team, or service account you want to assign a role to.&lt;/li&gt;
&lt;li&gt;Add the role you want to assign.&lt;/li&gt;
&lt;/ol&gt;
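&lt;p&gt;The same assignment can be automated through the Grafana HTTP API. The following request is a minimal sketch, not a definitive recipe: the instance URL, the &lt;code&gt;admin:admin&lt;/code&gt; credentials, and the user ID &lt;code&gt;2&lt;/code&gt; are placeholder assumptions for a local test instance.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Assign the Editor organization role to the user with ID 2 (placeholder)
curl -X PATCH http://localhost:3000/api/org/users/2 \
  -u admin:admin \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{&#34;role&#34;: &#34;Editor&#34;}&#39;
&lt;/code&gt;&lt;/pre&gt;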
&lt;h2 id=&#34;manage-access-using-folder-permissions&#34;&gt;Manage access using folder permissions&lt;/h2&gt;
&lt;p&gt;You can extend the access provided by a role to alert rules and rule-specific silences by assigning permissions to individual folders.&lt;/p&gt;
&lt;p&gt;This allows different users, teams, or service accounts to have customized access to modify or silence alert rules in specific folders.&lt;/p&gt;
&lt;p&gt;Refer to the following table for details on the additional access provided by folder permissions:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Folder permission&lt;/th&gt;
              &lt;th&gt;Additional access&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;View&lt;/td&gt;
              &lt;td&gt;No additional access; all permissions are already included in the Viewer role.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Edit&lt;/td&gt;
              &lt;td&gt;Write access to alert rules and their rule-specific silences &lt;em&gt;only&lt;/em&gt; in the given folder and subfolders.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Admin&lt;/td&gt;
              &lt;td&gt;Same additional access as Edit.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;

&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;You can&amp;rsquo;t use folders to customize access to notification resources.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;To manage folder permissions, complete the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the left-side menu, click &lt;strong&gt;Dashboards&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Hover your mouse cursor over a folder and click &lt;strong&gt;Go to folder&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Manage permissions&lt;/strong&gt; from the Folder actions menu.&lt;/li&gt;
&lt;li&gt;Update or add permissions as required.&lt;/li&gt;
&lt;/ol&gt;
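&lt;p&gt;Folder permissions can also be set programmatically with the folder permissions HTTP API. The following request is a minimal sketch; &lt;code&gt;FOLDER_UID&lt;/code&gt;, the team ID, and the credentials are placeholder assumptions. Permission values map to &lt;code&gt;1&lt;/code&gt; (View), &lt;code&gt;2&lt;/code&gt; (Edit), and &lt;code&gt;4&lt;/code&gt; (Admin).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Keep View for the Viewer role and grant Edit to team 1 (placeholders).
# Note: the items listed replace the folder&#39;s existing permissions.
curl -X POST http://localhost:3000/api/folders/FOLDER_UID/permissions \
  -u admin:admin \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{&#34;items&#34;: [
        {&#34;role&#34;: &#34;Viewer&#34;, &#34;permission&#34;: 1},
        {&#34;teamId&#34;: 1, &#34;permission&#34;: 2}
      ]}&#39;
&lt;/code&gt;&lt;/pre&gt;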
&lt;h2 id=&#34;manage-access-to-contact-points&#34;&gt;Manage access to contact points&lt;/h2&gt;
&lt;p&gt;Extend or limit the access provided by a role to contact points by assigning permissions to individual contact points.&lt;/p&gt;
&lt;p&gt;This allows different users, teams, or service accounts to have customized access to read or modify specific contact points.&lt;/p&gt;
&lt;p&gt;Refer to the following table for details on the additional access provided by contact point permissions.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Contact point permission&lt;/th&gt;
              &lt;th&gt;Additional access&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;View&lt;/td&gt;
              &lt;td&gt;View and export the contact point, and select it on the Alert rule edit page.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Edit&lt;/td&gt;
              &lt;td&gt;Update or delete the contact point.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Admin&lt;/td&gt;
              &lt;td&gt;Same additional access as Edit, plus the ability to manage permissions for the contact point. The user also needs permissions to read users and teams.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;assign-contact-point-permissions&#34;&gt;Assign contact point permissions&lt;/h3&gt;
&lt;p&gt;To manage contact point permissions, complete the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the left-side menu, click &lt;strong&gt;Contact points&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Hover your mouse cursor over a contact point and click &lt;strong&gt;More&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Manage permissions&lt;/strong&gt; from the actions menu.&lt;/li&gt;
&lt;li&gt;Update or add permissions as required.&lt;/li&gt;
&lt;/ol&gt;
]]></content><description>&lt;h1 id="configure-roles-and-permissions">Configure roles and permissions&lt;/h1>
&lt;p>This guide explains how to configure roles and permissions for Grafana Alerting for Grafana OSS users. You&amp;rsquo;ll learn how to manage access using roles, folder permissions, and contact point permissions.&lt;/p></description></item><item><title>Configure RBAC</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-rbac/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-rbac/</guid><content><![CDATA[&lt;h1 id=&#34;configure-rbac&#34;&gt;Configure RBAC&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;/docs/grafana/latest/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/&#34;&gt;Role-based access control (RBAC)&lt;/a&gt; for Grafana Enterprise and Grafana Cloud provides a standardized way of granting, changing, and revoking access, so that users can view and modify Grafana resources.&lt;/p&gt;
&lt;p&gt;A user is any individual who can log in to Grafana. Each user has a role that includes permissions. Permissions determine the tasks a user can perform in the system.&lt;/p&gt;
&lt;p&gt;Each permission contains one or more actions and a scope.&lt;/p&gt;
&lt;h2 id=&#34;role-types&#34;&gt;Role types&lt;/h2&gt;
&lt;p&gt;Grafana has three types of roles for managing access:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Basic roles&lt;/strong&gt;: Admin, Editor, Viewer, and No basic role. These are assigned to users and provide default access levels.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fixed roles&lt;/strong&gt;: Predefined groups of permissions for specific use cases. Basic roles automatically include certain fixed roles.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom roles&lt;/strong&gt;: User-defined roles that combine specific permissions for granular access control.&lt;/li&gt;
&lt;/ul&gt;
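&lt;p&gt;As an illustration of how these pieces fit together, a custom role can be provisioned from a YAML file in the &lt;code&gt;provisioning/access-control/&lt;/code&gt; directory. The following is a minimal sketch: the role name and folder UID are hypothetical, and editing rules in practice also requires query access to the data sources they use.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# provisioning/access-control/custom-roles.yaml (hypothetical example)
apiVersion: 2
roles:
  - name: custom:alert.rules:editor
    description: Edit alert rules in one folder and its subfolders
    version: 1
    global: true
    permissions:
      # Writing a rule also requires read access to its folder
      - action: alert.rules:write
        scope: folders:uid:FOLDER_UID
      - action: folders:read
        scope: folders:uid:FOLDER_UID
&lt;/code&gt;&lt;/pre&gt;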
&lt;h2 id=&#34;basic-role-permissions&#34;&gt;Basic role permissions&lt;/h2&gt;
&lt;p&gt;The following table summarizes the default alerting permissions for each basic role.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Capability&lt;/th&gt;
              &lt;th style=&#34;text-align: center&#34;&gt;Admin&lt;/th&gt;
              &lt;th style=&#34;text-align: center&#34;&gt;Editor&lt;/th&gt;
              &lt;th style=&#34;text-align: center&#34;&gt;Viewer&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;View alert rules&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and delete alert rules&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;View silences&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and expire silences&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;View contact points and templates&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and delete contact points&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;View notification policies&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and delete policies&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;View mute timings&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and delete mute timings&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;View alert enrichments&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Create, edit, and delete enrichments&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Access provisioning API&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;Export with decrypted secrets&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;✓&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
              &lt;td style=&#34;text-align: center&#34;&gt;&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;

&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Access to alert rules also requires permission to read the folder containing the rules and permission to query the data sources used in the rules.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;permissions&#34;&gt;Permissions&lt;/h2&gt;
&lt;p&gt;Grafana Alerting has the following permissions organized by resource type.&lt;/p&gt;
&lt;h3 id=&#34;alert-rules&#34;&gt;Alert rules&lt;/h3&gt;
&lt;p&gt;Permissions for managing Grafana-managed alert rules.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules:create&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Create Grafana alert rules in a folder and its subfolders. Combine this permission with &lt;code&gt;folders:read&lt;/code&gt; in a scope that includes the folder and &lt;code&gt;datasources:query&lt;/code&gt; in the scope of data sources the user can query.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read Grafana alert rules in a folder and its subfolders. Combine this permission with &lt;code&gt;folders:read&lt;/code&gt; in a scope that includes the folder.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Update Grafana alert rules in a folder and its subfolders. Combine this permission with &lt;code&gt;folders:read&lt;/code&gt; in a scope that includes the folder. To allow query modifications, add &lt;code&gt;datasources:query&lt;/code&gt; in the scope of data sources the user can query.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules:delete&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Delete Grafana alert rules in a folder and its subfolders. Combine this permission with &lt;code&gt;folders:read&lt;/code&gt; in a scope that includes the folder.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;external-alert-rules&#34;&gt;External alert rules&lt;/h3&gt;
&lt;p&gt;Permissions for managing alert rules in external data sources that support alerting.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules.external:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read alert rules in data sources that support alerting (Prometheus, Mimir, and Loki).&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules.external:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Create, update, and delete alert rules in data sources that support alerting (Mimir and Loki).&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;alert-instances-and-silences&#34;&gt;Alert instances and silences&lt;/h3&gt;
&lt;p&gt;Permissions for managing alert instances and silences in Grafana.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.instances:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read alerts and silences in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.instances:create&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create silences in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.instances:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Update and expire silences in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.silences:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read all general silences and rule-specific silences in a folder and its subfolders.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.silences:create&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Create rule-specific silences in a folder and its subfolders.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.silences:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;folders:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;folders:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Update and expire rule-specific silences in a folder and its subfolders.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;external-alert-instances&#34;&gt;External alert instances&lt;/h3&gt;
&lt;p&gt;Permissions for managing alert instances in external data sources.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.instances.external:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read alerts and silences in data sources that support alerting.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.instances.external:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Manage alerts and silences in data sources that support alerting.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;contact-points&#34;&gt;Contact points&lt;/h3&gt;
&lt;p&gt;Permissions for managing contact points (notification receivers).&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:list&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;List contact points in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read contact points.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers.secrets:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Export contact points with decrypted secrets.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:create&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create a new contact point. The creator is automatically granted full access to the created contact point.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Update existing contact points.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:delete&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Delete existing contact points.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers:test&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Test contact points to verify their configuration. Deprecated; use &lt;code&gt;alert.notifications.receivers.test:create&lt;/code&gt; instead.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.receivers.test:create&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:-&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Test contact points to verify their configuration. Use scope &lt;code&gt;receivers:uid:-&lt;/code&gt; to grant permission to test new integrations.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;receivers.permissions:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read permissions for contact points.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;receivers.permissions:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;receivers:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;receivers:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Manage permissions for contact points.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;notification-policies&#34;&gt;Notification policies&lt;/h3&gt;
&lt;p&gt;Permissions for managing notification policies (routing rules).&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.routes:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read notification policies.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.routes:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create, update, and delete notification policies.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;time-intervals&#34;&gt;Time intervals&lt;/h3&gt;
&lt;p&gt;Permissions for managing mute time intervals.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.time-intervals:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read mute time intervals.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.time-intervals:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create new or update existing mute time intervals.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.time-intervals:delete&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Delete existing mute time intervals.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;templates&#34;&gt;Templates&lt;/h3&gt;
&lt;p&gt;Permissions for managing notification templates.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.templates:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read templates.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.templates:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create new or update existing templates.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.templates:delete&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Delete existing templates.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.templates.test:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Test templates with custom payloads (preview and payload editor functionality).&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;general-notifications&#34;&gt;General notifications&lt;/h3&gt;
&lt;p&gt;Legacy permissions for managing all notification resources.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read all templates, contact points, notification policies, and mute timings in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Manage templates, contact points, notification policies, and mute timings in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;external-notifications&#34;&gt;External notifications&lt;/h3&gt;
&lt;p&gt;Permissions for managing notification resources in external data sources.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.external:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Read templates, contact points, notification policies, and mute timings in data sources that support alerting.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.external:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;datasources:*&lt;/code&gt;&lt;br&gt;&lt;code&gt;datasources:uid:*&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Manage templates, contact points, notification policies, and mute timings in data sources that support alerting.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;h3 id=&#34;provisioning&#34;&gt;Provisioning&lt;/h3&gt;
&lt;p&gt;Permissions for managing alerting resources via the provisioning API.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.provisioning:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.provisioning.secrets:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Same as &lt;code&gt;alert.provisioning:read&lt;/code&gt; plus ability to export resources with decrypted secrets.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.provisioning:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Update all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules.provisioning:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read Grafana alert rules via the provisioning API. More specific than &lt;code&gt;alert.provisioning:read&lt;/code&gt;.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.rules.provisioning:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create, update, and delete Grafana alert rules via the provisioning API. More specific than &lt;code&gt;alert.provisioning:write&lt;/code&gt;.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.provisioning:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read notification resources (contact points, notification policies, templates, time intervals) via the provisioning API. More specific than &lt;code&gt;alert.provisioning:read&lt;/code&gt;.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.notifications.provisioning:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create, update, and delete notification resources via the provisioning API. More specific than &lt;code&gt;alert.provisioning:write&lt;/code&gt;.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.provisioning.provenance:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Set the provisioning status for alerting resources. Cannot be used alone; the user must also have permissions to access the resources.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
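&lt;p&gt;As an illustrative sketch (not an official example), a service account token that carries &lt;code&gt;alert.provisioning:read&lt;/code&gt; or &lt;code&gt;alert.rules.provisioning:read&lt;/code&gt; can list alert rules through the provisioning API. The &lt;code&gt;GRAFANA_URL&lt;/code&gt; and &lt;code&gt;TOKEN&lt;/code&gt; values below are placeholders:&lt;/p&gt;

```shell
# Sketch: list Grafana-managed alert rules via the provisioning API.
# GRAFANA_URL and TOKEN are placeholders for illustration only.
GRAFANA_URL="${GRAFANA_URL:-http://localhost:3000}"
TOKEN="${TOKEN:-replace-with-service-account-token}"

# The request succeeds only if the token's role includes the
# provisioning read permission; folder permissions are not required.
curl -s -H "Authorization: Bearer $TOKEN" \
  "$GRAFANA_URL/api/v1/provisioning/alert-rules" || true
```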
&lt;/section&gt;&lt;h3 id=&#34;alert-enrichments&#34;&gt;Alert enrichments&lt;/h3&gt;
&lt;p&gt;Permissions for managing alert enrichments.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Action&lt;/th&gt;
              &lt;th&gt;Applicable scope&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.enrichments:read&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Read alert enrichment configurations in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;alert.enrichments:write&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;n/a&lt;/td&gt;
              &lt;td&gt;Create, update, and delete alert enrichment configurations in the current organization.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;To help plan your RBAC rollout strategy, refer to &lt;a href=&#34;/docs/grafana/next/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/&#34;&gt;Plan your RBAC rollout strategy&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="configure-rbac">Configure RBAC&lt;/h1>
&lt;p>&lt;a href="/docs/grafana/latest/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/">Role-based access control (RBAC)&lt;/a> for Grafana Enterprise and Grafana Cloud provides a standardized way of granting, changing, and revoking access, so that users can view and modify Grafana resources.&lt;/p></description></item><item><title>Configure Alertmanagers</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-alertmanager/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-alertmanager/</guid><content><![CDATA[&lt;h1 id=&#34;configure-alertmanagers&#34;&gt;Configure Alertmanagers&lt;/h1&gt;
&lt;p&gt;Grafana Alerting is based on the architecture of the Prometheus alerting system. Grafana sends firing and resolved alerts to an Alertmanager, which is responsible for 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/fundamentals/notifications/&#34;&gt;handling notifications&lt;/a&gt;. This architecture decouples alert rule evaluation from notification handling, improving scalability.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;/media/docs/alerting/alerting-alertmanager-architecture.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;/media/docs/alerting/alerting-alertmanager-architecture.png&#34;data-srcset=&#34;/media/docs/alerting/alerting-alertmanager-architecture.png?w=320 320w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=550 550w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=750 750w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=900 900w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=1040 1040w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=1240 1240w, /media/docs/alerting/alerting-alertmanager-architecture.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;A diagram with the alert generator and alert manager architecture&#34;width=&#34;669&#34;height=&#34;240&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/media/docs/alerting/alerting-alertmanager-architecture.png&#34;
            alt=&#34;A diagram with the alert generator and alert manager architecture&#34;width=&#34;669&#34;height=&#34;240&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;Grafana includes a built-in &lt;strong&gt;Grafana Alertmanager&lt;/strong&gt; to handle notifications. This guide shows you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use different &lt;a href=&#34;#types-of-alertmanagers-in-grafana&#34;&gt;types of Alertmanagers&lt;/a&gt; with Grafana&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;#add-an-alertmanager&#34;&gt;Add another Alertmanager&lt;/a&gt; and &lt;a href=&#34;#enable-an-alertmanager-to-receive-grafana-managed-alerts&#34;&gt;enable it to receive all Grafana-managed alerts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Use an &lt;a href=&#34;#use-an-alertmanager-as-a-contact-point-to-receive-specific-alerts&#34;&gt;Alertmanager as a contact point&lt;/a&gt; to route specific alerts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;alertmanager-resources&#34;&gt;Alertmanager resources&lt;/h2&gt;
&lt;p&gt;Each Alertmanager manages its own independent alerting resources, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Contact points and notification templates&lt;/li&gt;
&lt;li&gt;Notification policies and mute timings&lt;/li&gt;
&lt;li&gt;Silences&lt;/li&gt;
&lt;li&gt;Active notifications&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use the &lt;code&gt;Choose Alertmanager&lt;/code&gt; dropdown on the pages for these resources to switch between Alertmanagers and view or manage their resources.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;/media/docs/alerting/alerting-choose-alertmanager.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;/media/docs/alerting/alerting-choose-alertmanager.png&#34;data-srcset=&#34;/media/docs/alerting/alerting-choose-alertmanager.png?w=320 320w, /media/docs/alerting/alerting-choose-alertmanager.png?w=550 550w, /media/docs/alerting/alerting-choose-alertmanager.png?w=750 750w, /media/docs/alerting/alerting-choose-alertmanager.png?w=900 900w, /media/docs/alerting/alerting-choose-alertmanager.png?w=1040 1040w, /media/docs/alerting/alerting-choose-alertmanager.png?w=1240 1240w, /media/docs/alerting/alerting-choose-alertmanager.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;A screenshot choosing an Alertmanager in the notification policies UI&#34;width=&#34;950&#34;height=&#34;117&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/media/docs/alerting/alerting-choose-alertmanager.png&#34;
            alt=&#34;A screenshot choosing an Alertmanager in the notification policies UI&#34;width=&#34;950&#34;height=&#34;117&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;h2 id=&#34;types-of-alertmanagers-in-grafana&#34;&gt;Types of Alertmanagers in Grafana&lt;/h2&gt;
&lt;p&gt;Grafana can be configured to handle alert notifications using various Alertmanagers.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Grafana Alertmanager&lt;/strong&gt;: Grafana includes a built-in Alertmanager that extends the &lt;a href=&#34;https://prometheus.io/docs/alerting/latest/alertmanager/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus Alertmanager&lt;/a&gt;. This is the default Alertmanager and is referred to as &amp;ldquo;Grafana&amp;rdquo; in the user interface. It can only handle Grafana-managed alerts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cloud Alertmanager&lt;/strong&gt;: Each Grafana Cloud instance comes preconfigured with an additional Alertmanager (&lt;code&gt;grafanacloud-STACK_NAME-ngalertmanager&lt;/code&gt;) from the Mimir (Prometheus) instance running in the Grafana Cloud Stack.&lt;/p&gt;
&lt;p&gt;The Cloud Alertmanager is available exclusively in Grafana Cloud and can handle both Grafana-managed and data source-managed alerts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Other Alertmanagers&lt;/strong&gt;: Grafana Alerting also supports sending alerts to other Alertmanagers, such as the &lt;a href=&#34;https://prometheus.io/docs/alerting/latest/alertmanager/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Prometheus Alertmanager&lt;/a&gt;, which can handle both Grafana-managed and data source-managed alerts.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Grafana Alerting supports using a combination of Alertmanagers and can &lt;a href=&#34;#enable-an-alertmanager-to-receive-grafana-managed-alerts&#34;&gt;enable other Alertmanagers to receive Grafana-managed alerts&lt;/a&gt;. The decision often depends on your alerting setup and where your alerts are generated.&lt;/p&gt;
&lt;p&gt;For example, if you already have an Alertmanager running in your on-premises or cloud infrastructure to handle Prometheus alerts, you can forward Grafana-managed alerts to the same Alertmanager for unified notification handling.&lt;/p&gt;
&lt;h2 id=&#34;add-an-alertmanager&#34;&gt;Add an Alertmanager&lt;/h2&gt;
&lt;p&gt;Alertmanagers are configured as data sources in Grafana. To add an Alertmanager, complete the following steps.&lt;/p&gt;


&lt;div data-shared=&#34;alerts/add-alertmanager-ds.md&#34;&gt;
            &lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Connections&lt;/strong&gt; in the left-side menu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under Your connections, click &lt;strong&gt;Data sources&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter &lt;code&gt;Alertmanager&lt;/code&gt; in the search bar.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Alertmanager&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Settings&lt;/strong&gt; tab of the data source is displayed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the data source&amp;rsquo;s basic configuration options:&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Name&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Name&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;Sets the name you use to refer to the data source&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Default&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;Sets whether the data source is pre-selected for new panels and queries&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Alertmanager Implementation&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;Alertmanager implementation. &lt;strong&gt;Mimir&lt;/strong&gt;, &lt;strong&gt;Cortex&lt;/strong&gt;, and &lt;strong&gt;Prometheus&lt;/strong&gt; are supported&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Receive Grafana Alerts&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;When enabled, the Alertmanager can receive Grafana-managed alerts. &lt;strong&gt;Important:&lt;/strong&gt; This works only if receiving alerts is enabled for the Alertmanager in the Grafana Alerting Settings page&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;HTTP URL&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;Sets the HTTP protocol, IP, and port of your Alertmanager instance, such as &lt;code&gt;https://alertmanager.example.org:9093&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Access&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;Only &lt;strong&gt;Server&lt;/strong&gt; access mode is functional&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;

        
&lt;p&gt;For provisioning instructions, refer to the 
    &lt;a href=&#34;/docs/grafana/v12.4/datasources/alertmanager/&#34;&gt;Alertmanager data source documentation&lt;/a&gt;.&lt;/p&gt;
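&lt;p&gt;For example, a file-provisioned Alertmanager data source might look like the following sketch; the name and URL are placeholders, and the &lt;code&gt;jsonData&lt;/code&gt; keys shown mirror the UI options described above:&lt;/p&gt;

```yaml
apiVersion: 1

datasources:
  - name: External Alertmanager   # placeholder name
    type: alertmanager
    access: proxy
    url: http://alertmanager.example.org:9093
    jsonData:
      # Alertmanager implementation: mimir, cortex, or prometheus
      implementation: prometheus
      # Corresponds to the "Receive Grafana Alerts" toggle
      handleGrafanaManagedAlerts: true
```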
&lt;p&gt;After adding an Alertmanager, you can use the Grafana Alerting UI to manage notification policies, contact points, silences, and other alerting resources from within Grafana.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;When using Prometheus, you can manage silences in the Grafana Alerting UI. However, other Alertmanager resources such as contact points, notification policies, and templates are read-only because the Prometheus Alertmanager HTTP API does not support updates for these resources.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;When using multiple Alertmanagers, use the &lt;code&gt;Choose Alertmanager&lt;/code&gt; dropdown to switch between Alertmanagers.&lt;/p&gt;
&lt;h2 id=&#34;enable-an-alertmanager-to-receive-grafana-managed-alerts&#34;&gt;Enable an Alertmanager to receive Grafana-managed alerts&lt;/h2&gt;
&lt;p&gt;After enabling &lt;strong&gt;Receive Grafana Alerts&lt;/strong&gt; in the Data Source Settings, you must also configure the Alertmanager in the Alerting Settings page. Grafana supports enabling one or multiple Alertmanagers to receive all generated Grafana-managed alerts.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the left-side menu, click &lt;strong&gt;Alerts &amp;amp; IRM&lt;/strong&gt; and then &lt;strong&gt;Alerting&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Settings&lt;/strong&gt; to view the list of configured Alertmanagers.&lt;/li&gt;
&lt;li&gt;For the selected Alertmanager, click the &lt;strong&gt;Enable/Disable&lt;/strong&gt; button to toggle receiving Grafana-managed alerts. When activated, the Alertmanager displays &lt;code&gt;Receiving Grafana-managed alerts&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p &#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link&#34;
        href=&#34;/media/docs/alerting/grafana-alerting-settings.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload &#34;
          data-src=&#34;/media/docs/alerting/grafana-alerting-settings.png&#34;data-srcset=&#34;/media/docs/alerting/grafana-alerting-settings.png?w=320 320w, /media/docs/alerting/grafana-alerting-settings.png?w=550 550w, /media/docs/alerting/grafana-alerting-settings.png?w=750 750w, /media/docs/alerting/grafana-alerting-settings.png?w=900 900w, /media/docs/alerting/grafana-alerting-settings.png?w=1040 1040w, /media/docs/alerting/grafana-alerting-settings.png?w=1240 1240w, /media/docs/alerting/grafana-alerting-settings.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;Grafana Alerting Settings page&#34;width=&#34;2648&#34;height=&#34;962&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/media/docs/alerting/grafana-alerting-settings.png&#34;
            alt=&#34;Grafana Alerting Settings page&#34;width=&#34;2648&#34;height=&#34;962&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;All Grafana-managed alerts are forwarded to Alertmanagers marked as &lt;code&gt;Receiving Grafana-managed alerts&lt;/code&gt;.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Grafana Alerting does not support forwarding Grafana-managed alerts to the AlertManager in Amazon Managed Service for Prometheus. For more details, refer to &lt;a href=&#34;https://github.com/grafana/grafana/issues/64064&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;this GitHub issue&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;h2 id=&#34;use-an-alertmanager-as-a-contact-point-to-receive-specific-alerts&#34;&gt;Use an Alertmanager as a contact point to receive specific alerts&lt;/h2&gt;
&lt;p&gt;The previous instructions enable sending &lt;strong&gt;all&lt;/strong&gt; Grafana-managed alerts to an Alertmanager.&lt;/p&gt;
&lt;p&gt;To send &lt;strong&gt;specific&lt;/strong&gt; alerts to an Alertmanager, configure the Alertmanager as a contact point. You can then assign this contact point to notification policies or individual alert rules.&lt;/p&gt;
&lt;p&gt;For detailed instructions, refer to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/configure-notifications/manage-contact-points/integrations/configure-alertmanager/&#34;&gt;Alertmanager contact point&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/alerting-rules/create-grafana-managed-rule/#configure-notifications&#34;&gt;Configure Grafana-managed alert rules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/configure-notifications/create-notification-policy/&#34;&gt;Configure notification policies&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;manage-alertmanager-configurations&#34;&gt;Manage Alertmanager configurations&lt;/h2&gt;
&lt;p&gt;On the Settings page, you can also manage your Alertmanager configurations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Manage version snapshots for the built-in Alertmanager, which allows administrators to roll back unintentional changes or mistakes in the Alertmanager configuration.&lt;/li&gt;
&lt;li&gt;Compare the historical snapshot with the latest configuration to see which changes were made.&lt;/li&gt;
&lt;/ul&gt;
]]></content><description>&lt;h1 id="configure-alertmanagers">Configure Alertmanagers&lt;/h1>
&lt;p>Grafana Alerting is based on the architecture of the Prometheus alerting system. Grafana sends firing and resolved alerts to an Alertmanager, which is responsible for
&lt;a href="/docs/grafana/v12.4/alerting/fundamentals/notifications/">handling notifications&lt;/a>. This architecture decouples alert rule evaluation from notification handling, improving scalability.&lt;/p></description></item><item><title>Configure alert state history</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-alert-state-history/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-alert-state-history/</guid><content><![CDATA[&lt;h1 id=&#34;configure-alert-state-history&#34;&gt;Configure alert state history&lt;/h1&gt;
&lt;p&gt;Alerting can record all alert rule state changes for your Grafana-managed alert rules in a Loki or Prometheus instance, or in both.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;With Prometheus, you can query the &lt;code&gt;GRAFANA_ALERTS&lt;/code&gt; metric for alert state changes in &lt;strong&gt;Grafana Explore&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;With Loki, you can query and view alert state changes in &lt;strong&gt;Grafana Explore&lt;/strong&gt; and the 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor-status/view-alert-state-history/&#34;&gt;Grafana Alerting History views&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;configure-loki-for-alert-state&#34;&gt;Configure Loki for alert state&lt;/h2&gt;
&lt;p&gt;The following steps describe a basic configuration:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Loki&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The default Loki settings might need tweaking, as the state history view can query up to 30 days of data.&lt;/p&gt;
&lt;p&gt;The following change to the default configuration should work for most instances, but review the full Loki configuration settings and adjust them to your needs.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;limits_config:
  split_queries_by_interval: &amp;#39;24h&amp;#39;
  max_query_parallelism: 32&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Because this might impact the performance of an existing Loki instance, consider using a separate Loki instance for the alert state history.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Grafana&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The following Grafana configuration instructs Alerting to write alert state history to a Loki instance:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.state_history]
enabled = true
backend = loki

# The URL of the Loki server
loki_remote_url = http://localhost:3100

[feature_toggles]
enable = alertingCentralAlertHistory&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure the Loki data source in Grafana&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Add the 
    &lt;a href=&#34;/docs/grafana/v12.4/datasources/loki/&#34;&gt;Loki data source&lt;/a&gt; to Grafana.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If everything is set up correctly, you can access the 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor-status/view-alert-state-history/&#34;&gt;History view and History page&lt;/a&gt; to view and filter alert state history. You can also use &lt;strong&gt;Grafana Explore&lt;/strong&gt; to query the Loki instance; see 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor/&#34;&gt;Alerting Meta monitoring&lt;/a&gt; for details.&lt;/p&gt;
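&lt;p&gt;As a starting point for querying the raw entries in &lt;strong&gt;Grafana Explore&lt;/strong&gt;, a minimal LogQL sketch (assuming the default &lt;code&gt;from=&#34;state-history&#34;&lt;/code&gt; stream label that Grafana Alerting attaches to state history entries) is:&lt;/p&gt;

```logql
# Sketch: view alert state history log lines as parsed JSON
{from="state-history"} | json
```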
&lt;h2 id=&#34;configure-prometheus-for-alert-state-grafana_alerts-metric&#34;&gt;Configure Prometheus for alert state (GRAFANA_ALERTS metric)&lt;/h2&gt;
&lt;p&gt;You can also configure a Prometheus instance to store alert state changes for your Grafana-managed alert rules. However, unlike Loki, this setup does not enable the &lt;strong&gt;Grafana Alerting History views&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Instead, Grafana Alerting writes alert state data to the &lt;code&gt;GRAFANA_ALERTS&lt;/code&gt; metric, similar to how Prometheus writes to the &lt;code&gt;ALERTS&lt;/code&gt; metric.&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;GRAFANA_ALERTS{alertname=&amp;#34;&amp;#34;, alertstate=&amp;#34;&amp;#34;, grafana_alertstate=&amp;#34;&amp;#34;, grafana_rule_uid=&amp;#34;&amp;#34;, &amp;lt;additional alert labels&amp;gt;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The following steps describe a basic configuration:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Prometheus&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Enable the remote write receiver in your Prometheus instance by setting the &lt;code&gt;--web.enable-remote-write-receiver&lt;/code&gt; command-line flag. This enables the endpoint to receive alert state data from Grafana Alerting.&lt;/p&gt;
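&lt;p&gt;For example, assuming a local Prometheus binary and a &lt;code&gt;prometheus.yml&lt;/code&gt; in the working directory, the flag is passed at startup. The receiver then accepts remote write requests on the standard &lt;code&gt;/api/v1/write&lt;/code&gt; endpoint:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-bash&#34;&gt;# Example startup command; binary and config paths depend on your installation.
./prometheus \
  --config.file=prometheus.yml \
  --web.enable-remote-write-receiver&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;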
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure the Prometheus data source in Grafana&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Add the 
    &lt;a href=&#34;/docs/grafana/v12.4/datasources/prometheus/&#34;&gt;Prometheus data source&lt;/a&gt; to Grafana.&lt;/p&gt;
&lt;p&gt;In the 
    &lt;a href=&#34;/docs/grafana/v12.4/datasources/prometheus/configure/&#34;&gt;Prometheus data source configuration options&lt;/a&gt;, set the &lt;strong&gt;Prometheus type&lt;/strong&gt; to match your Prometheus instance type. Grafana Alerting uses this option to identify the remote write endpoint.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Grafana&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The following Grafana configuration instructs Alerting to write alert state history to a Prometheus instance:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.state_history]
enabled = true
backend = prometheus
# Target data source UID for writing alert state changes.
prometheus_target_datasource_uid = &amp;lt;DATA_SOURCE_UID&amp;gt;

# (Optional) Metric name for the alert state metric. Default is &amp;#34;GRAFANA_ALERTS&amp;#34;.
# prometheus_metric_name = GRAFANA_ALERTS
# (Optional) Timeout for writing alert state data to the target data source. Default is 10s.
# prometheus_write_timeout = 10s&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can then use &lt;strong&gt;Grafana Explore&lt;/strong&gt; to query the alert state metric. For details, refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor/&#34;&gt;Alerting Meta monitoring&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;promQL&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-promql&#34;&gt;GRAFANA_ALERTS{alertstate=&amp;#39;firing&amp;#39;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h2 id=&#34;configure-loki-and-prometheus-for-alert-state&#34;&gt;Configure Loki and Prometheus for alert state&lt;/h2&gt;
&lt;p&gt;You can also configure both Loki and Prometheus to record alert state changes for your Grafana-managed alert rules.&lt;/p&gt;
&lt;p&gt;Start with the same setup steps as shown in the previous &lt;a href=&#34;#configure-loki-for-alert-state&#34;&gt;Loki&lt;/a&gt; and &lt;a href=&#34;#configure-prometheus-for-alert-state-grafana_alerts-metric&#34;&gt;Prometheus&lt;/a&gt; sections. Then, adjust your Grafana configuration as follows:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting.state_history]
enabled = true
backend = multiple

primary = loki
# URL of the Loki server.
loki_remote_url = http://localhost:3100

secondaries = prometheus
# Target data source UID for writing alert state changes.
prometheus_target_datasource_uid = &amp;lt;DATA_SOURCE_UID&amp;gt;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
]]></content><description>&lt;h1 id="configure-alert-state-history">Configure alert state history&lt;/h1>
&lt;p>Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki or Prometheus instance, or in both.&lt;/p></description></item><item><title>Provision Alerting resources</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/</guid><content><![CDATA[&lt;h1 id=&#34;provision-alerting-resources&#34;&gt;Provision Alerting resources&lt;/h1&gt;
&lt;p&gt;Alerting infrastructure is often complex, with many pieces of the pipeline living in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p&gt;
&lt;p&gt;You can import alert rules, contact points, notification policies, mute timings, and templates.&lt;/p&gt;
&lt;p&gt;Imported alerting resources cannot be edited in the Grafana UI in the same way as resources that were created there. You can only edit imported contact points, notification policies, templates, and mute timings in the source where they were created. For example, if you manage your alerting resources using files from disk, you cannot edit the data in Terraform or from within Grafana.&lt;/p&gt;
&lt;h2 id=&#34;import-alerting-resources&#34;&gt;Import alerting resources&lt;/h2&gt;
&lt;p&gt;Choose from the options below to import (or provision) your Grafana Alerting resources.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/file-provisioning/&#34;&gt;Use configuration files to provision your alerting resources&lt;/a&gt;, such as alert rules and contact points, through files on disk.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;You cannot edit provisioned resources from files in the Grafana UI.&lt;/li&gt;
&lt;li&gt;Provisioning with configuration files is not available in Grafana Cloud.&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/terraform-provisioning/&#34;&gt;Terraform to provision alerting resources&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/http-api-provisioning/&#34;&gt;Alerting provisioning HTTP API&lt;/a&gt; to manage alerting resources.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;The Alerting provisioning HTTP API can be used to create, modify, and delete resources for Grafana-managed alerts.&lt;/p&gt;
&lt;p&gt;To manage resources related to data source-managed alerts, including recording rules, use the Mimir or Cortex tool.&lt;/p&gt;
&lt;p&gt;The JSON output from the majority of Alerting HTTP endpoints isn&amp;rsquo;t compatible with provisioning via configuration files.&lt;/p&gt;
&lt;p&gt;If you need the alerting resources for file provisioning, use 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/export-alerting-resources/#export-api-endpoints&#34;&gt;Export Alerting endpoints&lt;/a&gt; to return or download them in provisioning format.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;/ol&gt;
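&lt;p&gt;As an illustration of the first option, a minimal provisioning file (for example, &lt;code&gt;provisioning/alerting/contactpoints.yaml&lt;/code&gt;) that defines a single email contact point might look like the following sketch; the name, UID, and address are placeholders:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: 1
contactPoints:
  - orgId: 1
    name: ops-email # placeholder contact point name
    receivers:
      - uid: ops-email-uid # placeholder unique identifier
        type: email
        settings:
          addresses: ops@example.com&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;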
&lt;h2 id=&#34;export-alerting-resources&#34;&gt;Export alerting resources&lt;/h2&gt;
&lt;p&gt;You can export both manually created and provisioned alerting resources. You can also edit and export an alert rule without applying the changes.&lt;/p&gt;
&lt;p&gt;For detailed instructions on the various export options, refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/provision-alerting-resources/export-alerting-resources/&#34;&gt;Export alerting resources&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;view-provisioned-alerting-resources&#34;&gt;View provisioned alerting resources&lt;/h2&gt;
&lt;p&gt;To view your provisioned resources in Grafana, complete the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open your Grafana instance.&lt;/li&gt;
&lt;li&gt;Navigate to Alerting.&lt;/li&gt;
&lt;li&gt;Click an alerting resource folder, for example, Alert rules.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Provisioned resources are labeled &lt;strong&gt;Provisioned&lt;/strong&gt;, so that it is clear that they were not created manually.&lt;/p&gt;
]]></content><description>&lt;h1 id="provision-alerting-resources">Provision Alerting resources&lt;/h1>
&lt;p>Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Importing and exporting (or provisioning) your alerting resources in Grafana Alerting makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.&lt;/p></description></item><item><title>Configure high availability</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-high-availability/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/configure-high-availability/</guid><content><![CDATA[&lt;h1 id=&#34;configure-high-availability&#34;&gt;Configure high availability&lt;/h1&gt;
&lt;p&gt;Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivery of notifications. In this model, alert rules are evaluated in the alert generator and notifications are delivered by the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.&lt;/p&gt;
&lt;figure
    class=&#34;figure-wrapper figure-wrapper__lightbox w-100p docs-image--no-shadow&#34;
    style=&#34;max-width: 750px;&#34;
    itemprop=&#34;associatedMedia&#34;
    itemscope=&#34;&#34;
    itemtype=&#34;http://schema.org/ImageObject&#34;
  &gt;&lt;a
        class=&#34;lightbox-link captioned&#34;
        href=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
        itemprop=&#34;contentUrl&#34;
      &gt;&lt;div class=&#34;img-wrapper w-100p h-auto&#34;&gt;&lt;img
          class=&#34;lazyload mb-0&#34;
          data-src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;data-srcset=&#34;/static/img/docs/alerting/unified/high-availability-ua.png?w=320 320w, /static/img/docs/alerting/unified/high-availability-ua.png?w=550 550w, /static/img/docs/alerting/unified/high-availability-ua.png?w=750 750w, /static/img/docs/alerting/unified/high-availability-ua.png?w=900 900w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1040 1040w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1240 1240w, /static/img/docs/alerting/unified/high-availability-ua.png?w=1920 1920w&#34;data-sizes=&#34;auto&#34;alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;/&gt;
        &lt;noscript&gt;
          &lt;img
            src=&#34;/static/img/docs/alerting/unified/high-availability-ua.png&#34;
            alt=&#34;High availability&#34;width=&#34;828&#34;height=&#34;262&#34;title=&#34;High availability&#34;class=&#34;docs-image--no-shadow&#34;/&gt;
        &lt;/noscript&gt;&lt;/div&gt;&lt;figcaption class=&#34;w-100p caption text-gray-13  &#34;&gt;High availability&lt;/figcaption&gt;&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;When running multiple instances of Grafana, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated by the number of running Grafana instances. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules are still evaluated and notifications for alerts are still sent.&lt;/p&gt;
&lt;p&gt;You can find this duplication in state history, and it is a good way to &lt;a href=&#34;#verify-your-high-availability-setup&#34;&gt;verify your high availability setup&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid duplicate notifications. The Alertmanagers use a gossip protocol to share information among themselves and prevent sending duplicate notifications.&lt;/p&gt;
&lt;p&gt;Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. It takes the opinion that duplicate or out-of-order notifications are better than no notifications.&lt;/p&gt;
&lt;p&gt;Alertmanagers also gossip silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically, and during graceful shutdown.&lt;/p&gt;
&lt;h2 id=&#34;enable-alerting-high-availability-using-memberlist&#34;&gt;Enable alerting high availability using Memberlist&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Before you begin&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Since gossiping of notifications and silences uses both TCP and UDP port &lt;code&gt;9094&lt;/code&gt;, ensure that each Grafana instance is able to accept incoming connections on these ports.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To enable high availability support:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;[ha_peers]&lt;/code&gt; to the list of hosts for each Grafana instance in the cluster (using a format of host:port), for example, &lt;code&gt;ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094&lt;/code&gt;.
You must add at least one (1) Grafana instance to the &lt;code&gt;ha_peers&lt;/code&gt; list.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;[ha_listen_address]&lt;/code&gt; to the instance IP address using a format of &lt;code&gt;host:port&lt;/code&gt; (or the &lt;a href=&#34;https://kubernetes.io/docs/concepts/workloads/pods/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;Pod&amp;rsquo;s&lt;/a&gt; IP in the case of using Kubernetes).
By default, it is set to listen to all interfaces (&lt;code&gt;0.0.0.0&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;[ha_advertise_address]&lt;/code&gt; to the instance&amp;rsquo;s hostname or IP address in the format &amp;ldquo;host:port&amp;rdquo;. Use this setting when the instance is behind NAT (Network Address Translation), such as in a Docker Swarm or Kubernetes service, where external and internal addresses differ. This address helps other cluster instances communicate with it. The setting is optional.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;[ha_peer_timeout]&lt;/code&gt; in the &lt;code&gt;[unified_alerting]&lt;/code&gt; section of the custom.ini to specify the time to wait for an instance to send a notification via the Alertmanager. The default value is 15s, but it may increase if Grafana servers are located in different geographic regions or if the network latency between them is high.&lt;/li&gt;
&lt;/ol&gt;
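&lt;p&gt;Putting the steps above together, a minimal &lt;code&gt;custom.ini&lt;/code&gt; for a three-instance cluster might look like the following sketch; the IP addresses are examples:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
# Listen on all interfaces for gossip traffic.
ha_listen_address = &amp;#34;0.0.0.0:9094&amp;#34;
# All instances in the cluster, including this one (example addresses).
ha_peers = &amp;#34;10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094&amp;#34;
# This instance&amp;#39;s reachable address (example).
ha_advertise_address = &amp;#34;10.0.0.5:9094&amp;#34;
ha_peer_timeout = 15s&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;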
&lt;p&gt;For a demo, see this &lt;a href=&#34;https://github.com/grafana/alerting-ha-docker-examples/tree/main/memberlist&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;example using Docker Compose&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;enable-alerting-high-availability-using-redis&#34;&gt;Enable alerting high availability using Redis&lt;/h2&gt;
&lt;p&gt;As an alternative to Memberlist, you can configure Redis to enable high availability. Redis standalone, Redis Cluster, and Redis Sentinel modes are supported.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Memberlist is the preferred option for high availability. Use Redis only in environments where direct communication between Grafana servers is not possible, such as when TCP or UDP ports are blocked.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Make sure you have a Redis server that supports pub/sub. If you use a proxy in front of your Redis cluster, make sure the proxy supports pub/sub.&lt;/li&gt;
&lt;li&gt;In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the &lt;code&gt;[unified_alerting]&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;ha_redis_address&lt;/code&gt; to the Redis server address or addresses Grafana should connect to. It can be a single Redis address if using Redis standalone, or a list of comma-separated addresses if using Redis Cluster or Sentinel.&lt;/li&gt;
&lt;li&gt;Optional: Set &lt;code&gt;ha_redis_cluster_mode_enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; if you are using Redis Cluster.&lt;/li&gt;
&lt;li&gt;Optional: Set &lt;code&gt;ha_redis_sentinel_mode_enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; if you are using Redis Sentinel. Also set &lt;code&gt;ha_redis_sentinel_master_name&lt;/code&gt; to the Redis Sentinel master name.&lt;/li&gt;
&lt;li&gt;Optional: Set the username and password if authentication is enabled on the Redis server using &lt;code&gt;ha_redis_username&lt;/code&gt; and &lt;code&gt;ha_redis_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Optional: Set the username and password if authentication is enabled on Redis Sentinel using &lt;code&gt;ha_redis_sentinel_username&lt;/code&gt; and &lt;code&gt;ha_redis_sentinel_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Optional: Set &lt;code&gt;ha_redis_prefix&lt;/code&gt; to something unique if you plan to share the Redis server with multiple Grafana instances.&lt;/li&gt;
&lt;li&gt;Optional: Set &lt;code&gt;ha_redis_tls_enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; and configure the corresponding &lt;code&gt;ha_redis_tls_*&lt;/code&gt; fields to secure communications between Grafana and Redis with Transport Layer Security (TLS).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;[ha_advertise_address]&lt;/code&gt;, for example, &lt;code&gt;ha_advertise_address = &amp;quot;${POD_IP}:9094&amp;quot;&lt;/code&gt;. This is required if the instance doesn&amp;rsquo;t have an IP address that is part of RFC 6890 with a default route.&lt;/li&gt;
&lt;/ol&gt;
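&lt;p&gt;For example, a configuration for a standalone Redis server with authentication enabled might look like the following sketch; the address, credentials, and prefix are placeholders:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
# Address of the standalone Redis server (placeholder).
ha_redis_address = redis:6379
# Credentials, if authentication is enabled on the server (placeholders).
ha_redis_username = grafana
ha_redis_password = example-password
# Unique prefix when sharing the Redis server with other Grafana instances.
ha_redis_prefix = grafana-alerting-
ha_advertise_address = &amp;#34;${POD_IP}:9094&amp;#34;&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;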
&lt;p&gt;For a demo, see this &lt;a href=&#34;https://github.com/grafana/alerting-ha-docker-examples/tree/main/redis&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;example using Docker Compose&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;enable-alerting-high-availability-using-kubernetes&#34;&gt;Enable alerting high availability using Kubernetes&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You can expose the Pod IP &lt;a href=&#34;https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;through an environment variable&lt;/a&gt; via the container definition.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the port 9094 to the Grafana deployment:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;ports:
  - name: grafana
    containerPort: 3000
    protocol: TCP
  - name: gossip-tcp
    containerPort: 9094
    protocol: TCP
  - name: gossip-udp
    containerPort: 9094
    protocol: UDP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the environment variables to the Grafana deployment:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a headless service that returns the Pod IP instead of the service IP, which is what &lt;code&gt;ha_peers&lt;/code&gt; requires:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: &amp;#39;None&amp;#39;
  ports:
    - port: 9094
  selector:
    app: grafana&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure your Grafana deployment has a label matching the selector, for example, &lt;code&gt;app: grafana&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the following to grafana.ini:&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;toml&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-toml&#34;&gt;[unified_alerting]
enabled = true
ha_listen_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peers = &amp;#34;grafana-alerting.grafana:9094&amp;#34;
ha_advertise_address = &amp;#34;${POD_IP}:9094&amp;#34;
ha_peer_timeout = 15s
ha_reconnect_timeout = 2m&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;verify-your-high-availability-setup&#34;&gt;Verify your high availability setup&lt;/h2&gt;
&lt;p&gt;When running multiple Grafana instances, all alert rules are evaluated on every instance. This multiple evaluation of alert rules is visible in the 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor-status/view-alert-state-history/&#34;&gt;state history&lt;/a&gt; and provides a straightforward way to verify that your high availability configuration is working correctly.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;If you use a mix of &lt;code&gt;execute_alerts=false&lt;/code&gt; and &lt;code&gt;execute_alerts=true&lt;/code&gt; on the HA nodes, the instances with &lt;code&gt;execute_alerts=false&lt;/code&gt; do not show any alert status, because alert state is not shared among the Grafana instances.&lt;/p&gt;
&lt;p&gt;The HA settings (&lt;code&gt;ha_peers&lt;/code&gt;, and so on) apply only to communication between Alertmanagers, which synchronize silences and attempt to avoid duplicate notifications, as described in the introduction.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;p&gt;You can also confirm your high availability setup by monitoring Alertmanager metrics exposed by Grafana.&lt;/p&gt;


&lt;div class=&#34;admonition admonition-note&#34;&gt;&lt;blockquote&gt;&lt;p class=&#34;title text-uppercase&#34;&gt;Note&lt;/p&gt;&lt;p&gt;Starting in Grafana v12.4, these metrics are prefixed with &lt;code&gt;grafana_&lt;/code&gt; (for example, &lt;code&gt;grafana_alertmanager_cluster_members&lt;/code&gt;). If you are upgrading from an earlier version, update your dashboards and alert rules accordingly.&lt;/p&gt;&lt;/blockquote&gt;&lt;/div&gt;

&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Metric&lt;/th&gt;
              &lt;th&gt;Description&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_members&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Number indicating current number of members in cluster.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_messages_received_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_messages_received_size_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages received.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_messages_sent_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total number of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_messages_sent_size_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total size of cluster messages sent.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_messages_publish_failures_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total number of messages that failed to be published.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_pings_seconds&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Histogram of latencies for ping messages.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_cluster_pings_failures_total&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;Total number of failed pings.&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;code&gt;grafana_alertmanager_peer_position&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;The position an Alertmanager instance believes it holds, which defines its role in the cluster. Peers should be numbered sequentially, starting from zero.&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;You can confirm the number of Grafana instances in your alerting high availability setup by querying the &lt;code&gt;grafana_alertmanager_cluster_members&lt;/code&gt; and &lt;code&gt;grafana_alertmanager_peer_position&lt;/code&gt; metrics.&lt;/p&gt;
&lt;p&gt;Note that these alerting high availability metrics are exposed via the &lt;code&gt;/metrics&lt;/code&gt; endpoint in Grafana, and are not automatically collected or displayed. If you have a Prometheus instance connected to Grafana, add a &lt;code&gt;scrape_config&lt;/code&gt; to scrape Grafana metrics and then query these metrics in Explore.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: grafana
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - grafana:3000&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;For more information on monitoring alerting metrics, refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor/&#34;&gt;Alerting meta-monitoring&lt;/a&gt;. For a demo, see &lt;a href=&#34;https://github.com/grafana/alerting-ha-docker-examples/&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;alerting high availability examples using Docker Compose&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;prevent-duplicate-notifications&#34;&gt;Prevent duplicate notifications&lt;/h2&gt;
&lt;p&gt;In high-availability mode, each Grafana instance runs its own preconfigured Alertmanager to handle alert notifications.&lt;/p&gt;
&lt;p&gt;When multiple Grafana instances are running, all alert rules are evaluated on each instance. By default, each instance sends firing alerts to its own Alertmanager. As a result, notification handling is duplicated across all running Grafana instances.&lt;/p&gt;
&lt;p&gt;Alertmanagers in HA mode communicate with each other to coordinate notification delivery. However, this setup can sometimes lead to duplicated or out-of-order notifications. By design, HA prioritizes sending duplicate notifications over the risk of missing notifications.&lt;/p&gt;
&lt;p&gt;To avoid duplicate notifications, you can configure a shared Alertmanager to manage notifications for all Grafana instances. For more information, refer to 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/configure-alertmanager/&#34;&gt;add an external alertmanager&lt;/a&gt;.&lt;/p&gt;
]]></content><description>&lt;h1 id="configure-high-availability">Configure high availability&lt;/h1>
&lt;p>Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivering of notifications. In this model, the evaluation of alert rules is done in the alert generator and the delivering of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.&lt;/p></description></item><item><title>Meta monitoring</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/meta-monitoring/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/meta-monitoring/</guid><content><![CDATA[&lt;h1 id=&#34;meta-monitoring&#34;&gt;Meta monitoring&lt;/h1&gt;
&lt;p&gt;Monitor your alerting metrics to ensure you identify potential issues before they become critical.&lt;/p&gt;
&lt;p&gt;Meta monitoring is the process of monitoring your monitoring system and alerting when your monitoring is not working as it should.&lt;/p&gt;
&lt;p&gt;To enable meta monitoring, Grafana provides predefined metrics.&lt;/p&gt;
&lt;p&gt;Identify which metrics are critical to your monitoring system (in this case, Grafana) and then set up how you want to monitor them.&lt;/p&gt;
&lt;p&gt;You can use meta-monitoring metrics to understand the health of your alerting system in the following ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Optional: Create a dashboard in Grafana that uses this metric in a panel (just like you would for any other kind of metric).&lt;/li&gt;
&lt;li&gt;Optional: Create an alert rule in Grafana that checks this metric regularly (just like you would do for any other kind of alert rule).&lt;/li&gt;
&lt;li&gt;Optional: Use the Explore module in Grafana.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;metrics-for-grafana-managed-alerts&#34;&gt;Metrics for Grafana-managed alerts&lt;/h2&gt;
&lt;p&gt;To meta monitor Grafana-managed alerts, you can collect two types of metrics in a Prometheus instance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;State history metric (&lt;code&gt;GRAFANA_ALERTS&lt;/code&gt;)&lt;/strong&gt; — Exported by Grafana Alerting as part of alert state history.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scraped metrics&lt;/strong&gt; — Exported by Grafana’s &lt;code&gt;/metrics&lt;/code&gt; endpoint to monitor alerting activity and performance.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You need a Prometheus-compatible server to collect and store these metrics.&lt;/p&gt;
&lt;h3 id=&#34;grafana_alerts-metric&#34;&gt;&lt;code&gt;GRAFANA_ALERTS&lt;/code&gt; metric&lt;/h3&gt;
&lt;p&gt;If you have configured 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/configure-alert-state-history/&#34;&gt;Prometheus for alert state history&lt;/a&gt;, Grafana writes alert state changes to the &lt;code&gt;GRAFANA_ALERTS&lt;/code&gt; metric:&lt;/p&gt;

&lt;div class=&#34;code-snippet code-snippet__mini&#34;&gt;&lt;div class=&#34;lang-toolbar__mini&#34;&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet code-snippet__border&#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-none&#34;&gt;GRAFANA_ALERTS{alertname=&amp;#34;&amp;#34;, alertstate=&amp;#34;&amp;#34;, grafana_alertstate=&amp;#34;&amp;#34;, grafana_rule_uid=&amp;#34;&amp;#34;, &amp;lt;additional alert labels&amp;gt;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;This &lt;code&gt;GRAFANA_ALERTS&lt;/code&gt; metric is compatible with the &lt;code&gt;ALERTS&lt;/code&gt; metric used by Prometheus Alerting and includes two additional labels:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A new &lt;code&gt;grafana_rule_uid&lt;/code&gt; label for the UID of the Grafana rule.&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;grafana_alertstate&lt;/code&gt; label for the Grafana alert state, which differs slightly from the equivalent Prometheus state included in the &lt;code&gt;alertstate&lt;/code&gt; label.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Alert labels are automatically converted before being written to Prometheus to ensure compatibility. Prometheus requires label names to start with a letter or underscore (&lt;code&gt;_&lt;/code&gt;), followed only by letters, numbers, or additional underscores. Invalid characters are replaced during conversion. For example, &lt;code&gt;1my-label&lt;/code&gt; becomes &lt;code&gt;_my_label&lt;/code&gt;.&lt;/p&gt;
&lt;section class=&#34;expand-table-wrapper&#34;&gt;&lt;div class=&#34;button-div&#34;&gt;
      &lt;button class=&#34;expand-table-btn&#34;&gt;Expand table&lt;/button&gt;
    &lt;/div&gt;&lt;div class=&#34;responsive-table-wrapper&#34;&gt;
    &lt;table&gt;
      &lt;thead&gt;
          &lt;tr&gt;
              &lt;th&gt;Grafana state&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;alertstate&lt;/code&gt;&lt;/th&gt;
              &lt;th&gt;&lt;code&gt;grafana_alertstate&lt;/code&gt;&lt;/th&gt;
          &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Alerting&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;firing&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;alerting&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Recovering&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;firing&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;recovering&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Pending&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;pending&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;pending&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Error&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;firing&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;error&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;NoData&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;firing&lt;/code&gt;&lt;/td&gt;
              &lt;td&gt;&lt;code&gt;nodata&lt;/code&gt;&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
              &lt;td&gt;&lt;strong&gt;Normal&lt;/strong&gt;&lt;/td&gt;
              &lt;td&gt;&lt;em&gt;(no metric emitted)&lt;/em&gt;&lt;/td&gt;
              &lt;td&gt;&lt;em&gt;(no metric emitted)&lt;/em&gt;&lt;/td&gt;
          &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;
&lt;/section&gt;&lt;p&gt;You can then query this metric like any other Prometheus metric:&lt;/p&gt;



  

  

  






  

  

  



  &lt;div class=&#34;code&#34; x-data=&#34;app_code([&amp;#34;firing-alerts&amp;#34;,&amp;#34;recovering-alerts&amp;#34;,&amp;#34;critical-alerts-in-pending&amp;#34;], false)&#34; x-init=&#34;init()&#34; data-codetoggle=&#34;true&#34;&gt;
    &lt;div class=&#34;toggle-toolbar &#34;&gt;
      &lt;div&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;firing-alerts&#39; }&#34; @click=&#34;$store.code.language = &#39;firing-alerts&#39;&#34;&gt;
              &lt;span&gt;firing-alerts&lt;/span&gt;
            &lt;/button&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;recovering-alerts&#39; }&#34; @click=&#34;$store.code.language = &#39;recovering-alerts&#39;&#34;&gt;
              &lt;span&gt;recovering-alerts&lt;/span&gt;
            &lt;/button&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;critical-alerts-in-pending&#39; }&#34; @click=&#34;$store.code.language = &#39;critical-alerts-in-pending&#39;&#34;&gt;
              &lt;span&gt;critical-alerts-in-pending&lt;/span&gt;
            &lt;/button&gt;&lt;/div&gt;
      &lt;div class=&#34;d-flex&#34;&gt;&lt;span class=&#34;code-clipboard&#34; x-ref=&#34;tooltip&#34;&gt;
          &lt;button @click=&#34;copy()&#34;&gt;
            &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
            &lt;span&gt;Copy&lt;/span&gt;
          &lt;/button&gt;
        &lt;/span&gt;
      &lt;/div&gt;
      &lt;div class=&#34;toggle-toolbar__border&#34;&gt;&lt;/div&gt;
    &lt;/div&gt;
    
    &lt;div class=&#34;code-rendered&#34; &gt;
      
&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;firing-alerts&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-firing-alerts&#34;&gt;GRAFANA_ALERTS{grafana_alertstate=&amp;#34;alerting&amp;#34;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;recovering-alerts&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-recovering-alerts&#34;&gt;GRAFANA_ALERTS{grafana_alertstate=&amp;#34;recovering&amp;#34;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;critical-alerts-in-pending&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-critical-alerts-in-pending&#34;&gt;GRAFANA_ALERTS{grafana_alertstate=&amp;#34;pending&amp;#34;, severity=&amp;#34;critical&amp;#34;}&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

    &lt;/div&gt;
  &lt;/div&gt;


&lt;h3 id=&#34;scraped-metrics&#34;&gt;Scraped metrics&lt;/h3&gt;
&lt;p&gt;To collect scraped Alerting metrics, configure Prometheus to scrape metrics from Grafana.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: grafana
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - grafana:3000&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The Grafana ruler, which is responsible for evaluating alert rules, and the Grafana Alertmanager, which is responsible for sending notifications of firing and resolved alerts, provide a number of metrics that let you observe them.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_alerts&#34;&gt;grafana_alerting_alerts&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of &lt;code&gt;normal&lt;/code&gt;, &lt;code&gt;pending&lt;/code&gt;, &lt;code&gt;alerting&lt;/code&gt;, &lt;code&gt;nodata&lt;/code&gt; and &lt;code&gt;error&lt;/code&gt; alerts. For example, you might want to create an alert that fires when &lt;code&gt;grafana_alerting_alerts{state=&amp;quot;error&amp;quot;}&lt;/code&gt; is greater than 0.&lt;/p&gt;
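&lt;p&gt;The suggested alert condition can be written as the following query (the threshold of &lt;code&gt;0&lt;/code&gt; comes from the example above; adjust it to your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;grafana_alerting_alerts{state=&amp;#34;error&amp;#34;} &amp;gt; 0&lt;/code&gt;&lt;/pre&gt;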
&lt;h4 id=&#34;grafana_alerting_schedule_alert_rules&#34;&gt;grafana_alerting_schedule_alert_rules&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the number of alert rules scheduled. An alert rule is scheduled unless it is paused, and the value of this metric should match the total number of non-paused alert rules in Grafana.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_schedule_periodic_duration_seconds_bucket&#34;&gt;grafana_alerting_schedule_periodic_duration_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the time it takes to process an individual tick in the scheduler that evaluates alert rules. If the scheduler takes longer than 10 seconds to process a tick, pending evaluations start to accumulate, and alert rules might be evaluated later than expected.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_schedule_query_alert_rules_duration_seconds_bucket&#34;&gt;grafana_alerting_schedule_query_alert_rules_duration_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you how long it takes the scheduler to fetch the latest rules from the database. If this metric is elevated, &lt;code&gt;schedule_periodic_duration_seconds&lt;/code&gt; is also elevated.&lt;/p&gt;
&lt;h4 id=&#34;grafana_alerting_scheduler_behind_seconds&#34;&gt;grafana_alerting_scheduler_behind_seconds&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the number of seconds that the scheduler is behind where it should be. This number increases when &lt;code&gt;schedule_periodic_duration_seconds&lt;/code&gt; takes longer than 10 seconds and decreases when it takes less than 10 seconds. The smallest possible value of this metric is 0.&lt;/p&gt;
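&lt;p&gt;For example, you might alert when the scheduler falls noticeably behind; the 60-second threshold here is purely illustrative, not a recommended value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;grafana_alerting_scheduler_behind_seconds &amp;gt; 60&lt;/code&gt;&lt;/pre&gt;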
&lt;h4 id=&#34;grafana_alerting_notification_latency_seconds_bucket&#34;&gt;grafana_alerting_notification_latency_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the number of seconds taken to send notifications for firing and resolved alerts. This metric lets you observe slow or over-utilized integrations, such as an SMTP server that is being given emails faster than it can send them.&lt;/p&gt;
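&lt;p&gt;As with any Prometheus histogram, you can estimate percentile latency with &lt;code&gt;histogram_quantile&lt;/code&gt;; for example, the 95th percentile over the last five minutes (the range is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;histogram_quantile(0.95, sum by (le) (rate(grafana_alerting_notification_latency_seconds_bucket[5m])))&lt;/code&gt;&lt;/pre&gt;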
&lt;h4 id=&#34;grafana_alerting_state_history_writes_failed_total&#34;&gt;grafana_alerting_state_history_writes_failed_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of failed writes to the configured alert state history backend. It includes a &lt;code&gt;backend&lt;/code&gt; label to distinguish between different backends (such as &lt;code&gt;loki&lt;/code&gt; or &lt;code&gt;prometheus&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;For example, you might want to create an alert that fires when &lt;code&gt;grafana_alerting_state_history_writes_failed_total{backend=&amp;quot;prometheus&amp;quot;}&lt;/code&gt; is greater than 0 to detect when Prometheus remote write is failing.&lt;/p&gt;
&lt;h2 id=&#34;logs-for-grafana-managed-alerts&#34;&gt;Logs for Grafana-managed alerts&lt;/h2&gt;
&lt;p&gt;If you have configured 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/set-up/configure-alert-state-history/&#34;&gt;Loki for alert state history&lt;/a&gt;, logs related to state changes in Grafana-managed alerts are stored in the Loki data source.&lt;/p&gt;
&lt;p&gt;You can use &lt;strong&gt;Grafana Explore&lt;/strong&gt; and the Loki query editor to search for alert state changes.&lt;/p&gt;



  

  

  






  

  

  



  &lt;div class=&#34;code&#34; x-data=&#34;app_code([&amp;#34;basic-query&amp;#34;,&amp;#34;additional-filters&amp;#34;,&amp;#34;failing-rules&amp;#34;], false)&#34; x-init=&#34;init()&#34; data-codetoggle=&#34;true&#34;&gt;
    &lt;div class=&#34;toggle-toolbar &#34;&gt;
      &lt;div&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;basic-query&#39; }&#34; @click=&#34;$store.code.language = &#39;basic-query&#39;&#34;&gt;
              &lt;span&gt;basic-query&lt;/span&gt;
            &lt;/button&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;additional-filters&#39; }&#34; @click=&#34;$store.code.language = &#39;additional-filters&#39;&#34;&gt;
              &lt;span&gt;additional-filters&lt;/span&gt;
            &lt;/button&gt;&lt;button class=&#34;toggle-toolbar__item&#34; :class=&#34;{ &#39;toggle-toolbar__item-active&#39;: active === &#39;failing-rules&#39; }&#34; @click=&#34;$store.code.language = &#39;failing-rules&#39;&#34;&gt;
              &lt;span&gt;failing-rules&lt;/span&gt;
            &lt;/button&gt;&lt;/div&gt;
      &lt;div class=&#34;d-flex&#34;&gt;&lt;span class=&#34;code-clipboard&#34; x-ref=&#34;tooltip&#34;&gt;
          &lt;button @click=&#34;copy()&#34;&gt;
            &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
            &lt;span&gt;Copy&lt;/span&gt;
          &lt;/button&gt;
        &lt;/span&gt;
      &lt;/div&gt;
      &lt;div class=&#34;toggle-toolbar__border&#34;&gt;&lt;/div&gt;
    &lt;/div&gt;
    
    &lt;div class=&#34;code-rendered&#34; &gt;
      
&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;basic-query&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-basic-query&#34;&gt;{from=&amp;#34;state-history&amp;#34;} | json&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;additional-filters&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-additional-filters&#34;&gt;{from=&amp;#34;state-history&amp;#34;} | json | previous=~&amp;#34;Normal.*&amp;#34; | current=~&amp;#34;Alerting.*&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;failing-rules&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-failing-rules&#34;&gt;{from=&amp;#34;state-history&amp;#34;} | json | current=~&amp;#34;Error.*&amp;#34;&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;

    &lt;/div&gt;
  &lt;/div&gt;


&lt;p&gt;In the &lt;strong&gt;Logs&lt;/strong&gt; view, you can review details for individual alerts by selecting fields such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;previous&lt;/code&gt;: previous alert instance state.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;current&lt;/code&gt;: current alert instance state.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ruleTitle&lt;/code&gt;: alert rule title.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ruleID&lt;/code&gt; and &lt;code&gt;ruleUID&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;labels_alertname&lt;/code&gt;, &lt;code&gt;labels_new_label&lt;/code&gt;, and &lt;code&gt;labels_grafana_folder&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Additional available fields.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Alternatively, you can access the 
    &lt;a href=&#34;/docs/grafana/v12.4/alerting/monitor-status/view-alert-state-history/&#34;&gt;History page&lt;/a&gt; in Grafana to visualize and filter state changes for individual alerts or all alerts.&lt;/p&gt;
&lt;h2 id=&#34;metrics-for-mimir-managed-alerts&#34;&gt;Metrics for Mimir-managed alerts&lt;/h2&gt;
&lt;p&gt;To meta monitor Grafana Mimir-managed alerts, open source and on-premise users need a Prometheus/Mimir server, or another metrics database to collect and store metrics exported by the Mimir ruler.&lt;/p&gt;
&lt;h4 id=&#34;rule_evaluation_failures_total&#34;&gt;rule_evaluation_failures_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the total number of rule evaluation failures.&lt;/p&gt;
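&lt;p&gt;Because this is a counter, alert on its rate of increase rather than its absolute value; for example (the range and threshold are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;rate(rule_evaluation_failures_total[5m]) &amp;gt; 0&lt;/code&gt;&lt;/pre&gt;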
&lt;h2 id=&#34;metrics-for-alertmanager&#34;&gt;Metrics for Alertmanager&lt;/h2&gt;
&lt;p&gt;To meta monitor the Alertmanager, you need a Prometheus/Mimir server, or another metrics database to collect and store metrics exported by Alertmanager.&lt;/p&gt;
&lt;p&gt;For example, if you are using Prometheus you should add a &lt;code&gt;scrape_config&lt;/code&gt; to Prometheus to scrape metrics from your Alertmanager.&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;YAML&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-yaml&#34;&gt;- job_name: alertmanager
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  static_configs:
    - targets:
        - alertmanager:9093&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The following is a list of available metrics for Alertmanager.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_alerts&#34;&gt;alertmanager_alerts&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of active, suppressed, and unprocessed alerts in Alertmanager. Suppressed alerts are silenced alerts, and unprocessed alerts are alerts that have been sent to the Alertmanager but have not been processed.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_alerts_invalid_total&#34;&gt;alertmanager_alerts_invalid_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of invalid alerts that were sent to Alertmanager. This counter should remain at 0, so in most cases, create an alert that fires whenever this metric increases.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_notifications_total&#34;&gt;alertmanager_notifications_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you how many notifications have been sent by Alertmanager. The metric uses a label &amp;ldquo;integration&amp;rdquo; to show the number of notifications sent by integration, such as email.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_notifications_failed_total&#34;&gt;alertmanager_notifications_failed_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you how many notifications have failed in total. This metric also uses a label &amp;ldquo;integration&amp;rdquo; to show the number of failed notifications by integration, such as failed emails. In most cases, use the &lt;code&gt;rate&lt;/code&gt; function to understand how often notifications are failing to be sent.&lt;/p&gt;
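&lt;p&gt;For example, the following query shows the per-second rate of failed email notifications over the last five minutes (the range and label value are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;rate(alertmanager_notifications_failed_total{integration=&amp;#34;email&amp;#34;}[5m])&lt;/code&gt;&lt;/pre&gt;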
&lt;h4 id=&#34;alertmanager_notification_latency_seconds_bucket&#34;&gt;alertmanager_notification_latency_seconds_bucket&lt;/h4&gt;
&lt;p&gt;This metric is a histogram that shows you the amount of time it takes Alertmanager to send notifications and for those notifications to be accepted by the receiving service. This metric uses a label &amp;ldquo;integration&amp;rdquo; to show the amount of time by integration. For example, you can use this metric to show the 95th percentile latency of sending emails.&lt;/p&gt;
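&lt;p&gt;For example, the 95th percentile latency of sending emails can be estimated with a query like the following (the range is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;histogram_quantile(0.95, sum by (le) (rate(alertmanager_notification_latency_seconds_bucket{integration=&amp;#34;email&amp;#34;}[5m])))&lt;/code&gt;&lt;/pre&gt;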
&lt;h2 id=&#34;metrics-for-alertmanager-in-high-availability-mode&#34;&gt;Metrics for Alertmanager in high availability mode&lt;/h2&gt;
&lt;p&gt;If you are using Alertmanager in high availability mode there are a number of additional metrics that you might want to create alerts for.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_members&#34;&gt;alertmanager_cluster_members&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the current number of members in the cluster. The value of this gauge should be the same across all Alertmanagers. If different Alertmanagers show different numbers of members, this indicates an issue with your Alertmanager cluster. Look at the metrics and logs from your Alertmanagers to better understand what might be going wrong.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_failed_peers&#34;&gt;alertmanager_cluster_failed_peers&lt;/h4&gt;
&lt;p&gt;This metric is a gauge that shows you the current number of failed peers.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_health_score&#34;&gt;alertmanager_cluster_health_score&lt;/h4&gt;
&lt;p&gt;This metric is a gauge showing the health score of the Alertmanager. Lower values are better, and zero means the Alertmanager is healthy.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_peer_info&#34;&gt;alertmanager_cluster_peer_info&lt;/h4&gt;
&lt;p&gt;This metric is a gauge. It has a constant value &lt;code&gt;1&lt;/code&gt;, and contains a label called &amp;ldquo;peer&amp;rdquo; containing the Peer ID of each known peer.&lt;/p&gt;
&lt;h4 id=&#34;alertmanager_cluster_reconnections_failed_total&#34;&gt;alertmanager_cluster_reconnections_failed_total&lt;/h4&gt;
&lt;p&gt;This metric is a counter that shows you the number of failed peer connection attempts. In most cases, use the &lt;code&gt;rate&lt;/code&gt; function to understand how often reconnections fail, as frequent failures can indicate an issue or instability in your network.&lt;/p&gt;
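&lt;p&gt;For example (the range is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-promql&#34;&gt;rate(alertmanager_cluster_reconnections_failed_total[5m])&lt;/code&gt;&lt;/pre&gt;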
]]></content><description>&lt;h1 id="meta-monitoring">Meta monitoring&lt;/h1>
&lt;p>Monitor your alerting metrics to ensure you identify potential issues before they become critical.&lt;/p>
&lt;p>Meta monitoring is the process of monitoring your monitoring system and alerting when your monitoring is not working as it should.&lt;/p></description></item><item><title>Performance considerations and limitations</title><link>https://grafana.com/docs/grafana/v12.4/alerting/set-up/performance-limitations/</link><pubDate>Fri, 03 Apr 2026 12:35:46 -0500</pubDate><guid>https://grafana.com/docs/grafana/v12.4/alerting/set-up/performance-limitations/</guid><content><![CDATA[&lt;h1 id=&#34;performance-considerations-and-limitations&#34;&gt;Performance considerations and limitations&lt;/h1&gt;
&lt;p&gt;Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of an individual virtual machine maxes out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p&gt;
&lt;p&gt;Evaluating alerting rules consumes RAM and CPU to compute the output of an alerting query, and network resources to send alert notifications and write the results to the Grafana SQL database. The configuration of individual alert rules affects the resource consumption and, therefore, the maximum number of rules a given configuration can support.&lt;/p&gt;
&lt;p&gt;The following section provides a list of alerting performance considerations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frequency of rule evaluation. The &amp;ldquo;Evaluate Every&amp;rdquo; property of an alert rule controls how often the rule is evaluated. Use the lowest acceptable evaluation frequency to support more concurrent rules.&lt;/li&gt;
&lt;li&gt;Cardinality of the rule&amp;rsquo;s result set. For example, suppose you are monitoring API response errors for every API path, on every virtual machine in your fleet. This set has a cardinality of &lt;em&gt;n&lt;/em&gt; paths multiplied by &lt;em&gt;v&lt;/em&gt; VMs. You can reduce the cardinality of a result set, for example by monitoring errors per VM instead of errors per path per VM.&lt;/li&gt;
&lt;li&gt;Complexity of the alerting query. Queries that data sources can process and respond to quickly consume fewer resources. This consideration is less important than the two above, but once you have reduced those as much as possible, looking at individual query performance could make a difference.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each evaluation of an alert rule generates a set of alert instances, one for each member of the result set. The state of all the instances is written to the &lt;code&gt;alert_instance&lt;/code&gt; table in the Grafana SQL database. This volume of write-heavy operations can cause issues when using SQLite.&lt;/p&gt;
&lt;p&gt;Grafana Alerting exposes a metric, &lt;code&gt;grafana_alerting_rule_evaluations_total&lt;/code&gt;, that counts the number of alert rule evaluations. To get a feel for the influence of rule evaluations on your Grafana instance, you can observe the rate of evaluations and compare it with resource consumption. In a Prometheus-compatible database, you can use the query &lt;code&gt;rate(grafana_alerting_rule_evaluations_total[5m])&lt;/code&gt; to compute the rate over 5-minute windows. Keep in mind that this isn&amp;rsquo;t the full picture of rule evaluation. For example, the load is unevenly distributed if some rules evaluate every 10 seconds and others every 30 minutes.&lt;/p&gt;
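&lt;p&gt;As a sketch, assuming your Prometheus-compatible database scrapes Grafana&amp;rsquo;s metrics endpoint, the query above can be aggregated across all series to get a single evaluations-per-second figure:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sum(rate(grafana_alerting_rule_evaluations_total[5m]))&lt;/code&gt;&lt;/pre&gt;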
&lt;p&gt;These factors all affect the load on the Grafana instance, but you should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.&lt;/p&gt;
&lt;h2 id=&#34;limited-rule-sources-support&#34;&gt;Limited rule sources support&lt;/h2&gt;
&lt;p&gt;Grafana Alerting can retrieve alerting and recording rules &lt;strong&gt;stored&lt;/strong&gt; in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.&lt;/p&gt;
&lt;p&gt;At this time, it does not support reading or writing alerting rules from any data sources other than those previously mentioned.&lt;/p&gt;
&lt;h2 id=&#34;prometheus-version-support&#34;&gt;Prometheus version support&lt;/h2&gt;
&lt;p&gt;The latest two minor versions of both Prometheus and Alertmanager are supported. We cannot guarantee that older versions work.&lt;/p&gt;
&lt;p&gt;For example, if the current Prometheus version is &lt;code&gt;2.31.1&lt;/code&gt;, versions &amp;gt;= &lt;code&gt;2.29.0&lt;/code&gt; are supported.&lt;/p&gt;
&lt;h2 id=&#34;the-grafana-alertmanager-can-only-receive-grafana-managed-alerts&#34;&gt;The Grafana Alertmanager can only receive Grafana managed alerts&lt;/h2&gt;
&lt;p&gt;Grafana cannot be used to receive external alerts. You can only send alerts to the Grafana Alertmanager using Grafana managed alerts.&lt;/p&gt;
&lt;p&gt;You can send Grafana-managed alerts to an external Alertmanager. You can find this option in the Admin tab on the Alerting page.&lt;/p&gt;
&lt;p&gt;For more information, refer to &lt;a href=&#34;https://github.com/grafana/grafana/issues/73447&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;this GitHub issue&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;high-load-on-database-caused-by-a-high-number-of-alert-instances&#34;&gt;High load on database caused by a high number of alert instances&lt;/h2&gt;
&lt;p&gt;If you have a high number of alert rules or alert instances, the load on the database can get very high.&lt;/p&gt;
&lt;p&gt;By default, Grafana performs one SQL update per alert rule after each evaluation, which updates all alert instances belonging to the rule.&lt;/p&gt;
&lt;p&gt;You can change this behavior by disabling the &lt;code&gt;alertingSaveStateCompressed&lt;/code&gt; feature flag. In this case, Grafana performs a separate SQL update for each state change of an alert instance. This configuration is rarely recommended, as it can add significant database overhead for alert rules with many instances.&lt;/p&gt;
&lt;h3 id=&#34;save-state-periodically&#34;&gt;Save state periodically&lt;/h3&gt;
&lt;p&gt;You can also reduce database load by writing states periodically instead of after every evaluation.&lt;/p&gt;
&lt;p&gt;There are two approaches for periodic state saving:&lt;/p&gt;
&lt;h4 id=&#34;compressed-periodic-saves&#34;&gt;Compressed periodic saves&lt;/h4&gt;
&lt;p&gt;You can combine compressed alert state storage with periodic saves by enabling both &lt;code&gt;alertingSaveStateCompressed&lt;/code&gt; and &lt;code&gt;alertingSaveStatePeriodic&lt;/code&gt; feature toggles together.&lt;/p&gt;
&lt;p&gt;This approach groups all alert instances by rule UID and compresses them together for efficient storage.&lt;/p&gt;
&lt;p&gt;When both feature toggles are enabled, Grafana will save compressed alert states at the interval specified by &lt;code&gt;state_periodic_save_interval&lt;/code&gt;. Note that in compressed mode, the &lt;code&gt;state_periodic_save_batch_size&lt;/code&gt; setting is ignored as the system groups instances by rule UID rather than by batch size.&lt;/p&gt;
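&lt;p&gt;A minimal configuration sketch for this approach, assuming the feature toggles are set individually in the &lt;code&gt;[feature_toggles]&lt;/code&gt; section of your Grafana configuration file (the interval value shown is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-ini&#34;&gt;[feature_toggles]
alertingSaveStateCompressed = true
alertingSaveStatePeriodic = true

[unified_alerting]
state_periodic_save_interval = 5m&lt;/code&gt;&lt;/pre&gt;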
&lt;h4 id=&#34;batch-based-periodic-saves&#34;&gt;Batch-based periodic saves&lt;/h4&gt;
&lt;p&gt;Alternatively, you can use batch-based periodic saves without compression:&lt;/p&gt;
&lt;p&gt;This approach processes individual alert instances in batches of a specified size.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Enable the &lt;code&gt;alertingSaveStatePeriodic&lt;/code&gt; feature toggle.&lt;/li&gt;
&lt;li&gt;Disable the &lt;code&gt;alertingSaveStateCompressed&lt;/code&gt; feature toggle.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By default, Grafana saves the states to the database every 5 minutes and on each shutdown. The periodic interval
can also be configured using the &lt;code&gt;state_periodic_save_interval&lt;/code&gt; configuration flag. During this process, Grafana deletes all existing alert instances from the database and then writes the entire current set of instances back in batches in a single transaction.
Configure the size of each batch using the &lt;code&gt;state_periodic_save_batch_size&lt;/code&gt; configuration option.&lt;/p&gt;
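&lt;p&gt;A minimal configuration sketch for batch-based periodic saves, again assuming individual entries in the &lt;code&gt;[feature_toggles]&lt;/code&gt; section (the values shown are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-ini&#34;&gt;[feature_toggles]
alertingSaveStatePeriodic = true
alertingSaveStateCompressed = false

[unified_alerting]
state_periodic_save_interval = 5m
state_periodic_save_batch_size = 100&lt;/code&gt;&lt;/pre&gt;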
&lt;h5 id=&#34;jitter-for-batch-based-periodic-saves&#34;&gt;Jitter for batch-based periodic saves&lt;/h5&gt;
&lt;p&gt;To further distribute database load, you can enable jitter for periodic state saves by setting &lt;code&gt;state_periodic_save_jitter_enabled = true&lt;/code&gt;. When jitter is enabled, instead of saving all batches simultaneously, Grafana spreads the batch writes across a calculated time window of 85% of the save interval.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How jitter works:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Calculates delays for each batch: &lt;code&gt;delay = (batchIndex * timeWindow) / (totalBatches - 1)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Time window uses 85% of save interval for safety margin&lt;/li&gt;
&lt;li&gt;Batches are evenly distributed across the time window&lt;/li&gt;
&lt;li&gt;All operations occur within a single database transaction&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Configuration example:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&#34;code-snippet &#34;&gt;&lt;div class=&#34;lang-toolbar&#34;&gt;
    &lt;span class=&#34;lang-toolbar__item lang-toolbar__item-active&#34;&gt;ini&lt;/span&gt;
    &lt;span class=&#34;code-clipboard&#34;&gt;
      &lt;button x-data=&#34;app_code_snippet()&#34; x-init=&#34;init()&#34; @click=&#34;copy()&#34;&gt;
        &lt;img class=&#34;code-clipboard__icon&#34; src=&#34;/media/images/icons/icon-copy-small-2.svg&#34; alt=&#34;Copy code to clipboard&#34; width=&#34;14&#34; height=&#34;13&#34;&gt;
        &lt;span&gt;Copy&lt;/span&gt;
      &lt;/button&gt;
    &lt;/span&gt;
    &lt;div class=&#34;lang-toolbar__border&#34;&gt;&lt;/div&gt;
  &lt;/div&gt;&lt;div class=&#34;code-snippet &#34;&gt;
    &lt;pre data-expanded=&#34;false&#34;&gt;&lt;code class=&#34;language-ini&#34;&gt;[unified_alerting]
state_periodic_save_jitter_enabled = true
state_periodic_save_interval = 1m
state_periodic_save_batch_size = 100&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Performance impact:&lt;/strong&gt;
For 2000 alert instances with 1-minute interval and 100 batch size:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creates 20 batches (2000 ÷ 100)&lt;/li&gt;
&lt;li&gt;Spreads writes across 51 seconds (85% of 60s)&lt;/li&gt;
&lt;li&gt;Batch writes occur every ~2.68 seconds&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This helps reduce database load spikes in environments with high alert cardinality by distributing writes over time rather than concentrating them at the beginning of each save cycle.&lt;/p&gt;
&lt;p&gt;The time it takes to write to the database periodically can be monitored using the &lt;code&gt;state_full_sync_duration_seconds&lt;/code&gt; metric
that is exposed by Grafana.&lt;/p&gt;
&lt;p&gt;If Grafana crashes or is force killed, then the database can be up to &lt;code&gt;state_periodic_save_interval&lt;/code&gt; seconds out of date.
When Grafana restarts, the UI might show incorrect state for some alerts until the alerts are re-evaluated.
In some cases, alerts that were firing before the crash might fire again.
If this happens, Grafana might send duplicate notifications for firing alerts.&lt;/p&gt;
&lt;h2 id=&#34;alert-rule-migrations-for-grafana-1160&#34;&gt;Alert rule migrations for Grafana 11.6.0&lt;/h2&gt;
&lt;p&gt;When you upgrade to Grafana 11.6.0, a migration is performed on the &lt;code&gt;alert_rule_versions&lt;/code&gt; table. If the 11.6.0 upgrade causes a migration failure, your &lt;code&gt;alert_rule_versions&lt;/code&gt; table has too many rows. To fix this, truncate the &lt;code&gt;alert_rule_versions&lt;/code&gt; table so that the migration can complete.&lt;/p&gt;
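&lt;p&gt;As a sketch, the truncation can be done with a statement like the following; note that &lt;code&gt;TRUNCATE&lt;/code&gt; is not available in SQLite, where a plain &lt;code&gt;DELETE&lt;/code&gt; serves the same purpose. Back up the database before running either statement.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- MySQL / PostgreSQL
TRUNCATE TABLE alert_rule_versions;
-- SQLite
DELETE FROM alert_rule_versions;&lt;/code&gt;&lt;/pre&gt;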
]]></content><description>&lt;h1 id="performance-considerations-and-limitations">Performance considerations and limitations&lt;/h1>
&lt;p>Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of individual virtual machines max out. This topic discusses performance considerations resulting from multi-dimensional alerting.&lt;/p></description></item></channel></rss>