Overview
This document provides an overview of topics related to RabbitMQ monitoring. Monitoring RabbitMQ and applications that use it is critically important. Monitoring helps detect issues before they affect the rest of the environment and, eventually, the end users.
Many aspects of the system can be monitored. This guide will group them into a handful of categories:
What is monitoring, what common approaches to it exist and why it is important.
Built-in and external monitoring options
What infrastructure and kernel metrics are important to monitor
What RabbitMQ metrics are available:
Node metrics
Queue metrics
Cluster-wide metrics
How frequently should monitoring checks be performed?
Application-level metrics
How to approach node health checking and why it's more involved than a single CLI command.
Log aggregation
Command-line based observer tool
Log aggregation across all nodes and applications is closely related to monitoring and also mentioned in this guide.
A number of popular tools, both open source and commercial, can be used to monitor RabbitMQ. Prometheus and Grafana are a highly recommended option.
What is Monitoring?
In this guide we define monitoring as a process of capturing the behaviour of a system via health checks and metrics over time. This helps detect anomalies: when the system is unavailable, experiences unusual load, runs out of certain resources or otherwise does not behave within its normal (expected) parameters. Monitoring involves collecting and storing metrics for the long term, which is important not only for anomaly detection but also for root cause analysis, trend detection and capacity planning.
Monitoring systems typically integrate with alerting systems. When an anomaly is detected by a monitoring system an alarm of some sort is typically passed to an alerting system, which notifies interested parties such as the technical operations team.
Having monitoring in place means that important deviations in system behavior, from degraded service in some areas to complete unavailability, are easier to detect and the root cause takes much less time to find. Operating a distributed system is a bit like trying to get out of a forest without a GPS navigator device or compass. It doesn’t matter how brilliant or experienced the person is, having relevant information is very important for a good outcome.
Health Checks’ Role in Monitoring
A health check is the most basic aspect of monitoring. It involves a command or set of commands that collect a few essential metrics of the monitored system over time and test them. For example, whether RabbitMQ’s Erlang VM is running is one such check. The metric in this case is “is an OS process running?”. The normal operating parameters are “the process must be running”. Finally, there is an evaluation step.
Of course, there are more varieties of health checks. Which ones are most appropriate depends on the definition of a “healthy node” used. So, it is a system- and team-specific decision. RabbitMQ CLI tools provide commands that can serve as useful health checks. They will be covered later in this guide.
While health checks are a useful tool, they only provide so much insight into the state of the system: they are by design focused on one or a handful of metrics, usually check a single node and can only reason about the state of that node at a particular moment in time. For a more comprehensive assessment, collect more metrics over time. This makes it possible to detect more types of anomalies, as some can only be identified over longer periods of time. This is usually done by tools known as monitoring tools, of which there is a great variety. This guide covers some tools used for RabbitMQ monitoring.
System and RabbitMQ Metrics
Some metrics are RabbitMQ-specific: they are collected and reported by RabbitMQ nodes. In this guide we refer to them as “RabbitMQ metrics”. Examples include the number of socket descriptors used, total number of enqueued messages or inter-node communication traffic rates. Other metrics are collected and reported by the OS kernel. Such metrics are often called system metrics or infrastructure metrics. System metrics are not specific to RabbitMQ. Examples include CPU utilisation rate, amount of memory used by processes, network packet loss rate, et cetera. Both types are important to track. Individual metrics are not always useful but when analysed together, they can provide a more complete picture of the state of the system. Operators can then form a hypothesis about what’s going on and what needs addressing.
Infrastructure and Kernel Metrics
The first step towards a useful monitoring system is collecting infrastructure and kernel metrics. There are quite a few of them but some are more important than others. Collect the following metrics on all hosts that run RabbitMQ nodes or applications:
CPU stats (user, system, iowait & idle percentages)
Memory usage (used, buffered, cached & free percentages)
Virtual Memory statistics (dirty page flushes, writeback volume)
Disk I/O (operations & amount of data transferred per unit time, time to service operations)
Free disk space on the mount used for the node data directory
File descriptors used by beam.smp vs. max system limit
TCP connections by state (ESTABLISHED, CLOSE_WAIT, TIME_WAIT)
Network throughput (bytes received, bytes sent) & maximum network throughput
Network latency (between all RabbitMQ nodes in a cluster as well as to/from clients)
There is no shortage of existing tools (such as Prometheus or Datadog) that collect infrastructure and kernel metrics, store and visualise them over periods of time.
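Before (or in addition to) such a tool, a few of these metrics can be spot-checked manually with standard Linux utilities. Below is a minimal sketch, assuming a Linux host running a RabbitMQ node; the data directory path is an assumption and should be adjusted to the environment:

# Free disk space on the mount used for the node data directory (assumed path)
df -h /var/lib/rabbitmq

# Memory usage: used, buffered/cached and free
free -m

# CPU, memory and disk I/O at a glance, sampled 5 times at 1 second intervals
vmstat 1 5

# TCP connections grouped by state
ss -tan | awk 'NR > 1 { print $1 }' | sort | uniq -c

# File descriptors used by the runtime (beam.smp) vs. its per-process limit;
# may require running as the same user as the node or with elevated privileges
BEAM_PID=$(pgrep -f beam.smp | head -n 1)
ls /proc/"$BEAM_PID"/fd | wc -l
grep "open files" /proc/"$BEAM_PID"/limits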
Frequency of Monitoring
Many monitoring systems poll their monitored services periodically. How often that’s done varies from tool to tool but usually can be configured by the operator.
Very frequent polling can have negative consequences on the system under monitoring. For example, excessive load balancer checks that open a test TCP connection to a node can lead to a high connection churn. Excessive checks of channels and queues in RabbitMQ will increase its CPU consumption. When there are many (say, 10s of thousands) of them on a node, the difference can be significant.
The recommended metric collection interval is 15 seconds. To collect at an interval closer to real time, use 5 seconds, but not lower. For rate metrics, use a time range that spans 4 metric collection intervals so that it can tolerate race conditions and is resilient to scrape failures.
For production systems a collection interval of 30 or even 60 seconds is recommended. The Prometheus exporter API is designed to be scraped every 15 seconds, including in production systems.
Management UI and External Monitoring Systems
RabbitMQ comes with a management UI and HTTP API which exposes a number of RabbitMQ metrics for nodes, connections, queues, message rates and so on. This is a convenient option for development and in environments where external monitoring is difficult or impossible to introduce.
However, the management UI has a number of limitations:
The monitoring system is intertwined with the system being monitored
A certain amount of overhead
It only stores recent data (think hours, not days or months)
It has a basic user interface
Its design emphasizes ease of use over best possible availability.
Management UI access is controlled via the RabbitMQ permission tags system (or a convention on JWT token scopes)
Long term metric storage and visualisation services such as Prometheus and Grafana or the ELK stack are more suitable options for production systems. They offer:
Decoupling of the monitoring system from the system being monitored
Lower overhead
Long term metric storage
Access to additional related metrics such as Erlang runtime ones
More powerful and customizable user interface
Ease of metric data sharing: both metric state and dashboards
Metric access permissions are not specific to RabbitMQ
Collection and aggregation of node-specific metrics which is more resilient to individual node failures
RabbitMQ provides first class support for Prometheus and Grafana as of 3.8. It is recommended for production environments.
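Below is a minimal sketch of getting started with the Prometheus support, assuming a local node and the default Prometheus endpoint port (15692):

# Enable the plugin that exposes metrics in the Prometheus text format
rabbitmq-plugins enable rabbitmq_prometheus

# Scrape the endpoint manually to verify it responds; Prometheus itself would
# be configured to scrape the same endpoint on every node
curl -s http://127.0.0.1:15692/metrics | head -n 20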
RabbitMQ Metrics
The RabbitMQ management plugin provides an API for accessing RabbitMQ metrics. The plugin will store up to one day’s worth of metric data. Longer term monitoring should be accomplished with an external tool.
This section will cover multiple RabbitMQ-specific aspects of monitoring.
Monitoring of Clusters
When monitoring clusters it is important to understand the guarantees provided by the HTTP API. In a clustered environment every node can serve metric endpoint requests. Cluster-wide metrics can be fetched from any node that can contact its peers. That node will collect and combine data from its peers as needed before producing a response.
Every node also can serve requests to endpoints that provide node-specific metrics for itself as well as other cluster nodes. Like with infrastructure and OS metrics, node-specific metrics must be collected for each node. Monitoring tools can execute HTTP API requests against any node.
As mentioned earlier, inter-node connectivity issues will affect HTTP API behaviour. Choose a random online node for monitoring requests, for example by using a load balancer or round-robin DNS.
Some endpoints perform operations on the target node. Node-local health checks are the most common example. Those are an exception, not the rule.
Cluster-wide Metrics
Cluster-wide metrics provide a high level view of cluster state. Some of them describe interaction between nodes. Examples of such metrics are cluster link traffic and detected network partitions. Others combine metrics across all cluster members. A complete list of connections to all nodes would be one example. Both types are complementary to infrastructure and node metrics.
GET /api/overview is the HTTP API endpoint that returns cluster-wide metrics.
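A hedged example of querying it with curl and jq, assuming a local node with the management plugin enabled on the default port (15672) and default guest credentials:

# Fetch cluster-wide metrics and extract the object totals with jq
curl -s -u guest:guest http://127.0.0.1:15672/api/overview | jq '.object_totals'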
Metric: JSON field name
Cluster name: cluster_name
Cluster-wide message rates: message_stats
Total number of connections: object_totals.connections
Total number of channels: object_totals.channels
Total number of queues: object_totals.queues
Total number of consumers: object_totals.consumers
Total number of messages (ready plus unacknowledged): queue_totals.messages
Number of messages ready for delivery: queue_totals.messages_ready
Number of unacknowledged messages: queue_totals.messages_unacknowledged
Messages published recently: message_stats.publish
Message publish rate: message_stats.publish_details.rate
Messages delivered to consumers recently: message_stats.deliver_get
Message delivery rate: message_stats.deliver_get_details.rate
Other message stats: message_stats.* (see this document)
Node Metrics
There are two HTTP API endpoints that provide access to node-specific metrics:
GET /api/nodes/{node} returns stats for a single node
GET /api/nodes returns stats for all cluster members
The latter endpoint returns an array of objects. Monitoring tools that support (or can support) that as an input should prefer that endpoint since it reduces the number of requests. When that’s not the case, use the former endpoint to retrieve stats for every cluster member in turn. That implies that the monitoring system is aware of the list of cluster members.
Most of the metrics represent point-in-time absolute values. Some represent activity over a recent period of time (for example, GC runs and bytes reclaimed). The latter metrics are most useful when compared to their previous values and historical mean/percentile values.
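A hedged example, again assuming default credentials and the default management port; the node name rabbit@hostname is a placeholder for an actual cluster member name:

# List the names of all cluster members
curl -s -u guest:guest http://127.0.0.1:15672/api/nodes | jq '.[].name'

# Fetch a few node-specific metrics for a single member
curl -s -u guest:guest http://127.0.0.1:15672/api/nodes/rabbit@hostname | jq '{mem_used, fd_used, proc_used, run_queue}'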
Metric: JSON field name
Total amount of memory used: mem_used
Memory usage high watermark: mem_limit
Is a memory alarm in effect?: mem_alarm
Free disk space low watermark: disk_free_limit
Is a disk alarm in effect?: disk_free_alarm
File descriptors available: fd_total
File descriptors used: fd_used
File descriptor open attempts: io_file_handle_open_attempt_count
Sockets available: sockets_total
Sockets used: sockets_used
Message store disk reads: message_stats.disk_reads
Message store disk writes: message_stats.disk_writes
Inter-node communication links: cluster_links
GC runs: gc_num
Bytes reclaimed by GC: gc_bytes_reclaimed
Erlang process limit: proc_total
Erlang processes used: proc_used
Runtime run queue: run_queue
Individual Queue Metrics
Individual queue metrics are made available through the HTTP API via the GET /api/queues/{vhost}/{qname} endpoint.
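A hedged example, assuming default credentials, the default virtual host (“/”, URL-encoded as %2F) and a queue named orders; both the vhost and the queue name are placeholders:

# Fetch metrics for one queue and keep only the message counts
curl -s -u guest:guest "http://127.0.0.1:15672/api/queues/%2F/orders" | jq '{messages, messages_ready, messages_unacknowledged}'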
Metric: JSON field name
Memory: memory
Total number of messages (ready plus unacknowledged): messages
Number of messages ready for delivery: messages_ready
Number of unacknowledged messages: messages_unacknowledged
Messages published recently: message_stats.publish
Message publishing rate: message_stats.publish_details.rate
Messages delivered recently: message_stats.deliver_get
Message delivery rate: message_stats.deliver_get_details.rate
Other message stats: message_stats.* (see this document)
Application-level Metrics
A system that uses messaging is almost always distributed. In such systems it is often not immediately obvious which component is misbehaving. Every single part of the system, including applications, should be monitored and investigated.
Some infrastructure-level and RabbitMQ metrics can show presence of an unusual system behaviour or issue but can’t pin point the root cause. For example, it is easy to tell that a node is running out of disk space but not always easy to tell why. This is where application metrics come in: they can help identify a run-away publisher, a repeatedly failing consumer, a consumer that cannot keep up with the rate, even a downstream service that’s experiencing a slowdown (e.g. a missing index in a database used by the consumers).
Some client libraries and frameworks provide means of registering metrics collectors or collect metrics out of the box. RabbitMQ Java client and Spring AMQP are two examples. With others, developers have to track metrics in their application code.
What metrics applications track can be system-specific but some are relevant to most systems:
Connection opening rate
Channel opening rate
Connection failure (recovery) rate
Publishing rate
Delivery rate
Positive delivery acknowledgement rate
Negative delivery acknowledgement rate
Mean/95th percentile delivery processing latency
Health Checks
A health check is a command that tests whether an aspect of the RabbitMQ service is operating as expected. Health checks are executed periodically by machines or interactively by operators.
There is a series of health checks that can be performed, starting with the most basic ones, which very rarely produce false positives, and progressing to increasingly more comprehensive, intrusive, and opinionated ones that have a higher probability of false positives. In other words, the more comprehensive a health check is, the less conclusive its result will be.
Health checks can verify the state of an individual node (node health checks), or the entire cluster (cluster health checks).
Individual Node Checks
This section covers several examples of node health checks. They are organised in stages. Higher stages perform more comprehensive and opinionated checks. Such checks will have a higher probability of false positives. Some stages have dedicated RabbitMQ CLI tool commands, others can involve extra tools.
While the health checks are ordered, a higher number does not mean a check is “better”.
The health checks can be used selectively and combined. Unless noted otherwise, the checks should follow the same monitoring frequency recommendation as metric collection.
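For example, here is a minimal sketch of a combined check that a monitoring agent could run periodically, assuming rabbitmq-diagnostics is available on the node and a non-zero exit code is treated as a failed check:

#!/usr/bin/env bash
# Combined node health check (stages 1, 3 and 4); stops at the first failure
set -euo pipefail

# Stage 1: the runtime is running and CLI tools can authenticate to it
rabbitmq-diagnostics -q ping

# Stage 3: the RabbitMQ application is running and no local alarms are in effect
rabbitmq-diagnostics -q check_running
rabbitmq-diagnostics -q check_local_alarms

# Stage 4: all enabled listeners accept TCP connections
rabbitmq-diagnostics -q check_port_connectivity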
Stage 1
The most basic check ensures that the runtime is running and (indirectly) that CLI tools can authenticate to it.
Other than the CLI tool authentication part, the probability of false positives can be considered to approach 0, except during upgrades and maintenance windows.
rabbitmq-diagnostics ping performs this check:
rabbitmq-diagnostics -q ping
Stage 2
A slightly more comprehensive check is executing rabbitmq-diagnostics status:
This includes the stage 1 check and also retrieves some essential system information that is useful for other checks and should always be available if RabbitMQ is running on the node (see below).
rabbitmq-diagnostics -q status
This is a common way of sanity checking a node. The probability of false positives can be considered approaching 0 except for upgrades and maintenance windows.
Stage 3
Includes previous checks and also verifies that the RabbitMQ application is running (not stopped with rabbitmqctl stop_app or the Pause Minority partition handling strategy) and there are no resource alarms.
rabbitmq-diagnostics -q alarms
rabbitmq-diagnostics check_running is a check that makes sure that the runtime is running and the RabbitMQ application on it is not stopped or paused.
rabbitmq-diagnostics check_local_alarms checks that there are no local alarms in effect on the node. If there are any, it will exit with a non-zero status.
The two commands in combination deliver the stage 3 check:
rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms
The probability of false positives is low. Systems hovering around their high runtime memory watermark will have a high probability of false positives. During upgrades and maintenance windows it can rise significantly.
Specifically for memory alarms, the GET /api/nodes/{node}/memory HTTP API endpoint can be used for additional checks. In the following example its output is piped to jq:
curl --silent -u guest:guest -X GET http://127.0.0.1:15672/api/nodes/rabbit@hostname/memory | jq
The breakdown information it produces can be reduced down to a single value using jq or similar tools:
curl --silent -u guest:guest -X GET http://127.0.0.1:15672/api/nodes/rabbit@hostname/memory | jq ".memory.total.allocated"
rabbitmq-diagnostics -q memory_breakdown provides access to the same per category data and supports various units:
rabbitmq-diagnostics -q memory_breakdown --unit "MB"
Stage 4
Includes all checks in stage 3 plus a check on all enabled listeners (using a temporary TCP connection).
To inspect all listeners enabled on a node, use rabbitmq-diagnostics listeners:
rabbitmq-diagnostics -q listeners
rabbitmq-diagnostics check_port_connectivity is a command that performs the basic TCP connectivity check mentioned above:
rabbitmq-diagnostics -q check_port_connectivity
The probability of false positives is generally low but during upgrades and maintenance windows it can rise significantly.
Stage 5
Includes all checks in stage 4 plus checks that there are no failed virtual hosts.
rabbitmq-diagnostics check_virtual_hosts is a command that checks whether any virtual host dependencies may have failed. This is done for all virtual hosts.
rabbitmq-diagnostics -q check_virtual_hosts
The probability of false positives is generally low except for systems that are under high CPU load.
Stage 6
Includes all checks in stage 5 plus checks all channel and queue processes on the target node for aliveness.
The combination of rabbitmq-diagnostics check_port_connectivity and rabbitmq-diagnostics node_health_check is the closest alternative to this check currently available.
This combination of commands includes all checks up to and including stage 4. It will also check all channel and queue processes on the target node for aliveness:
rabbitmq-diagnostics -q check_port_connectivity &&
rabbitmq-diagnostics -q node_health_check
The probability of false positives is moderate for systems under above average load or with a large number of queues and channels (starting with 10s of thousands).
Optional Check 1
This check verifies that an expected set of plugins is enabled. It is orthogonal to the primary checks.
rabbitmq-plugins list --enabled is the command that lists enabled plugins on a node:
rabbitmq-plugins -q list --enabled --minimal
A health check that verifies that a specific plugin, rabbitmq_shovel, is enabled and running:
rabbitmq-plugins -q is_enabled rabbitmq_shovel
The probability of false positives is generally low but rises in environments where environment variables that can affect rabbitmq-plugins are overridden.
Command-line Based Observer Tool
rabbitmq-diagnostics observer is a command-line tool similar to top, htop and vmstat. It is a command-line alternative to Erlang’s Observer application. It provides access to many metrics, including detailed state of individual runtime processes:
Runtime version information
CPU and scheduler stats
Memory allocation and usage stats
Top processes by CPU (reductions) and memory usage
Network link stats
Detailed process information such as basic TCP socket stats
and more, in an interactive ncurses-like command line interface with periodic updates.
Here are some screenshots that demonstrate what kind of information the tool provides.
An overview page with key runtime metrics:
rabbitmq-diagnostics observer overview
Memory allocator stats:
rabbitmq-diagnostics memory breakdown
Client connection process metrics:
rabbitmq-diagnostics connection process
Monitoring Tools
The following is an alphabetised list of third-party tools commonly used to collect RabbitMQ metrics. These tools vary in capabilities but usually can collect both infrastructure-level and RabbitMQ metrics.
Note that this list is by no means complete.
Monitoring Tool: Online Resource(s)
AppDynamics: AppDynamics, GitHub
AWS CloudWatch: GitHub
collectd: GitHub
DataDog: DataDog RabbitMQ integration, GitHub
Ganglia: GitHub
Graphite: Tools that work with Graphite
Munin: Munin docs, GitHub
Nagios: GitHub
Nastel AutoPilot: Nastel RabbitMQ Solutions
New Relic: NewRelic Plugins, GitHub
Prometheus: Prometheus guide, GitHub
Zabbix: Zabbix by HTTP, Zabbix by Agent, Blog article
Zenoss: RabbitMQ ZenPack, Instructional Video
Log Aggregation
Logs are also very important in troubleshooting a distributed system. Like metrics, logs can provide important clues that will help identify the root cause. Collect logs from all RabbitMQ nodes as well as all applications (if possible).
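In addition to shipping log files to an aggregation system, a node’s log can be inspected quickly with CLI tools. A hedged example, assuming a modern (3.8+) node and CLI tools that can authenticate to it:

# Print the location of the node's log file(s)
rabbitmq-diagnostics -q log_location

# Print the last portion of the node's log
rabbitmq-diagnostics -q log_tail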
Getting Help and Providing Feedback
If you have questions about the contents of this guide or any other topic related to RabbitMQ, don’t hesitate to ask them on the RabbitMQ mailing list.
Help Us Improve the Docs ❤️
If you’d like to contribute an improvement to the site, its source is available on GitHub. Simply fork the repository and submit a pull request. Thank you!