Specialised plugins for AWS, Hadoop, Big Data & NoSQL technologies, written by a former Clouderan (Cloudera was the first Hadoop Big Data vendor) and ex-Hortonworks consultant.
Supports most major open source NoSQL technologies, Pub-Sub / Message Buses, CI, Web and Linux based infrastructure, including:
check_yum.py - for RHEL / CentOS yum security updates

Supports a wide variety of compatible Enterprise Monitoring systems.
Most enterprise monitoring systems come with only basic generic checks, while this project extends their monitoring capabilities significantly further into advanced infrastructure, the application layer, APIs etc.
If running against services in Cloud or Kubernetes, just target the load balancer address or the Kubernetes Service or Ingress addresses.
Also useful to run on the command line for testing, or in scripts for dependency availability checking. The project also comes with a selection of advanced HAProxy configurations for these technologies to make monitoring and scripting easier for clustered technologies.
Fix requests, suggestions, updates and improvements are most welcome via Github issues or pull requests (in which case GitHub will give you credit and mark you as a contributor to the project :) ).
Hari Sekhon
Cloud & Big Data Contractor, United Kingdom
If updating, run make update rather than just git pull, as you will often need the latest library submodules and probably new upstream libraries too:

make update

Then build with:

make

Execute each program on the command line with --help to see its options.
All plugins and their pre-compiled dependencies can be found ready-to-run on DockerHub. If you have Docker installed, fetch this project like so:
docker pull harisekhon/nagios-plugins
List all plugins:
docker run harisekhon/nagios-plugins
Run any given plugin by suffixing it to the docker run
command:
docker run harisekhon/nagios-plugins <program> <args>
eg.
docker run harisekhon/nagios-plugins check_ssl_cert.pl --help
There are also :centos (:latest), :alpine, :debian, :fedora and :ubuntu tagged docker images available, as well as :python and :perl only images.
You should tag the build locally as :stable or with a date-time stamp and run off that tag, to avoid it being auto-replaced by newer :latest builds, to control updates to suit your schedule, and to prevent random delays from docker run pulling down newer builds from DockerHub.
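For example, pinning could look like this (the stable-<date> tag naming convention is just an assumption chosen here for illustration, not something published on DockerHub):

```shell
# Generate a date-stamped local tag name to pin the image under
# (tag naming convention is an assumption - choose your own)
TAG="stable-$(date +%Y%m%d)"
echo "$TAG"

# Then pin and run off that tag (commands shown for illustration):
# docker pull harisekhon/nagios-plugins:latest
# docker tag harisekhon/nagios-plugins:latest "harisekhon/nagios-plugins:$TAG"
# docker run "harisekhon/nagios-plugins:$TAG" check_ssl_cert.pl --help
```

Running off the pinned tag means docker run never triggers a pull of a newer :latest build.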
curl -L https://git.io/nagios-plugins-bootstrap | sh
or
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
make build zookeeper
Now run any plugin with --help
to find out which switches to use.
Make sure to read Detailed Build Instructions further down for more information.
After the make
build has finished, if you want to make self-contained versions of all the perl scripts with all dependencies included for copying around, run:
make fatpacks
The self-contained scripts will be available in the fatpacks/
directory.
There are over 400 programs in this repo so these are just some of the highlights.
- check_hadoop_*.pl/py - various Apache Hadoop monitoring utilities for HDFS, YARN and MapReduce (both MRv1 & MRv2)
- check_hbase_*.py/pl - various Apache HBase monitoring utilities using Thrift + Stargate APIs: checks Masters / Backup Masters, RegionServers, table availability (exists, is enabled, and has minimum number of column families), number of expected table regions, unassigned table regions, regions stuck in transition, region count balance across RegionServers, requests per sec balance across RegionServers, compaction in progress (by table and by regionserver), number of regions in transition, longest current region migration time, hbck status for any inconsistencies, cell content vs optional regex + thresholds, table write and read back of unique generated values with write/read/delete latency checks against all detected column families, table write spray and read back of unique values across all regions for all column families with write/read/delete latency checks, gather metrics
- check_atlas_*.py - Apache Atlas metadata server instance status, as well as metadata entity checks including entity existence, state=ACTIVE, expected type, expected tags are assigned to entity (eg. PII - important because Ranger ACLs to allow or deny access to data can be assigned based on tags)
- check_ranger_*.pl/.py - Apache Ranger checks
- check_ambari_*.pl - Apache Ambari API checks for Hadoop clusters running the standard open source Hortonworks distribution - checks service status, node(s) status, stale configs, cluster alerts summary, host alerts summary, cluster health report, hdfs rack resilience configured (checks more than 1 rack configured, finds nodes with default rack configured), kerberos enabled, cluster version, service config compatible with stack and cluster

Attivio, Blue Talon, Datameer, Platfora and Zaloni plugins are also available for those proprietary products related to Hadoop.

- check_cloudera_manager_*.pl - Hadoop cluster checks via the Cloudera Manager API - checks states and health of cluster services/roles/nodes, management services, config staleness, Cloudera Enterprise license expiry, Cloudera Manager and CDH cluster versions, utility switches to list clusters/services/roles/nodes as well as list users and their role privileges, fetch a wealth of Hadoop & OS monitoring metrics from Cloudera Manager and compare to thresholds. Disclaimer: I worked for Cloudera, but seriously CM collects an impressive amount of metrics, making check_cloudera_manager_metrics.pl alone a very versatile program from which to create hundreds of checks to flexibly alert on
- check_ambari_*.pl - see the Hadoop Ecosystem section above
- check_mapr*.pl - Hadoop cluster checks via the MapR Control System API - checks services and nodes, MapR-FS space (cluster and per volume), volume states, volume block replication, volume snapshots and mirroring, MapR-FS per disk space utilization on nodes, failed disks, CLDB heartbeats, MapR alarms, MapReduce mode and memory utilization, disk and role balancer metrics. These are noticeably faster than running equivalent maprcli commands (exceptions: disk/role balancer use maprcli)
- check_ibm_biginsights_*.pl - Hadoop cluster checks via the IBM BigInsights Console API - checks services, nodes, agents, BigSheets workbook runs, dfs paths and properties, HDFS space and block replication, BI console version, BI console applications deployed
- check_hiveserver2* - Apache Hive - HiveServer2 LLAP Interactive server status and uptime, peer count, check for a specific peer host fqdn via regex, and a basic beeline connection trivial query test
- check_apache_drill_*.py/.pl - Apache Drill checks
- check_presto_*.py - Presto SQL DB checks
- check_zookeeper.pl - Apache ZooKeeper server checks, multiple layers: "is ok" status, is writable (quorum), operating mode (leader/follower vs standalone), gather statistics
- check_zookeeper_*znode*.pl - ZooKeeper znode checks using the ZK Perl API, useful for HBase, Kafka, SolrCloud, Hadoop HDFS & Yarn HA (ZKFC) and any other ZooKeeper-based service. Very versatile with multiple optional checks including data vs regex, json field extraction, ephemeral status, child znodes, znode last modified age
- check_consul_*.py - Consul API write / read back, arbitrary key-value content checks, cluster leader election, number of cluster peers, service leader election, version
- check_vault_*.py - Hashicorp Vault API checks - health checks: is initialized, is not standby, is vault sealed / unsealed, time skew between Vault server and local, is high availability enabled, is current leader, is leader found, version
- check_aws_s3_file.pl - check for the existence of any arbitrary file on AWS S3, eg. to check backups have happened or _SUCCESS placeholder files are present for a job
- check_aws_access_keys_age.py - checks for AWS access key age greater than N days to delete/rotate old keys as per best practice (optionally only alerts for active keys)
- check_aws_access_keys_disabled.py - checks for AWS disabled access keys that should be removed
- check_aws_api_ping.py - simple yes/no check for AWS API access, can be used to test access key credentials and as a dependency check for all other AWS checks
- check_aws_cloudtrails_enabled.py - checks Cloud Trails have logging enabled, multi-region and logfile validation. Optionally check only a single named cloud trail
- check_aws_cloudtrails_event_selectors.py - checks Cloud Trails have at least one event selector each with management and read+write logging. Optionally check only a single named cloud trail
- check_aws_ec2_instance_count.py - checks the number of running instances with optional range thresholds
- check_aws_ec2_instance_states.py - checks the state of all EC2 instances, outputting totals and checking warning thresholds for each status type
- check_aws_password_policy.py - checks the AWS password policy including minimum length, maximum age, password reuse count, uppercase/lowercase/numbers/symbols and whether users are allowed to change their passwords
- check_aws_root_account.py - checks the AWS root account has MFA enabled and no access keys, as per best practice
- check_aws_user_last_used.py - checks if a given AWS IAM user account has been used within the last N days (eg. if the root account was recently used this may indicate a security breach, or is at the very least against best practice)
- check_aws_users_unused.py - detects old AWS IAM user accounts that haven't been used in the last N days, neither passwords nor access keys, and should probably be removed
- check_aws_users_password_last_used.py - detects AWS IAM user accounts that haven't had their passwords used in N days and should probably be removed
- check_aws_users_mfa_enabled.py - checks all AWS user accounts with passwords have MFA enabled
- check_docker_*.py - Docker API checks including API ping, counts of running / paused / stopped / total containers with thresholds, specific container status by name or id, images count with thresholds, specific image:tag availability including size and checksum, counts of networks / volumes with thresholds, docker engine version
- check_docker_swarm_*.py - Docker Swarm API checks including is swarm enabled, swarm node status, is the node a swarm manager, swarm service status including number of live replicas / tasks and if the service was updated recently, counts of services, swarm manager and worker nodes with thresholds, swarm errors, swarm version
- check_mesos_*.pl - Mesos master health API, master & slaves state information including leader and versions, activated & deactivated slaves, number of Chronos jobs, master & slave metrics. Warning: Mesos & Mesosphere DC/OS is legacy semi-proprietary - major momentum has shifted to the open source Kubernetes project
- check_kubernetes_*.py - Kubernetes API health and version

If running docker checks from within the nagios plugins docker image then you will need to expose the socket within the container, like so:

docker run -v /var/run/docker.sock:/var/run/docker.sock harisekhon/nagios-plugins check_docker_container_status.py -H unix:///var/run/docker.sock --container myContainer

OK: Docker container 'myContainer' status = 'running', started at '2020-06-03T14:03:09.78303932Z' | query_time=0.0038s

See also the DockerHub build status nagios plugin further down in the CI section.

- check_elasticsearch_*.pl/.py - Elasticsearch cluster state, shards, replicas, number of nodes & data nodes online, shard and disk % balance between nodes, single node ok, specific node found in cluster state, slow tasks, pending tasks, elasticsearch / lucene versions, per index existence / shards / replicas / settings / age, stats per cluster / index / node, X-Pack license expiry and features enabled
- check_logstash_*.py - Logstash status, uptime, hot threads, plugins, version, number of pipelines online, specific pipeline online and optionally its number of workers, if its dead letter queue is enabled, outputs pipeline batch size and delay
- check_solr*.pl - checks for Apache Solr and SolrCloud including API write/read/delete, arbitrary Solr queries vs num matching documents, API ping, Solr Core Heap / Index Size / Number of Docs for a given Solr Collection, and thresholds in ms against all Solr API operations as well as perfdata for graphing, as well as SolrCloud ZooKeeper content checks for collection shards and replicas states, number of live nodes in SolrCloud cluster, overseer, SolrCloud config and Solr metrics
- check_cassandra_*.pl / check_datastax_opscenter_*.pl - Apache Cassandra and DataStax OpsCenter monitoring, including Cassandra cluster nodes, token balance, space, heap, keyspace replication settings, alerts, backups, best practice rule checks, DSE hadoop analytics service status, and both nodetool and DataStax OpsCenter collected metrics
- check_memcached_*.pl - Memcached API writes/reads/deletes with timings, check a specific key's value against regex or value range, number of current connections, gather statistics
- check_couchdb_*.py - Apache CouchDB API checks including server status, database exists, doc and deleted doc counts, data size, compaction running, version
- check_riak_*.pl - Riak API writes/reads/deletes with timings, check a specific key's value against regex or value range, check all riak diagnostics, check node states, check all nodes agree on ring status, gather statistics, alert on any single stat
- check_redis_*.pl - Redis API writes/reads/deletes with timings, check a specific key's value against regex or value range, replication slaves I/O, replicated writes (write on master -> read from slave), publish/subscribe, connected clients, validate redis.conf against running server to check deployments or remote compliance checks, gather statistics, alert on any single stat
- check_mysql_query.pl - flexible free-form MySQL SQL queries - can check almost anything - obsoleted a dozen custom MySQL plugins and prevented writing many more. Tested against many versions of MySQL and MariaDB. You may also be interested in Percona's plugins
- check_mysql_config.pl - detect differences between your /etc/my.cnf and running MySQL config to catch DBAs making changes to running databases without saving to /etc/my.cnf or backporting to Puppet. Can also be used to remotely validate configuration compliance against a known good baseline. Tested against many versions of MySQL and MariaDB

The following programs check message brokers end-to-end via their APIs, by acting as both a producer and a consumer and checking that a unique generated message passes through the broker cluster and is received by the consumer at the other side successfully. They report the publish, consume and total timings taken, against which thresholds can be applied, and these are also available as perfdata for graphing.

- check_kafka.py/.pl - Kafka brokers API write & read back with configurable topics/partition and producer behaviour for acks, sleep, retries, backoff; can also list topics and partitions. See the Kafka Scala Nagios Plugin for a version with Kerberos support
- check_redis_publish_subscribe.pl - Redis publish-subscribe API write & read back with configurable subscriber wait. See other Redis checks under NoSQL
- check_rabbitmq*.py - RabbitMQ brokers AMQP API write & read back with configurable vhost, exchange, exchange type, queue, routing key, durability, RabbitMQ 'confirms' protocol extension & standard AMQP transactions support. Checks via the RabbitMQ management API include aliveness queue health test, built-in health checks, cluster name, vhost, exchange with optional validation of exchange type (direct, fanout, headers, topic) and durability (true/false), user auth and permissions tags, stats db event queue
- check_jenkins_*.py - Jenkins checks include job build status, color, health report score, build time, age since last completed build, if job is set to buildable, job count total or per view, number of running builds, queued builds, executors, node count, offline nodes, jenkins mode, is security enabled, if a given node is online and its number of executors, if a given plugin is enabled and if there are available plugin updates individually or overall, with perfdata for relevant metrics like build time, jobs/nodes/executors/plugins/plugin updates, running/queued build counts and query timings
- check_travis_ci_last_build.py - Travis CI repo's last build status - includes showing build number, build duration with optional thresholds, start/stop date & time, if there are currently any builds in progress, and perfdata for graphing last build time and number of builds in progress. Verbose mode gives the commit details as well, such as commit id and message
- check_gocd_*.py - GoCD server and agent health, pipeline, stage and job level status checks, and number of agents online, enabled, and in different states
- check_dockerhub_repo_build_status.py - DockerHub Automated Build status check for a given DockerHub repository's latest build or latest build for a given tag. Returns status and tag of last build along with perfdata for graphing build latency (time between build creation and completion) and query timing. Optionally also returns in verbose mode what triggered the build (webhook, revision control change, API / website trigger), created and last updated date timestamps and the build URL to investigate
- check_selenium_*.py - checks Selenium Grid Hub / Node or Selenoid status is ready, queue size, available node counts for all or specific browser types, and that browsers are available and fully functional by requesting a browser (eg. chrome or firefox), querying a url and optionally performing a content or regex match on the returned web page
- check_git_* - checks a Git checkout is valid, up to date with upstream remote/origin, has no uncommitted changes (staged or unstaged), no untracked files, isn't dirty, is in the right branch, isn't remote, isn't detached, is / isn't bare. Useful for monitoring deployment servers running off Git checkouts (a common scenario for things like PuppetMasters, Ansible AWX / Tower etc) to ensure your automation is deploying the right thing and that any ad-hoc modifications and tests have been properly backported to Git
- check_ssl_cert.pl - SSL certificate checker - checks certificate expiry (days), validates domain, chain of trust, SNI, wildcard domains, SAN certs with multi-domain support. Chain of Trust support is important when building your JKS or certificate bundles to include intermediate certs, otherwise certain mobile devices don't validate the SSL even though it may work in your desktop browser
- check_dns.pl - advanced DNS query checker supporting NS records for your public domain name, MX records for your mail servers, SOA, SRV, TXT as well as A and PTR records. Can optionally specify --expected literal or --regex results (which is anchored for security) for strict validation to ensure all records returned are expected and authorized. The record, type and result(s) are output along with the DNS query timing perfdata for graphing DNS performance
- check_whois.pl - check domain expiry days left and that registration details match expected
- check_pingdom_*.py - checks Pingdom statuses, response times, last checked times, SMS credits
- check_puppet.rb - thorough - find out when Puppet stops properly applying manifests, if it's in the right environment, if it's --disabled, right puppet version etc
- check_disk_write.pl - canary write test, catches partitions getting auto-remounted read-only by Linux when it detects underlying storage I/O errors (often caused by malfunctioning block devices, raid arrays, failing disks); see also check_linux_disk_mounts_read_only.py
- check_linux_*.pl/.py - checks RAM used, CPU context switches, system file descriptors, interface errors / promiscuous mode / duplex / speed / MTU / stats, load normalized per CPU core (more useful than the default check_load plugin, which would need different configs for heterogeneous hardware), timezone settings, users / groups present (eg. PAM/LDAP/SSSD integration is working), duplicate UID/GIDs (helps detect rogue uid 0 accounts and the more common LDAP vs local id range overlap misconfigurations), check groups.allow contains only specific groups, huge pages are disabled (often recommended for Big Data and NoSQL systems such as Hadoop and MongoDB), any read-only filesystems (more of a sweep than check_disk_write.pl, with include/exclude regex)
- check_ssh_login.pl - performs a full SSH login with username & password, good for testing that your Dell DRAC / HP iLO infrastructure is properly secured and accessible. Also works for your Linux servers and even Mac OSX
- older/check_*raid.py - RAID controller / array checks for 3ware, LSI MegaRaid / Dell PERC controllers (they're rebranded from LSI), and Linux software MD Raid. I also recommend the widely used Dell OpenManage Check
- check_*_version*.pl/.py - checks running versions of software - originally written to detect version inconsistencies across large clusters of servers and failed/partial upgrades across large automated infrastructures. Now also used to check Docker tagged images contain the right versions of the expected software (which double validates that other programs in this and other github repos have actually been tested against all the expected versions). check_cluster_version.pl can be used to tie together versions returned from many different servers (by passing it their outputs via Nagios macros) to ensure a cluster is all running the same version of software, even if you don't enforce a particular --expected version on individual systems
- check_yum.py/.pl - widely used yum security updates checker for RHEL 5 - 8 systems, dating back to 2008. You'll find forks of this around, including on NagiosExchange, but please re-unify on this central updated version. Also has a Perl version, which is a newer straight port with nicer, more concise code and better library backing, as well as a configurable self-timeout. For those running Debian-based systems like Ubuntu, see check_apt from the nagios-plugins-basic package

... and there are many more plugins than we have space to list here, have a browse!
This code base is under active development and there are many more cool plugins pending import.
See also other 3rd Party Nagios Plugins you might be interested in.
These allow you to use any standard nagios plugin with other non-Nagios style monitoring systems by prefixing the nagios plugin command with these programs, which will execute the plugin and translate its output:

- adapter_csv.py - executes and translates output from any standard nagios plugin to CSV format
- adapter_check_mk.py - executes and translates output from any standard nagios plugin to Check_MK local plugin format
- adapter_geneos.py - executes and translates output from any standard nagios plugin to Geneos CSV format

All plugins come with --help, which lists all options as well as giving a program description, often including a detailed account of what is checked in the code. You can also find example commands in the tests/ directory.
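Conceptually, the adapter pattern works like this (a minimal sketch using a mock plugin standing in for a real check - this is not the adapters' actual implementation):

```shell
# Mock nagios plugin: status line with perfdata after the '|', exit code = state
mock_plugin() { echo "OK: 3 nodes online | nodes=3"; return 0; }

# Run the wrapped command, capture its output and exit code
output=$(mock_plugin); status=$?

# Split the standard nagios output into message and perfdata, re-emit as CSV
message=${output%% |*}
perfdata=${output#*| }
echo "${status},${message},${perfdata}"
```

This prints `0,OK: 3 nodes online,nodes=3` - the real adapters additionally handle error states, multiple perfdata items and the different target formats.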
Environment variables are supported for convenience and also to hide credentials from being exposed in the process list, eg. $PASSWORD. These are indicated in the --help descriptions in brackets next to each option and often have more specific overrides with higher precedence, eg. $ELASTICSEARCH_HOST takes priority over $HOST, $REDIS_PASSWORD takes priority over $PASSWORD etc.
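The precedence follows the usual shell fallback pattern, sketched here (hostnames are illustrative):

```shell
# Generic variable and a more specific override
HOST=generic.example.com
ELASTICSEARCH_HOST=es1.example.com

# The more specific variable wins when both are set
host="${ELASTICSEARCH_HOST:-$HOST}"
echo "$host"    # es1.example.com
```

Unsetting the specific variable makes the plugin fall back to the generic $HOST.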
Make sure to run the automated build or install the required Perl CPAN / Python PyPI modules first before calling --help
.
Perl HTTP Rest-based plugins have implicit Kerberos support via LWP as long as the LWP::Authen::Negotiate CPAN module is installed (part of the automated make
build). This will look for a valid TGT in the environment ($KRB5CCNAME
) and if found will use it for SPNego.
Most Python HTTP Rest-based plugins for technologies / APIs with authentication have a --kerberos switch which supersedes --username/--password
and can be used on technologies that support SPNego Kerberos authentication. It will either use the TGT from the environment cache ($KRB5CCNAME
) or if $KRB5_CLIENT_KTNAME
is present will kinit from the keytab specified in $KRB5_CLIENT_KTNAME
to a unique path per plugin to prevent credential cache clashes if needing to use different credentials for different technologies.
Automating Kerberos tickets: if running a plugin by hand for testing, you can initiate your TGT via the kinit command before running the plugin. If running automatically in a monitoring server, use the standard k5start kerberos utility as a service to auto-initiate and auto-renew your TGT, so that your plugins always authenticate to Kerberized services with a current valid TGT.
To use different kerberos credentials per plugin you can export KRB5CCNAME=/different/path
before executing each plugin, and have multiple k5start
instances maintaining each one.
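A sketch of that per-plugin credential cache pattern (the cache path, keytab and principal below are illustrative assumptions, not values this project mandates):

```shell
# Point this plugin's kerberos credential cache at its own dedicated path
export KRB5CCNAME=/tmp/krb5cc_hbase_checks
echo "$KRB5CCNAME"

# A k5start service would keep the TGT fresh in that cache (illustrative):
# k5start -b -K 10 -f /etc/keytabs/nagios.keytab -k "$KRB5CCNAME" -u nagios
# ./check_hbase_master.py --host hbase-master1
```

Repeat with a different KRB5CCNAME and its own k5start instance for each set of credentials.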
Testing high availability and multi-master setups is best done through a load balancer.
HAProxy configurations are provided under the haproxy-configs/ directory for many of the technologies tested in this project.
The following is pulled from my DevOps Python Tools repo (currently one of my favourite repos):
find_active_server.py
- returns the first available healthy server or determines the active master in high availability setups. Configurable tests include socket, http, https, ping, url with optional regex content match, and it is multi-threaded for speed. Useful for pre-determining a server to be passed to tools that only take a single --host argument but for which the technology has later added multi-master support or active-standby masters (eg. Hadoop, HBase), or where you want to query cluster-wide information available from any online peer (eg. Elasticsearch, RabbitMQ clusters). This is downloaded from my DevOps Python Tools repo as part of the build and placed at the top level. It has the ability to extend any nagios plugin to support multiple hosts in a generic way if you don't have a front end load balancer to run the check through. Example usage:

./check_elasticsearch_cluster_status.pl --host $(./find_active_server.py --http --port 9200 node1 node2 node3)
There are now also simplified subclassed programs so you don't have to figure out the switches for more complex services like Hadoop and HBase, just provide hosts as simple arguments and they'll return the current active master!
find_active_hadoop_namenode.py
find_active_hadoop_yarn_resource_manager.py
find_active_hbase_master.py
find_active_hbase_thrift.py
find_active_hbase_stargate.py
find_active_cassandra.py
find_active_apache_drill.py
find_active_impala.py
find_active_presto_coordinator.py
find_active_kubernetes_api.py
find_active_oozie.py
find_active_solrcloud.py
find_active_elasticsearch.py
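These slot straight into command substitution; for example (hostnames are illustrative, and the stand-in below replaces the real lookup so the pattern is visible):

```shell
# Stand-in for the real lookup, which would be:
#   active=$(./find_active_hadoop_namenode.py master1 master2)
active="master1"

# Pass the resolved active master to any single-host plugin (illustrative):
# ./check_hadoop_namenode_... --host "$active"
echo "$active"
```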
These are especially useful for ad-hoc scripting or quick command line tests.
Strict validations include host/domain/FQDNs using TLDs which are populated from the official IANA list. This is done via the Lib and PyLib submodules for Perl and Python plugins respectively - see those repos for details on configuring to permit custom TLDs like .local
or .intranet
(both already supported by default as they're quite common customizations).
Most of the plugins I've read on Nagios Exchange and Monitoring Exchange (now Icinga Exchange) over the last decade have not been of the quality required to run in the production environments I've worked in (if you've ever seen plugins written in Bash with little validation, or mere 200-300 line plugins without robust input/output validation and error handling, resulting in "UNKNOWN: (null)" when something goes wrong - right when you need them - then you know what I mean). That prompted me to write my own plugins whenever I had an idea or requirement.
That naturally evolved into this, a relatively advanced collection of Nagios Plugins, especially as I began standardizing and reusing code between plugins and improving the quality of all those plugins while doing so.
- --verbose levels & --debug mode
- --warning/--critical thresholds with range support, in the form min:max (@ prefix inverts to expect the value outside of this range)
- $USERNAME and $PASSWORD environment variables, as well as more specific overrides (eg. $MYSQL_USERNAME, $REDIS_PASSWORD), to give administrators the option to avoid leaking --password credentials in the process list for all users to see

Several plugins have been merged together and replaced with symlinks to the unified plugins bookmarking their areas of functionality, similar to some plugins from the standard nagios plugins collection.
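The min:max range thresholds mentioned above evaluate roughly like this (an integer-only sketch of the documented semantics, not the libraries' actual code):

```shell
# breaches VALUE SPEC -> exit 0 if VALUE breaches the threshold SPEC.
# SPEC is min:max; values outside the range breach. An '@' prefix inverts
# the logic so values inside the range breach instead.
breaches() {
  value=$1 spec=$2 invert=0
  case "$spec" in @*) invert=1; spec=${spec#@} ;; esac
  lo=${spec%%:*} hi=${spec##*:}
  [ -n "$lo" ] || lo=-2147483648   # empty min means unbounded below
  [ -n "$hi" ] || hi=2147483647    # empty max means unbounded above
  if [ "$value" -ge "$lo" ] && [ "$value" -le "$hi" ]; then inside=1; else inside=0; fi
  [ "$inside" -eq "$invert" ]
}

breaches 25 10:20 && echo "25 breaches 10:20"      # outside the range
breaches 15 @10:20 && echo "15 breaches @10:20"    # inside an inverted range
```

The real threshold logic in the underlying libraries also handles floats and additional shorthand forms.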
Some plugins such as those relating to Redis and Couchbase also have different modes and expose different options when called as different program names, so those symlinks are not just cosmetic. An example of this is write replication, which exposes extra options to read from a slave after writing to the master to check that replication is 100% working.
Perl ePN optimization is not supported at this time as I was running 13,000 production checks per Nagios server years ago (circa 2010) without ePN optimization - it's not worth the effort and isn't available in any of the other languages anyway.
Python plugins are all pre-byte-compiled as part of the automated build.
Modern scaling should be done using distributed computing, open source examples include Icinga2 and Shinken. Shinken's documentation cites an average 4 core server @ 3Ghz as supporting 150,000 checks per 5 minutes, which aligns with my own experience with Nagios Core. Using the latest hardware and proper setup could probably result in even higher scale before having to move to distributed monitoring architecture.
Having written a large number of Nagios Plugins over the last 10 years in a variety of languages (Python, Perl, Ruby, Bash, VBS), I abstracted out the common components of a good robust Nagios Plugin program into libraries of reusable components that I leverage very heavily in all my modern plugins and other programs found under my other repos here on GitHub. These are now mostly written in Perl or Python using these custom libraries, for reasons of both concise rapid development and speed of execution.
These libraries enable writing much more thoroughly validated, production quality code, achieving in a quick 200 lines of Perl or Python what might otherwise take 2000-3000 lines to do properly (including some of the more complicated supporting code such as robust validation functions with long complex regexes and unit tests, configurable self-timeouts, warning/critical threshold range logic, common options and generated usage, multiple levels of verbosity, debug mode etc). This dramatically reduces the time to write high quality plugins down to mere hours, and at the same time vastly improves the quality of the final code through code reuse, as well as benefitting from generic future improvements to the underlying libraries.
This gives each plugin the misleading appearance of being very short, because only the core logic of what you're trying to achieve is visible in the plugin itself, mostly as composition of utility functions. The error handling is often done inside the custom libraries too, so it may appear that a simple one-line field extraction or 'curl()' or 'open_file()' utility function call has no error handling around it at all, but under the hood the error handling happens inside the function inside a library - the same goes for the HBase Thrift API connection, Redis API connection etc. The client code seen in the top level plugins can assume success, since the framework would otherwise have errored out with a specific message such as "connection refused". There is a lot of buried error checking code and a lot of utility functions, so many operations become one-liners at the top level instead of huge programs that are hard to read and maintain.
I've tried to keep the quality here high, so a lot of plugins I've written over the years haven't made it into this collection, and there are a lot still pending import. A couple of others, check_nsca.pl and check_syslog-ng_stats.pl, are in the more/ directory until I get round to reintegrating and testing them with my current framework to modernize them, although they should still work with the tiny utils.pm from the standard nagios plugins collection.
I'm aware of Nagios::Plugin but my libraries have a lot more utility functions and I've written them to be highly convenient to develop with.
Some older plugins may not adhere to all of the criteria above, so most have been filed away under the older/ directory (they were used by people in production, so I didn't want to remove them entirely). Older plugins also indicate that I haven't run or updated them in a few years, so they're in basic maintenance mode and may require minor tweaks or updates.
If you're new, remember to check out the older/ directory for more plugins that are less current but that you might still find useful, such as RAID checks for Linux MD RAID and 3ware / LSI MegaRaid / Dell PERC RAID controllers (Dell PERCs are actually rebranded LSI MegaRaid so you can use the same check; I also recommend the widely used Dell OpenManage check).
Feedback, Feature Requests, Improvements and Patches are welcome.
Patches are accepted in the form of Github pull requests, for which you will receive attribution automatically as Github tracks these merges.
Please raise a GitHub Issue ticket if you need updates, bug fixes or new features. GitHub pull requests are more than welcome.
There are a lot of programs covering a lot of different technologies in this project, so remember to check the software versions each program was written / tested against (documented in --help for each program, and also found near the top of the source code in each program). Newer versions of software change a lot these days, especially in the Big Data & NoSQL space, so plugins may require updates for newer versions.
Please make sure you have run make update first to pull the latest updates, including library submodules, and build the latest CPAN / PyPI module dependencies (see Quick Setup above).
Make sure you run the code by hand on the command line with -v -v -v for additional debug output and paste the full output into the issue ticket. If you want to anonymize your hostnames / IP addresses etc, you may use the scrub.pl tool found in my DevOps Perl Tools repo.
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
make build
Some plugins like check_yum.py can be copied around independently, but most newer, more sophisticated plugins require the co-located libraries I've written, so you should git clone && make on each machine you deploy this code to, or just use the pre-built Docker image which has all plugins and dependencies inside.
You may need to install the GNU make system package if the make command isn't found (yum install make / apt-get install make).
To build just the Perl or Python dependencies for the project you can do make perl or make python.
If you only want to use one plugin, you can do make perl-libs or make python-libs and then just install the one or two dependencies specific to that plugin, if it has any, which is much quicker than building the whole project.
make builds will install yum rpm / apt deb dependencies automatically, as well as a load of Perl CPAN & Python PyPI libraries. To pick and choose what to install, follow the Manual Build section instead.
This has become quite a large project and will take at least 10 minutes to build. The build is automated and tested on RHEL / CentOS 5/6/7 & Debian / Ubuntu systems.
Make sure /usr/local/bin is in your $PATH when running make, as otherwise it'll fail to find cpanm.
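A quick generic POSIX shell check you can run beforehand (this snippet is mine, not part of the project's tooling; /usr/local/bin is where cpanm typically lands):

```shell
# ensure /usr/local/bin is in $PATH before running make, appending it if missing
case ":$PATH:" in
    *:/usr/local/bin:*) echo "PATH already contains /usr/local/bin" ;;
    *) export PATH="$PATH:/usr/local/bin"; echo "appended /usr/local/bin to PATH" ;;
esac
```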
The automated build will use 'sudo' to install required Perl CPAN & Python PyPI libraries to the system unless running as root or it detects being inside Perlbrew or VirtualEnv. If you want to install some of the common Perl / Python libraries such as Net::DNS and LWP::* using your OS packages instead of installing from CPAN / PyPI then follow the Manual Build section instead.
Download the Nagios Plugins, Lib and Pylib git repos as zip files:
https://github.com/HariSekhon/nagios-plugins/archive/master.zip
https://github.com/HariSekhon/lib/archive/master.zip
https://github.com/HariSekhon/pylib/archive/master.zip
Unzip all three and move Lib and Pylib to the lib and pylib folders under nagios-plugins:
unzip nagios-plugins-master.zip
unzip pylib-master.zip
unzip lib-master.zip
mv -v nagios-plugins-master nagios-plugins
mv -v pylib-master pylib
mv -v lib-master lib
mv -vf pylib nagios-plugins/
mv -vf lib nagios-plugins/
Proceed to install CPAN and PyPI modules for whichever programs you want to use using your usual procedure - usually an internal mirror or proxy server to CPAN and PyPI, or rpms / debs (some libraries are packaged by Linux distributions).
All CPAN modules are listed in setup/cpan-requirements*.txt
and lib/setup/cpan-requirements*.txt
.
All PyPI modules are listed in requirements.txt
and pylib/requirements.txt
.
Internal PyPI Mirror example (JFrog Artifactory, CloudRepo or similar):
sudo pip install --index-url https://host.domain.com/api/pypi/repo/simple --trusted-host host.domain.com -r requirements.txt -r pylib/requirements.txt
Proxy example:
sudo pip install --proxy hari:mypassword@proxy-host:8080 -r requirements.txt -r pylib/requirements.txt
The automated build also works on Mac OS X, but you will need to download and install the Apple XCode development libraries. I also recommend you get HomeBrew to install other useful tools and libraries you may need, like OpenSSL, Snappy and MySQL for their development headers, and tools such as wget (these packages are automatically installed if HomeBrew is installed on Mac OS X):
brew install openssl snappy mysql wget
To avoid the following cpan install error:
fatal error: 'openssl/opensslv.h' file not found
#include <openssl/opensslv.h>
specify the path to the OpenSSL lib installed by HomeBrew:
sudo OPENSSL_INCLUDE=/usr/local/opt/openssl/include OPENSSL_LIB=/usr/local/opt/openssl/lib cpan Crypt::SSLeay
Ensure the DBI CPAN module is installed, as well as the openssl HomeBrew package.
You may need to add the OpenSSL library path explicitly to mysql_config to avoid the following error:
Checking if libs are available for compiling...
Can't link/include C library 'ssl', 'crypto', aborting.
Once you're sure that OpenSSL is installed via HomeBrew (done as part of the automated build), find mysql_config
and edit the line
libs="-L$pkglibdir"
to
libs="-L$pkglibdir -L/usr/local/opt/openssl/lib"
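The edit can also be scripted with sed. The sketch below demonstrates the substitution on a stand-in file so you can verify the result before touching your real mysql_config (the /usr/local/opt/openssl path assumes a default HomeBrew install):

```shell
# demonstrate the libs= edit on a stand-in file; once happy, run the same sed
# against the real mysql_config (locate it with: which mysql_config)
printf 'libs="-L$pkglibdir"\n' > /tmp/mysql_config_demo
sed -i.bak 's|^libs="-L$pkglibdir"|libs="-L$pkglibdir -L/usr/local/opt/openssl/lib"|' /tmp/mysql_config_demo
cat /tmp/mysql_config_demo
```

The -i.bak suffix form works with both GNU and BSD (macOS) sed and keeps a backup of the original file.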
You may get errors trying to install to Python library paths even as root on newer versions of Mac. Sometimes this is caused by pip 10 vs pip 9, and downgrading will work around it:
sudo pip install --upgrade pip==9.0.1
make
sudo pip install --upgrade pip
make
If you want to use any of the ZooKeeper znode-based checks (eg. for HBase / SolrCloud etc) based on check_zookeeper_znode.pl or any of the check_solrcloud_*_zookeeper.pl programs, you will also need to install the ZooKeeper libraries, which have a separate build target due to needing to install the C bindings as well as the library itself on the local system. This explicitly fetches the tested ZooKeeper 3.4.8; you'd have to update the ZOOKEEPER_VERSION variable in the Makefile if you want a different version.
make zookeeper
This downloads, builds and installs the ZooKeeper C bindings which Net::ZooKeeper needs. To clean up the working directory afterwards run:
make clean-zookeeper
Fetch my library repos, which are included as submodules (they're shared between this and other repos containing various programs I've written over the years):
git clone https://github.com/harisekhon/nagios-plugins
cd nagios-plugins
git submodule init
git submodule update
Then install the Perl CPAN and Python PyPI modules as listed in the next sections.
For Mac OS X see the Mac OS X section from Automated Build instructions.
If installing the Perl CPAN or Python PyPI modules via your package manager or by hand instead of via the Automated Build From Source section, then read the requirements.txt and setup/cpan-requirements.txt files for the lists of Python PyPI and Perl CPAN modules respectively that you need to install.
You can install the full list of CPAN modules using this command:
sudo cpan $(sed 's/#.*//' setup/cpan-requirements*.txt lib/setup/cpan-requirements*.txt)
and install the full list of PyPI modules using this command:
sudo pip install -r requirements.txt -r pylib/requirements.txt
check_zookeeper_znode.pl
check_zookeeper_child_znodes.pl
check_hbase_*_znode.pl
check_solrcloud_*_zookeeper.pl
The above listed programs require the Net::ZooKeeper Perl CPAN module, but this is not a simple cpan Net::ZooKeeper - that will fail. Follow these instructions precisely or debug at your own peril:
# install C client library
export ZOOKEEPER_VERSION=3.4.8
[ -f zookeeper-$ZOOKEEPER_VERSION.tar.gz ] || wget -O zookeeper-$ZOOKEEPER_VERSION.tar.gz http://www.mirrorservice.org/sites/ftp.apache.org/zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz
tar zxvf zookeeper-$ZOOKEEPER_VERSION.tar.gz
cd zookeeper-$ZOOKEEPER_VERSION/src/c
./configure
make
sudo make install
# now install Perl module using C library with the correct linking
cd ../contrib/zkperl
perl Makefile.PL --zookeeper-include=/usr/local/include/zookeeper --zookeeper-lib=/usr/local/lib
LD_RUN_PATH=/usr/local/lib make
sudo make install
After this, check it's properly installed by doing perl -e "use Net::ZooKeeper", which should return no errors if successful.
Some plugins, especially ones under the older/ directory such as those that check 3ware / LSI RAID controllers, SVN, VNC etc, require external binaries to work, but the plugins will tell you if they are missing. Please see the respective vendor websites for 3ware, LSI etc to fetch those binaries and then re-run those plugins.
The check_puppet.rb plugin uses Puppet's native Ruby libraries to parse the Puppet config and as such will only run where Puppet is properly installed.
The check_logserver.py "Syslog to MySQL" plugin needs the Python MySQL module to be installed, which you should be able to find via your package manager. If using RHEL / CentOS do:
sudo yum install MySQL-python
or try installing via pip, but this requires MySQL to be installed locally in order to build the Python egg:
sudo easy_install pip
sudo pip install MySQL-python
Run make update. This will git pull and then git submodule update, which is necessary to pick up corresponding library updates.
If you update often and want to just quickly git pull + submodule update but skip rebuilding all those dependencies each time, then run make update-no-recompile (this will miss new library dependencies; do a full make update if you encounter issues).
There are full multi-level suites of tests against this repository and its libraries.
Continuous Integration is run on this repo with tests for success and failure scenarios:
--help generation etc.
To trigger all tests run:
make test
which will start with the underlying libraries, then move on to top level integration tests and finally functional tests using docker containers if docker is available.
If you encounter the following error when trying to use check_kafka.pl
:
Can't locate auto/NetAddr/IP/InetBase/AF_INET6.al in @INC
This is an upstream bug related to the autoloader, which you can work around by editing NetAddr/IP/InetBase.pm and adding the following line explicitly near the top, just after package NetAddr::IP::InetBase;:
use Socket;
On Linux this is often at /usr/local/lib64/perl5/NetAddr/IP/InetBase.pm and on Mac at /System/Library/Perl/Extras/<version>/NetAddr/IP/InetBase.pm. You can find the location using perl_find_library_path.pl NetAddr::IP::InetBase or perl_find_library_path.sh NetAddr::IP::InetBase from the DevOps Perl Tools and DevOps Bash Tools repos.
You may also need to install Socket6 from CPAN.
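If you prefer to script the edit, a sed append like the sketch below works. It is demonstrated on a stand-in file since the real path varies by system; point it at the InetBase.pm path found with the tools above:

```shell
# insert 'use Socket;' immediately after the package declaration
# (stand-in file for demonstration - substitute your real InetBase.pm path)
printf 'package NetAddr::IP::InetBase;\nuse strict;\n' > /tmp/InetBase_demo.pm
sed -i.bak '/^package NetAddr::IP::InetBase;/a\
use Socket;' /tmp/InetBase_demo.pm
cat /tmp/InetBase_demo.pm
```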
This fix is now fully automated in the Make build by patching the NetAddr/IP/InetBase.pm file and always including Socket6 in the dependencies (UPDATE: this fix is broken on recent versions of Mac due to the addition of System Integrity Protection, which doesn't allow editing the system files even with sudo to root; a workaround is to install the libraries to a local Perlbrew and fix there).
Alternatively you can try the Python version, check_kafka.py, which works in a similar fashion.
The MongoDB Perl driver from CPAN doesn't seem to compile properly on RHEL5-based systems. A PyMongo rewrite was considered, but the extensive library of functions results in better code quality for the Perl plugins; it's easier to just upgrade your OS to RHEL6.
The MongoDB Perl driver does compile on RHEL6 but there is a small bug in the Readonly CPAN module that the MongoDB CPAN module uses. When it tries to call Readonly::XS, a MAGIC_COOKIE mismatch results in the following error:
Readonly::XS is not a standalone module. You should not use it directly. at /usr/local/lib64/perl5/Readonly/XS.pm line 34.
The workaround is to edit the Readonly module and comment out the eval 'use Readonly::XS'
on line 33 of the Readonly module.
This is located here on Linux:
/usr/local/share/perl5/Readonly.pm
and here on Mac OS X:
/Library/Perl/5.16/Readonly.pm
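The comment-out can likewise be scripted; the sketch below demonstrates it on a stand-in file (run the same sed against the Readonly.pm path for your platform, with sudo if needed):

```shell
# comment out the eval 'use Readonly::XS' line (stand-in file for demonstration)
printf "    eval 'use Readonly::XS';\n" > /tmp/Readonly_demo.pm
sed -i.bak "s|^\([[:space:]]*eval 'use Readonly::XS'\)|#\1|" /tmp/Readonly_demo.pm
cat /tmp/Readonly_demo.pm
```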
Recent version(s) of IO::Socket::SSL (2.020) seem to fail to respect options to ignore self-signed certs. The workaround is to create the hidden touch file below in the same top-level directory as the library to make it use Net::SSL instead of IO::Socket::SSL.
touch .use_net_ssl
If you end up with an error like:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:765)
It can be caused by an issue with the underlying Python + libraries due to changes in OpenSSL and certificates. One quick fix is to do the following:
sudo pip uninstall -y certifi &&
sudo pip install certifi==2015.04.28
Kafka Scala Nagios Plugin - Scala version of the Python and Perl Kafka plugins found here, build provides a self-contained jar with Kerberos support.
DevOps Bash Tools - 550+ DevOps Bash Scripts, Advanced .bashrc
, .vimrc
, .screenrc
, .tmux.conf
, .gitconfig
, CI configs & Utility Code Library - AWS, GCP, Kubernetes, Docker, Kafka, Hadoop, SQL, BigQuery, Hive, Impala, PostgreSQL, MySQL, LDAP, DockerHub, Jenkins, Spotify API & MP3 tools, Git tricks, GitHub API, GitLab API, BitBucket API, Code & build linting, package management for Linux / Mac / Python / Perl / Ruby / NodeJS / Golang, and lots more random goodies
SQL Scripts - 100+ SQL Scripts - PostgreSQL, MySQL, AWS Athena, Google BigQuery
Templates - dozens of Code & Config templates - AWS, GCP, Docker, Jenkins, Terraform, Vagrant, Puppet, Python, Bash, Go, Perl, Java, Scala, Groovy, Maven, SBT, Gradle, Make, GitHub Actions Workflows, CircleCI, Jenkinsfile, Makefile, Dockerfile, docker-compose.yml, M4 etc.
Kubernetes configs - Kubernetes YAML configs - Best Practices, Tips & Tricks are baked right into the templates for future deployments
DevOps Python Tools - 80+ DevOps CLI tools for AWS, GCP, Hadoop, HBase, Spark, Log Anonymizer, Ambari Blueprints, AWS CloudFormation, Linux, Docker, Spark Data Converters & Validators (Avro / Parquet / JSON / CSV / INI / XML / YAML), Elasticsearch, Solr, Travis CI, Pig, IPython
DevOps Perl Tools - 25+ DevOps CLI tools for Hadoop, HDFS, Hive, Solr/SolrCloud CLI, Log Anonymizer, Nginx stats & HTTP(S) URL watchers for load balanced web farms, Dockerfiles & SQL ReCaser (MySQL, PostgreSQL, AWS Redshift, Snowflake, Apache Drill, Hive, Impala, Cassandra CQL, Microsoft SQL Server, Oracle, Couchbase N1QL, Dockerfiles, Pig Latin, Neo4j, InfluxDB), Ambari FreeIPA Kerberos, Datameer, Linux...
HAProxy Configs - 80+ HAProxy Configs for Hadoop, Big Data, NoSQL, Docker, Elasticsearch, SolrCloud, HBase, Cloudera, Hortonworks, MapR, MySQL, PostgreSQL, Apache Drill, Hive, Presto, Impala, ZooKeeper, OpenTSDB, InfluxDB, Prometheus, Kibana, Graphite, SSH, RabbitMQ, Redis, Riak, Rancher etc.
Dockerfiles - 50+ DockerHub public images for Docker & Kubernetes - Hadoop, Kafka, ZooKeeper, HBase, Cassandra, Solr, SolrCloud, Presto, Apache Drill, Nifi, Spark, Mesos, Consul, Riak, OpenTSDB, Jython, Advanced Nagios Plugins & DevOps Tools repos on Alpine, CentOS, Debian, Fedora, Ubuntu, Superset, H2O, Serf, Alluxio / Tachyon, FakeS3
The DevOps Python Tools & DevOps Perl Tools repos contain over 100 useful programs including:
anonymize.pl
/ anonymize.py
- anonymizes configs / logs for posting online - replaces hostnames/domains/FQDNs, IPs, passwords/keys in Cisco/Juniper configs, custom extensible phrases like your name or your company name
validate_json/yaml/ini/xml/avro/parquet.py
- validates JSON, YAML, INI (Java Properties), XML, Avro, Parquet including directory trees, standard input and even optionally 'single quoted json' and multi-record bulk JSON data formats as found in MongoDB and Hadoop / Big Data systems.
PySpark Avro / CSV / JSON / Parquet data converters
code reCaser for SQL / Pig / Neo4j / Hive HQL / Cassandra / MySQL / PostgreSQL / Impala / MSSQL / Oracle / Dockerfiles
Hive / Pig => Elasticsearch / SolrCloud indexers
Hadoop HDFS performance debugger, native checksum extractor, HDFS file retention & snapshot retention policy scripts, HDFS file stats, XML & running Hadoop cluster config differ
watch_url.pl
- debugs load balanced web farms via multiple queries to a URL - returns HTTP status codes, % success across all requests, timestamps, round trip times, and optionally the output
tools for Ambari, Pig, Hive, Spark + IPython Notebook, Solr CLI
Ambari Blueprints tool & templates
AWS CloudFormation templates
DockerHub API tools including more search results and fetching repo tags (not available in official Docker tooling)
My Perl library - used throughout this code as a submodule to make the programs in this repo short
My Python library - Python version of the above library, also heavily leveraged to keep programs in this repo short
The biggest advantage of Nagios-compatible monitoring systems is the multitude of Nagios Plugins created by domain experts in each field to monitor almost everything out there.
Nagios Plugins are widely and freely available across the internet, especially at the original Nagios Exchange and the newer Icinga Exchange (at which you'll see this project at the top of the most viewed list).
The following enterprise monitoring systems are compatible with Nagios Plugins and this project:
Nagios Core - the original widely used open source monitoring system that set the standard
check_linux_*
/ older/check_*raid*.py
Nagios XI - commercial version of the open source Nagios Core with more features and enterprise support. Most people using Nagios use the free version since most of the benefit is in the plugins themselves.
Icinga2 - popular open source Nagios fork rewritten with more features, Icinga retains the all important Nagios Plugin compatibility, but also has native distributed monitoring, rule based configuration, a REST API and native metrics graphing integrations via Graphite, InfluxDB and OpenTSDB to create graphs from the plugins' perfdata
Shinken - open source modular Nagios reimplementation in Python, Nagios config compatible, with distributed monitoring architecture, high availability, host / service discovery, forwards plugins' metrics perfdata via the Graphite protocol to Graphite or InfluxDB, see documentation
Centreon - open source French Nagios-compatible monitoring solution, can forward plugins' metrics perfdata to Graphite or InfluxDB via the Graphite protocol, see documentation
Naemon - open source Nagios-forked monitoring solution, using Thruk as its GUI and PNP4Nagios for graphing the plugins' metrics perfdata
OpenNMS - open source enterprise-grade network management platform with native graphing, geo-mapping, Grafana integration, as well as Elasticsearch event forwarder integration for Kibana search/visualization. OpenNMS can execute Nagios Plugins via NRPE, see NRPEMonitor documentation
Pandora FMS - open source distributed monitoring solution with flexible dashboarding, graphing, SLA reporting and Nagios Plugin compatibility via Nagios wrapper for agent plugin
Sensu - open-core distributed monitoring system, compatible with both Nagios and Zabbix plugins. Enterprise Edition contains metrics graphing integrations for Graphite, InfluxDB or OpenTSDB to graph the plugins' metrics perfdata
Check_MK - open-core Nagios-based monitoring solution with rule-based configuration, service discovery and agent-based multi-checks integrating MRPE - MK's Remote Plugin Executor. See adapter_check_mk.py
which can run any Nagios Plugin and convert its output to Check_MK local check format. Has built-in metrics graphing via PNP4Nagios, Enterprise Edition can send metrics to Graphite and InfluxDB via the Graphite protocol, see documentation
ZenOSS - open-core monitoring solution that can run Nagios Plugins, see documentation
OpsView Monitor - commercial Nagios-based monitoring distribution with native metrics graphing via Graph Center as well as InfluxDB integration via InfluxDB Opspack
OP5 Monitor - commercial Nagios-based monitoring distribution including metrics graphing via PNP4Nagios, has InfluxDB integration
GroundWork Monitor - commercial Nagios-based monitoring distribution with RRD metrics graphing and InfluxDB integration
Geneos - proprietary non-standard monitoring, was used by a couple of banks I worked for. Geneos does not follow Nagios standards, so integration is provided via adapter_geneos.py, which, if prepended to any standard nagios plugin command, will execute it and translate the results to the CSV format that Geneos expects, so Geneos can utilize any Nagios Plugin using this program
SolarWinds - proprietary monitoring solution but can take Nagios Plugins, see doc
Microsoft SCOM - Microsoft Systems Center Operations Manager, can run Nagios Plugins as arbitrary Unix shell scripts with health / warning / error expression checks, see the Microsoft technet documentation
Zabbix - open source monitoring solution with in-built graphing, distributed monitoring and auto discovery but unfortunately not Nagios Plugin compatible, some integration can be done via a wrapper script (we tried exactly this in 2012 but didn't like it enough to switch from Nagios), see this community forum thread for more information and code
OpsGenie - proprietary non-standard monitoring, cannot execute Nagios Plugins but has integration with existing Nagios, Icinga2 and Prometheus
Many monitoring systems will already auto-graph the performance metric data from these nagios plugins via PNP4Nagios, but you can also forward it to newer, more specialised metrics monitoring and graphing systems such as Graphite, InfluxDB, OpenTSDB and Prometheus (the last being the most awkward as it requires pull rather than passively receiving).
The above list of enterprise monitoring systems documents each one's integration capabilities with links to their documentation.
You can also execute these Nagios Plugins outside of any nagios-compatible monitoring server and forward just the metrics to the major metrics monitoring systems using the following tools:
Telegraf can execute the plugins via its inputs.exec input with data_format="nagios"
and pass the Nagios Plugin perfdata to InfluxDB, Graphite, Prometheus and OpenTSDB - see the InfluxDB data input formats documentation.
You may also be interested in the feature matrix on the Wikipedia page - Comparison of network monitoring systems.
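As a sketch, a minimal Telegraf config for this might look like the following (the command path and plugin choice are placeholders; data_format = "nagios" selects Telegraf's Nagios output parser):

```toml
# minimal Telegraf sketch: run a Nagios plugin and parse its output / perfdata
[[inputs.exec]]
  # placeholder path - point this at any plugin from this repo
  commands = ["/usr/local/nagios-plugins/check_ssl_cert.pl --host www.example.com"]
  timeout = "30s"
  data_format = "nagios"
```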
This is a list of the best and most interesting 3rd party plugins, several of which I have used or come across over the years that deserve mention, usually due to their better quality than the typical Nagios Exchange / Monitoring Exchange plugins.
nagios-plugins-rabbitmq - check_rabbitmq Nagios plugin, written in Perl (requires a Perl environment with cpanm to install its dependencies).
Nagios WAS - a Nagios plugin for monitoring IBM WebSphere Application Servers. Monitored items include JVM heap size, JDBC connection pools, thread pools and live sessions.
A Nagios plugin for monitoring the running state of a Ceph cluster.
FAN ("Fully Automated Nagios") - packages all the Nagios tools provided by the nagios community and ships as an ISO image, making Nagios very easy to install. FAN is based on CentOS and includes Nagios Core (the core monitoring application) and the Nagios plugins.
A plugin for monitoring JBoss application servers on the Nagios platform via a small MBean collector. It is a Perl-based Nagios plugin that lets you easily monitor the JMX values exported by a JBoss server, without needing to install a JDK or JBoss on the Nagios server itself.
Logstash has two Nagios-related output plugins: outputs/nagios sends data to the local nagios.cmd command pipe file (nagios.cmd is a core component of the Nagios server), while outputs/nagios_nsca calls the send_nsca command to send data in NSCA protocol format to a local or remote Nagios server.