Icinga 2 v2.4.1 bugfix release

This release fixes a problem when using recurring downtimes (“ScheduledDowntime”) that caused Icinga 2 to crash on startup. There are further fixes for old compilers on Debian Squeeze, Ubuntu Precise and RHEL 6. The API setup wizard no longer overwrites existing certificates. The node setup wizards also incorporate the NodeName and ZoneName constants by default instead of the previously used FQDN.

The ITL CheckCommand ‘running_kernel’ now allows you to optionally use the ‘running_kernel_use_sudo’ attribute. Further additions are the global constants PlatformName, PlatformVersion, PlatformKernel and PlatformKernelVersion.
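A quick sketch of how this could look in a service definition (the apply rule and host attribute are made up for illustration):

apply Service "kernel" {
  check_command = "running_kernel"

  // run the plugin via sudo
  vars.running_kernel_use_sudo = true

  assign where host.vars.os == "Linux"
}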

A common problem which we’ve analysed in our community support channels is the usage of existing SSL certificates with the Icinga 2 API. In case you are encountering the SSL error “SSL3_READ_BYTES:sslv3 alert unsupported certificate” when querying the API using curl or a modern browser, please ensure that the host’s SSL certificate version is 3, not 1. More details on the mailing lists.
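You can verify the certificate version with openssl, for example (the path below is the default Icinga 2 PKI location and may differ on your system):

openssl x509 -in /etc/icinga2/pki/$(hostname --fqdn).crt -text -noout | grep 'Version'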

Icinga 2 v2.4.1 packages should be available soon, meanwhile make sure to check the Changelog below.

Changes

  • ITL
    • Add running_kernel_use_sudo option for the running_kernel check
  • Configuration
    • Add global constants: `PlatformName`, `PlatformVersion`, `PlatformKernel` and `PlatformKernelVersion`
  • CLI
    • Use NodeName and ZoneName constants for ‘node setup’ and ‘node wizard’

Feature

  • Feature 10622: Add by_ssh_options argument for the check_by_ssh plugin
  • Feature 10693: Add running_kernel_use_sudo option for the running_kernel check
  • Feature 10716: Use NodeName and ZoneName constants for ‘node setup’ and ‘node wizard’

Bugfixes

  • Bug 10528: Documentation example in “Access Object Attributes at Runtime” doesn’t work correctly
  • Bug 10615: Build fails on SLES 11 SP3 with GCC 4.8
  • Bug 10632: “node wizard” does not ask user to verify SSL certificate
  • Bug 10641: API setup command incorrectly overwrites existing certificates
  • Bug 10643: Icinga 2 crashes when ScheduledDowntime objects are used
  • Bug 10645: Documentation for schedule-downtime is missing required parameters
  • Bug 10648: lib/base/process.cpp SIGSEGV on Debian squeeze / RHEL 6
  • Bug 10661: Incorrect web inject URL in documentation
  • Bug 10663: Incorrect redirect for stderr in /usr/lib/icinga2/prepare-dirs
  • Bug 10667: Indentation in command-plugins.conf
  • Bug 10677: node wizard checks for /var/lib/icinga2/ca directory but not the files
  • Bug 10690: CLI command ‘repository add’ doesn’t work
  • Bug 10692: Fix typos in the documentation
  • Bug 10708: Windows setup wizard crashes when InstallDir registry key is not set
  • Bug 10710: Incorrect path for icinga2 binary in development documentation
  • Bug 10720: Remove --master_zone from --help because it is currently not implemented

Icinga 2 v2.4.0 & Icinga Web 2 v2.1.0 released

We’ve come a long way… after months of hard development we’re proud to release Icinga 2 v2.4.0 and Icinga Web 2 v2.1.0.


Icinga 2 v2.4.0

Icinga 2 v2.4 feels even bigger than our first v2.0 release 1.5 years ago. “We want an API” you said – and we sat down in April and started designing one. Kicking off the development in June, it’s been 5 months and 3 developers working full-time on Icinga 2 v2.4.

We’ve put a lot of effort into designing and refining a unified REST API allowing you to create configuration objects at runtime without a restart (e.g. a host auto-discovered from config management, cloud, etc) and also delete them. You can also modify existing configuration objects at runtime. All object updates are synchronised in cluster zones. In case you’re using a configuration tool for deployment you can manage configuration packages and stages.
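A sketch of what creating a host at runtime could look like (host name, address and credentials are just examples):

curl -k -s -u root:icinga -H 'Accept: application/json' -X PUT 'https://localhost:5665/v1/objects/hosts/example-host' \
  -d '{ "attrs": { "check_command": "hostalive", "address": "192.0.2.10" } }'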

While there are existing interfaces to query the current object states, we’ve now added the capability to expose all object attributes to the user, supported by (complex) filters and joins to limit the output. List all services for hosts in a hostgroup, but only if they are in a critical state – a breeze to interact with.
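That particular query could look roughly like this (the host group name and credentials are made up; service.state == 2 means critical):

curl -k -s -u root:icinga -H 'Accept: application/json' -H 'X-HTTP-Method-Override: GET' -X POST \
  'https://localhost:5665/v1/objects/services' \
  -d '{ "joins": [ "host.name" ], "filter": "\"linux-servers\" in host.groups && service.state == 2" }'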

You want to schedule maintenance downtimes or acknowledge problems at once? The former external commands have been revamped into Actions, providing clear-cut interfaces and feedback on errors.
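For instance, scheduling a downtime could look like this (timestamps, host name and credentials are placeholders):

curl -k -s -u root:icinga -H 'Accept: application/json' -X POST 'https://localhost:5665/v1/actions/schedule-downtime' \
  -d '{ "type": "Host", "filter": "host.name == \"example-host\"", "author": "icingaadmin", "comment": "Planned maintenance", "start_time": 1446900000, "end_time": 1446907200, "duration": 7200 }'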

Event streams allow you to subscribe to specific core events, be it check results, notifications or acknowledgements. Forward these events to your umbrella monitoring applications and use these metrics for your integration with other tools.

You can also fetch the runtime state of the Icinga 2 daemon and its features, gaining insight into what’s going on. There’s also support for executing expressions and fetching the type hierarchy of config objects if you are planning to implement your own API client.
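A simple way to peek at that runtime state (assuming the default port and the credentials used elsewhere in this post):

curl -k -s -u root:icinga -H 'Accept: application/json' 'https://localhost:5665/v1/status' | python -m json.tool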

You can use the icinga2 console to connect to the API and fetch, for example, a check result and its executed command line. The Icinga Studio application provides a GUI to fetch all objects from the API. Or use Dashing on your monitoring dashboards. If you want to start programming your own API clients, we’ve made sure to add programmatic examples for your convenience.
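A quick sketch of connecting the console to the API (URL, credentials and the host name are examples):

icinga2 console --connect 'https://root:icinga@localhost:5665/'
<1> => get_object(Host, "example-host").last_check_result.command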

The Icinga 2 API uses HTTPS with basic auth or client SSL certificates and fully supports IPv4 and IPv6, similar to the Icinga 2 cluster. You can even set permissions for API users on specific URL endpoints with optional object filters. That way, for example, scripts may only be allowed to schedule a downtime, or users may be limited to hosts in a specific host group.
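A rough sketch of such a restricted API user (the user name, password and host group are illustrative):

object ApiUser "downtime-script" {
  password = "secret"
  permissions = [
    {
      permission = "actions/schedule-downtime"
      filter = {{ "linux-servers" in host.groups }}
    }
  ]
}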

Icinga 2 v2.4 also introduces a new Graphite schema revamped from community feedback. Tackling configuration errors in dynamic apply rules is now easier with the Icinga 2 script debugger.

Grab a coffee, or two, and get into the details in the API documentation. If you can’t wait to put Icinga 2 v2.4 in production – packages for all distributions should be available soon. Meanwhile you can test-drive the Icinga 2 API using Docker and Vagrant.


Icinga Web 2 v2.1.0

The 10th Open Source Monitoring Conference (OSMC) starts this week and team Icinga is attending for the 7th time. After our exciting release of Icinga Web 2 v2.0.0 right before Icinga Camp Portland, the developers have been working hard to resolve bugs and also refine the UI styling once more.
This includes an enhanced service and host detail area and a refined table control element. A clear CSS structure makes the implementation of individual themes and styles even simpler.

Icinga Web 2 v2.1.0 is ready for download – and we’ll sure have it as live demo at our OSMC Icinga booth.



Icinga 2 v2.4.0 Changelog

Icinga Web 2 v2.1.0 Changelog

Icinga 2 v2.4: New Graphite Schema

The Graphite feature in Icinga 2 is pretty cool and simple – just enable the feature and point it to your Graphite Carbon Cache listening address. Graphite 0.9.14 was released just a few days ago :)

We’ve received lots of nice feedback on this, but also several feature requests which involve breaking changes – be it the default escaping of metrics (the dash becoming an underscore), turning off additional metadata, or a different tree schema.

While working on a Graphite module for Icinga Web 2 we’ve evaluated all these changes and came to the conclusion that we need to change the Graphite tree and layout.



Prefix for hosts:
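Going by the default host_name_template in v2.4 (please check the GraphiteWriter documentation for the authoritative value):

icinga2.$host.name$.host.$host.check_command$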


Prefix for services:
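Going by the default service_name_template in v2.4 (again, the documentation is authoritative):

icinga2.$host.name$.services.$service.name$.$service.check_command$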


Metrics are written as follows underneath the perfdata level.
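A sketch of the resulting metric path, where <prefix> is the host or service prefix above and <label> is the performance data label returned by the plugin:

<prefix>.perfdata.<label>.value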


With enable_send_thresholds = true (default is false) Icinga 2 will add the following threshold values.
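Roughly, one additional metric per threshold next to the value, for example:

<prefix>.perfdata.<label>.min
<prefix>.perfdata.<label>.max
<prefix>.perfdata.<label>.warn
<prefix>.perfdata.<label>.crit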


With enable_send_metadata = true (default is false) Icinga 2 will add the following metadata values.
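A sketch of the metadata metrics (the exact list is documented with the GraphiteWriter feature), for example:

<prefix>.metadata.current_attempt
<prefix>.metadata.execution_time
<prefix>.metadata.latency
<prefix>.metadata.state
<prefix>.metadata.state_type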


Sending thresholds and metadata must be explicitly enabled, as this additional data has caused problems in large-scale environments in the past.



Time-series databases generally store metrics but no additional metadata, which tools such as PNP4Nagios use to select specific templates for graph representation. PNP uses additional XML files generated on each update. We’ve come up with a solution for that by changing the host and service prefix and adding the CheckCommand, allowing user interfaces to select the proper template.

Furthermore, all services are located underneath “services” on the host to allow easier selection by applications pulling the data from the Graphite Web API.


Metric escaping

The following characters are escaped with an underscore in prefix labels: whitespace, dots, /, \. Performance data labels won’t escape the dot, allowing a more selective representation of multiple metric levels returned by the plugin. If your host or service name contains dots, they will still be escaped in order to prevent additional tree levels and ensure that external applications can properly select the objects.



There is no direct migration path, although you can still use the old schema. To prevent unwanted data corruption the new schema is located underneath “icinga2” while the old schema uses “icinga”. In order to restore the old legacy schema, you’ll need to adapt the GraphiteWriter configuration:

object GraphiteWriter "graphite" {
  enable_legacy_mode = true

  host_name_template = "icinga.$host.name$"
  service_name_template = "icinga.$host.name$.$service.name$"
}

Note: The legacy mode will be removed in future feature releases.


Test-drive Icinga 2 and Graphite

The easiest way is to use two Docker containers: one for Graphite and one for icinga/icinga2. I’ve found docker-graphite-statsd, which works like a charm for Icinga 2 development tests as well.

docker run -d --name graphite --restart=always -p 9090:80 -p 2003:2003 hopsoft/graphite-statsd

The most recent icinga2 Docker container build provides additional options to enable and configure the Graphite feature by passing these environment variables:
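  • ICINGA2_FEATURE_GRAPHITE
  • ICINGA2_FEATURE_GRAPHITE_HOST
  • ICINGA2_FEATURE_GRAPHITE_PORT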


The two containers must be linked so that Icinga 2 can write to port 2003 of the Graphite container. graphite is the name of the previously started Graphite container.
The following example uses the Docker IP address assigned on OS X. Adjust this to the address the Graphite container is actually listening on.

docker run -d -ti --name icinga2 -p 3080:80 --link graphite:graphite -e ICINGA2_FEATURE_GRAPHITE=1 -e ICINGA2_FEATURE_GRAPHITE_HOST="" -e ICINGA2_FEATURE_GRAPHITE_PORT=2003 icinga/icinga2

Navigate to the Graphite web interface (adjust the URL for your container) and select “icinga2” from the tree navigation. You can watch the screencast over here :)


Test-drive the Icinga 2 API

The Icinga 2 API release is near – and so is OSMC, where we will have demo setups with us. Users keep asking how they can already play with and test-drive the Icinga 2 REST API – the answer is fairly simple:

  • Read the snapshot docs (we update them frequently, so make sure to check for changes)
  • Use Vagrant or Docker, or fetch the snapshot packages directly

The benefit of using the Docker container or the Vagrant boxes: everything comes pre-installed and pre-configured, ready to play with.


Docker

docker run -d -ti --name icinga2-api -p 4080:80 -p 4665:5665 icinga/icinga2

The container initialisation takes ~1 minute.
Example for Docker on OS X (adjust the IP address for your environment):

curl -k -s -u root:icinga '' | python -m json.tool

The container sources are located here, if you prefer to build it locally.


Vagrant

Both boxes, icinga2x and icinga2x-cluster, come with the Icinga 2 API pre-configured.

git clone https://github.com/Icinga/icinga-vagrant.git
cd icinga-vagrant/icinga2x

vagrant up
curl -k -s -u root:icinga '' | python -m json.tool


Dashing

In case you have everything up and running (packages, Vagrant or Docker), clone the Dashing demo and edit the icinga2 job API credentials here.

git clone https://github.com/Icinga/dashing-icinga2
cd dashing-icinga2

The following example uses the Docker IP address on OS X and the port mapped to 4665.

vim jobs/icinga2.rb

$api_url_base = ""
$api_username = "dashing"
$api_password = "icinga2ondashingr0xx"


Check the screencast to see Docker, Icinga 2 API and Dashing in action on my Macbook Pro :)


Icinga 2 API: Event Streams

One thing for the Icinga 2 v2.4 REST API that was not yet implemented when we were at Icinga Camp Portland: Event Streams.

So what’s the deal with it anyway? Imagine that you want to connect to the API and receive check results, downtimes, comments, acknowledgements, etc. for your own backend. That means you could forward these events to your own backend (ElasticSearch, Redis, MySQL, Graphite, InfluxDB, Logstash, Graylog, etc.) or tool stack (StackStorm, PagerDuty, etc.), or correlate multiple events into one action (auto-acknowledge a problem based on multiple check results, for example).

There are many possible use cases – if you are familiar with Icinga 1.x and the “ocsp” commands used to forward check result events to your umbrella monitoring (SCOM, Tivoli, etc), the API event streams are an even better replacement. They will also help with troubleshooting running checks – fetch a live stream of check result events and analyse your problem :)

Send a POST request to /v1/events and pass the following as URL parameters:

  • queue name
  • types (one or more event types, e.g. CheckResult, Notification, etc)
  • filter (match on event attributes)
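A minimal sketch of such a request, assuming the default port on localhost and the root:icinga credentials used above (queue name and filter are examples):

curl -k -s -u root:icinga -H 'Accept: application/json' -X POST \
  'https://localhost:5665/v1/events?queue=myqueue&types=CheckResult&filter=event.check_result.exit_status!=0'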

More details can be found in the snapshot documentation. Ready to test-drive the API? The Vagrant boxes are running the latest snapshots, allowing you to easily connect to the REST API – check the screencast :-)

$ git clone https://github.com/Icinga/icinga-vagrant.git && cd icinga-vagrant
$ cd icinga2x
$ vagrant up
$ curl -k -s -u root:icinga -H 'Accept: application/json' -X POST ''

$ curl -k -s -u root:icinga -H 'Accept: application/json' -X POST '!=ServiceOK'


Update 2015-11-06: Added required Accept header to curl examples.