Just in the short time I have been researching actual usage of Thruk, I have found GitHub issues and forum posts asking how to set up multi-tenant installations and share Thruk with clients without exposing other clients' data.
There are a few ways to go about this. Fortunately, the crew over at Consol Labs has prepared the Open Monitoring Distribution (OMD). It isn't really a distribution, but an RPM (or other) package you can install on a CentOS machine to easily get a Nagios/Naemon/Icinga 2 monitoring server set up.
In this lab, we will pretend to be an MSP that needs to monitor client infrastructure with Naemon and expose the data to its clients. We will use a CentOS 7 machine with OMD installed to set up three Naemon sites with Thruk. The first Naemon/Thruk site will be the "master" MSP console that can see all monitored hosts across all clients. The other two will be for Client A and Client B. We won't dive into adding hosts and services to monitor, but will focus on getting the platform up and running. Client A and Client B will have their Livestatus exposed to the master site, which will add those backends to Thruk. Sounds easy enough, right? Let's find out!
Installing OMD and Creating Our First Site
From a fresh installation of CentOS 7, we will need to install a few packages to get OMD installed.
```shell
# OMD requires some dependencies found in EPEL
yum -y install epel-release

# Install the OMD repo
rpm -Uvh "https://labs.consol.de/repo/stable/rhel7/i386/labs-consol-stable.rhel7.noarch.rpm"

# Install OMD
yum -y install omd

# Thruk will not work with SELinux, so we will go ahead and disable it
setenforce 0
sed -i.bak 's/enforcing/disabled/' /etc/selinux/config
```
It's as easy as that!
One of the things that I feel OMD is missing is a quickstart guide. However, they do have a command reference which I refer to when playing around with OMD. If you have already looked at the reference, you may notice that it has a "configuration" style syntax: `omd [verb] [noun]`.
OMD uses a site concept -- you create independently managed sites that do not know about one another and can be manipulated without affecting each other. With that being said, let's create our first site:
```shell
omd create xyzmsp
```
This has done a few things for us. If you cat out the contents of your passwd file, you will notice that there is a new user associated with the site. It also adds a "directory" in Apache. After you start the site, you can navigate to `https://yoursite/xyzmsp` to see your new site:

```shell
omd start xyzmsp
```
Getting into your site with the admin user is easy:
```
user:     omdadmin
password: omd
```
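Those default credentials are the same for every fresh OMD site, so it's worth changing them right away. A minimal sketch, assuming the site's htpasswd file lives in the standard OMD location:

```shell
# Change the default omdadmin password for the xyzmsp site
# (path assumes the standard OMD filesystem layout)
htpasswd /opt/omd/sites/xyzmsp/etc/htpasswd omdadmin
```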
Creating Client Sites and a Simple Monitoring Check
Now we need to do this for each of our clients:
```shell
omd create clienta
omd create clientb
omd start clienta
omd start clientb
```
For the sake of brevity, we will create a single check on each of our clients (and on the master console). We will monitor localhost, but call it something different on each site.
`Config Tool` > `Object Settings` > `Create a new host`

Using the below as an example, create a simple check for each site.

```
save to:        /hosts.cfg
host_name:      ClientAHost
alias:          ClientAHost
address:        127.0.0.1
use:            check_mk_host
contact_groups: check_mk
```
When done, hit `apply` at the bottom and then `save & reload` at the top. Afterwards, click on the `hosts` tab and verify that the host is in monitoring. Now we just need a way for the xyzmsp site to listen to the other sites.
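If you prefer to verify from the shell instead of the UI, each running site also exposes a Livestatus UNIX socket that you can query with `unixcat` (shipped with OMD). This is just a sketch: the socket path assumes the default OMD layout, and `ClientAHost` is the host from the example above:

```shell
# Query the clienta site's Livestatus socket for all hosts and their states
# (0 = up, 1 = down, 2 = unreachable)
printf 'GET hosts\nColumns: name state\n\n' | \
  unixcat /opt/omd/sites/clienta/tmp/run/live
```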
We will need to enable Livestatus on each of our sites in order to see what is going on. We can use the super-easy omd config tool:
```shell
omd config xyzmsp
omd config clienta
omd config clientb
```
For each, go to `LIVESTATUS_TCP` and enable it (accept the warning and stop the site if prompted), then specify the `LIVESTATUS_TCP_PORT`. The port matters because you cannot have all three sites listening on the same one. I left my xyzmsp on the default and then incremented the port by one for each client.
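If you'd rather script this than walk through the interactive menu, `omd config` also accepts `set` on the command line. A sketch, assuming the OMD default port (6557) stays on the master and the clients take 6558 and 6559:

```shell
# Enable Livestatus over TCP on each client site, one port per site
# (a site must be stopped while these settings are changed)
omd stop clienta
omd config clienta set LIVESTATUS_TCP on
omd config clienta set LIVESTATUS_TCP_PORT 6558

omd stop clientb
omd config clientb set LIVESTATUS_TCP on
omd config clientb set LIVESTATUS_TCP_PORT 6559
```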
Afterwards, start the three sites and then log into the "master" site:
```shell
omd start xyzmsp
omd start clienta
omd start clientb
```
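Before wiring the backends into Thruk, you can confirm each Livestatus TCP port is actually answering with a quick query over netcat. A sketch, assuming clienta listens on 6558 as described above:

```shell
# Ask the clienta Livestatus port which monitoring core version is answering
printf 'GET status\nColumns: program_version\n\n' | nc 127.0.0.1 6558
```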
Inside of the xyzmsp site, go to `Config Tool` > `Backend / Sites` > `add new connection`. In the `section` fields, use the client name. Leave the `type` as livestatus, and for the connection use `127.0.0.1:6558` -- or whichever ports you decided to use -- and then save the changes.
If everything is done correctly, you should be able to see all of the hosts under the "master" site.
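Under the hood, the Backend / Sites page writes these peer definitions into the site's Thruk configuration. If you'd rather manage them in a file, the equivalent stanza in `/opt/omd/sites/xyzmsp/etc/thruk/thruk_local.conf` looks roughly like this (the names and ports here are from our example, so adjust to taste):

```
<Component Thruk::Backend>
  <peer>
    name = clienta
    type = livestatus
    <options>
      peer = 127.0.0.1:6558
    </options>
  </peer>
  <peer>
    name = clientb
    type = livestatus
    <options>
      peer = 127.0.0.1:6559
    </options>
  </peer>
</Component>
```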
Let's not stop here -- after all, we don't want the clients to have admin privileges. They only need read-only access to view host status and maybe some PNP4Nagios graphs.
Setting up Read-Only Access for the Clients
Whether or not you are familiar with a base Thruk installation, I will walk you through how OMD sets it up. Per the OMD documentation, each site has its own filesystem layout, found under `/opt/omd/sites/sitename`. In a default Thruk installation, you can specify Apache users who have certain levels of access to Thruk in the `/etc/thruk/cgi.cfg` file. In case you didn't look at the previous link, OMD has created an `etc` directory (among others) for each of our sites, and just like in a default Thruk installation, we can find the config file in `/opt/omd/sites/sitename/etc/thruk/`. Within `cgi.cfg`, we can grant a read-only account access to Thruk by adding the directive:

```
authorized_for_read_only = clientacontact
```
Afterwards, we can use `htpasswd` to create an Apache user matching the name we used in the above directive:

```shell
htpasswd /opt/omd/sites/sitename/etc/htpasswd clientacontact
```
Now, if this seems a bit archaic and tedious for, say, a low-level implementation technician to do, never fear!
Within the Thruk UI for the site (continuing with the Client A example), we can create a new user AND grant them access. Nifty, huh?
Go to `Config Tool` > `User settings`, type a new username in the `username` field, specify a password, leave all authorization settings set to `no`, and save. Next, we can change the `Configuration Type` in the drop-down at the top-right of the page to `CGI & Access`. Under the `authorized_for_read_only` field, select the new user, click the `>>` button to move them over, and save the changes.
There you have it! An easily scalable monitoring portal. Well, I say "scalable" -- please note that we have only scratched the surface. There's a lot more that goes into a Nagios/Naemon/Icinga 2 monitoring system: various scripts, NRPE configurations, and so on. However, this should get you going in a pinch if you do not have to monitor an absurd number of hosts. If you do, you will want to look at scaling out. My initial thought is to use a Docker/Kubernetes setup, spin up monitoring servers on demand, and have the backends automatically added to the "master" installation. But that's a topic for another day!