
Procedure to Install from SuSE Build Service

James F. Carter <>, 2012-09-13

We want to monitor the health of our servers in the machine room, and particularly, their temperatures. We want some automated assistance to make this happen.

A web search about temperature monitoring reveals three keystone packages:

The request was to install on all Linux (SuSE) servers the monitoring packages Cacti and/or Munin, and to configure the servers appropriately (i.e. load device drivers) to make available the various sensor information.

Jimc can do this easily and promptly. But suppose jimc is not around? The purpose of this web page is to document the procedure for a moderately complex installation like this one.

Obtaining the Packages

The first step is to search for the packages on the SuSE Build Service. This site is a repo (RPM software repository) for standard packages that would not fit in the main distro, for developers' projects to distribute specialized packages, and for supporting non-SuSE RPM-based distros such as Fedora and CentOS. If the Build Service doesn't have it, try Packman, and if they don't have it, Google is your friend :-) Non-SuSE packages should be stored in the Mathnet repo, not SuSE-build.

On the SuSE-build start page, click the wrench icon, pick your distro by version, and mark the checkbox for development and language packages if those are going to be relevant. Then type in an appropriate keyword. The search box uses Ajax and is not too bright; if you press Return you will confuse the Javascript. Let's look at ipmitool because it is a very simple example.

The searcher will go directly to the package if the name is unique, or will give you a choice list. On the package page, click Show Other Versions (don't try to Direct Install on your desktop workstation as an ordinary user). Find your distro version. Ipmitool is in the official release, but often you will have to click Show Unstable Packages and guess which version seems most trustworthy.

Now on the distro host (currently hosted on Sunset), change directory to /h1/www/htdocs/Distro-repo (currently a symlink to /s1/SuSE). In that directory you will find a file which sets some shell variables with directory and command names; source it:


You can view the file to see what variables are available. You need to update this file if the repo is moved elsewhere.
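As a hedged sketch of what such a variables file might look like (every name, path, and value below is an assumption for illustration, not taken from the real file), sourcing it just populates shell variables for the later commands:

```shell
# Hypothetical sketch of the variables file and of sourcing it.
# All names and paths here are assumptions, not from the real file.
cat > settings.sh <<'EOF'
bs=/s1/SuSE/SuSE-build        # Build Service repo directory (assumed)
mn=/s1/SuSE/Mathnet           # Mathnet repo directory (assumed)
sbs='snarf-build-service'     # download script (assumed name)
mkr='make-repo'               # metadata rebuild script (assumed name)
EOF
. ./settings.sh
echo "$bs"
```

Because the file is sourced (not executed), the variables persist in your interactive shell for the $sbs and $mkr steps below.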

Now use the snarf build service script to download the packages. With the -p option it will download to the Mathnet repo instead of SuSE-build. You may download multiple packages in one execution. Mathnet policy is to download both the i586 and x86_64 architecture versions of all packages (excluding noarch). On the distro host type the command $sbs followed by the URLs (space separated). In your web browser, right-click the 32bit, 64bit, or noarch link for the package, select Copy Link Address, and paste it into your xterm on the distro host (followed by a space). Here's an example for ipmitool:


The script will download the packages and file them in the correct directories in the local repo. At the end of each download it prints Saved in $pkgfile, using $pkgfile to represent the filename.

Finding Dependencies

The next step is to deal with dependencies. I hope to automate this someday, but so far that's vaporware. Use this command line:

rpm -q --requires -p $pkgfile |& less

It will show a list of requirements, from which you can infer the package names. For libraries you can do something like this, e.g. for ipmitool's requirement for

(result:) /lib/ (goody, it's already installed)
rpm -qf /lib/
(result:) libopenssl1_0_0-1.0.0c-18.42.1.i586 (from this package)

In the requirement list you can ignore all the rpmlib items, and you will quickly learn which libraries are standard (in this case, libc, libm and libreadline). Infer the dependent packages, check whether they're already installed or already in one of the local repos (SuSE-distro, SuSE-build, Mathnet), and for those that aren't, find and download them, and analyse their dependencies recursively. For some nightmare packages you'll end up downloading 30 dependencies or more, but ipmitool is easy, with all dependencies already available.
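Until that automation exists, a small filter helps when scanning a long requirement list. This sketch drops the ignorable rpmlib items; the sample input is hypothetical, and in real use you would pipe rpm -q --requires -p $pkgfile into the grep instead of printf:

```shell
# Drop rpmlib(...) pseudo-requirements so that only real libraries and
# packages remain to be checked. The sample input lines are hypothetical.
printf '%s\n' \
  'rpmlib(PayloadFilesHavePrefix) <= 4.0-1' \
  'rpmlib(CompressedFileNames) <= 3.0.4-1' \
  'libc.so.6' \
  'libm.so.6' \
  'libreadline.so.6' \
  | grep -v '^rpmlib(' > real-requires.txt
cat real-requires.txt
```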

Rebuilding the Repository and Testing

Now rebuild the repo metadata like this. The variables file you sourced earlier provides variables for the repo directories and for the make repo script:

DISPLAY='' $mkr $bs

At the end it has to sign the metadata backbone file with GPG, and it will want the root password for that. I prefer the text-based query, so I suppress X-Windows as shown. If you prefer the X-Windows pop-up box, don't say DISPLAY=''.

Now it would be a good idea to test the installation of your keystone package(s). It's most efficient, i.e. gives you the most useful error messages, if you install all the keystone packages at once. Pick a machine, and do:

zypper refresh #(downloads the new metadata)
zypper install ipmitool

If you got all the dependencies, the package(s) will be installed. If not, it will announce missing dependencies, but stupidly it will show only one per command-line package. If it says Obscure-Package cannot be provided, without giving a reason, but you know you downloaded it, try adding that package explicitly on the command line, and it will tell you which recursive dependency is missing. Go back and obtain the missing dependencies, recursively. The operative word fragment here is curse.

Mathnet Package Management

While it's OK to run Zypper by hand on one machine for testing, the right way is to add your new keystone packages to Mathnet's package management file, /m1/custom/mathnet.sel. Then the installation process can be automated, and the package will not be deleted in cleanup, and will be preserved in upgrades.
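A hedged sketch of that bookkeeping step, assuming mathnet.sel is a plain list of one package name per line (the real file's format may differ); the grep guard keeps the addition idempotent so re-running it never duplicates entries:

```shell
# Append keystone packages to the selection file only if not already
# listed. 'mathnet.sel' here stands in for /m1/custom/mathnet.sel, and
# the one-name-per-line format is an assumption.
SEL=mathnet.sel
touch "$SEL"
for pkg in ipmitool munin; do
  grep -qx "$pkg" "$SEL" || echo "$pkg" >> "$SEL"
done
cat "$SEL"
```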

Content Issues: Choosing Between Cacti and Munin

Web resources:

What Can Be Monitored?

Windows XP (and above) can be monitored via SNMP. See the FAQ on Munin, referring to How to Monitor Windows (but Cacti also can poll via SNMP, and the Linux SNMP daemon can provide useful information also). There are monitoring plugins for various parameters in the Linux /proc filesystem. IPMI data can also be retrieved with SNMP or with native plugins, which is our main use case.

About RRDtool

Both packages use RRDtool to create the graphs. They periodically poll the hosts being monitored from a master site, store the results, and create graphs or textual displays on demand. RRDtool has its own database engine specialized for circular data, i.e. it is configured to hold samples for a prespecified interval such as one week, and newly added samples overwrite the oldest ones. Multiple archives (RRAs) with different intervals are allowed. RRDtool also has a graph creation engine, which the front-end scripts can call.
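As a concrete illustration of the circular storage and multiple-RRA ideas, here is a sketch of creating such a database; the filename, data-source name, and archive sizes are assumptions for illustration, though the DS/RRA syntax is standard rrdtool:

```shell
# Sketch only: one GAUGE data source (e.g. a temperature) sampled every
# 300 s, with two round-robin archives: raw 5-minute samples kept for one
# week (2016 rows), and hourly averages (12 x 5 min) kept for about two
# months (1488 rows). Once an archive is full, new samples overwrite the
# oldest ones.
rrdtool create temps.rrd --step 300 \
  DS:temp:GAUGE:600:U:U \
  RRA:AVERAGE:0.5:1:2016 \
  RRA:AVERAGE:0.5:12:1488
```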

When storing data, RRDtool can be executed separately for each host data point, or can run as a daemon with journalling. Data extraction is handled per event, e.g. per graph. Communication with the daemon is normally via a UNIX-domain socket, but a TCP socket can be used (with no authentication and with optional host-based access restriction).
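A sketch of the daemon mode (the socket, journal, and database paths are hypothetical): start rrdcached listening on a UNIX-domain socket with a journal directory, then point update commands at it with --daemon instead of writing the .rrd file directly:

```shell
# Sketch only; all paths are assumptions. -l sets the listen address,
# -j the journal directory used to replay updates after a crash.
rrdcached -l unix:/var/run/rrdcached.sock -j /var/lib/rrdcached/journal

# Clients then send updates through the daemon rather than opening the
# .rrd file themselves (N: means "now" as the timestamp).
rrdtool update --daemon unix:/var/run/rrdcached.sock temps.rrd N:41.5
```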

About Cacti

Required packages for Cacti:

About Munin

Details on Munin: