Wednesday, December 19, 2007

MDC presentations available

Anas asked me to make my Management Developers Conference presentations available, so here they are.

Web Service Management On Rails

The first one, WS-Management On Rails, covers the beauty of accessing WS-Management and WS-CIM functionality through Ruby. The code follows the DMTF Technologies Diagram and consists of
  • rcim for the CIM Infrastructure layer
    This implements the CIM metamodel of classes, properties and qualifiers.
  • mofgen to generate WS-CIM bindings
    Mofgen is an extension to the Cimple MOF parser. It generates Openwsman client bindings for CIM classes from the class description contained within a MOF file.
  • rwscim for the CIM Schema class hierarchy
    This puts a wrapper around the bindings generated by mofgen, makes them available as a single Ruby module and ensures the correct class hierarchy.
And here is a git repository containing a Rails application showing all this in action.
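To give a feel for the rcim layer, here is a toy sketch of the CIM metamodel idea it implements: classes carrying typed properties and qualifiers, with properties inherited along the class hierarchy. All class and method names below are my own illustration, not the actual rcim API.

```ruby
# Toy CIM-style metamodel: classes have named, typed properties, and
# both classes and properties can carry qualifiers (key/value metadata).
# Names are illustrative only -- this is not the rcim API.
class CimProperty
  attr_reader :name, :type, :qualifiers
  def initialize(name, type, qualifiers = {})
    @name, @type, @qualifiers = name, type, qualifiers
  end
end

class CimClass
  attr_reader :name, :parent, :qualifiers
  def initialize(name, parent = nil, qualifiers = {})
    @name, @parent, @qualifiers = name, parent, qualifiers
    @own_properties = {}
  end

  def add_property(prop)
    @own_properties[prop.name] = prop
  end

  # As in CIM, properties are inherited along the superclass chain.
  def properties
    inherited = parent ? parent.properties : {}
    inherited.merge(@own_properties)
  end
end

# A tiny two-level hierarchy, as a MOF file would describe it.
element = CimClass.new('CIM_ManagedElement')
element.add_property(CimProperty.new('Caption', :string))
computer = CimClass.new('CIM_ComputerSystem', element)
computer.add_property(CimProperty.new('Name', :string, 'Key' => true))

puts computer.properties.keys.sort.inspect  # Caption is inherited
```

The real rwscim layer wraps generated bindings rather than hand-built classes, but the inheritance-merging idea is the same.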

Web Service Management Application Enablement

Web Service Management Application Enablement is about using WS-Management as a transport layer for remote application access. Instead of implementing a separate daemon, protocol and data model, riding the WS-Management horse gives you all of this almost for free. And it's more secure. The dynamic plugin model provided on the Openwsman server side makes this particularly easy. The presentation shows how to plan and implement such a plugin and gives two examples: openwsman-yast for a simple, RPC-type approach, and openwsman-hal, which follows the WS-Management resource model.

Tuesday, December 18, 2007

Report from Management Developers Conference

About Management Developers Conference

Management Developers Conference (ManDevCon, MDC) is the annual conference of the Distributed Management Task Force (DMTF).
The DMTF is the leading industry organization for interoperable management standards and initiatives. It is best known for the Common Information Model (CIM) and the Web Services for Management (WS-Management) standards.
The full conference schedule can be viewed here.

I already had the opportunity to attend this conference last year. This year, I was accepted as a speaker with two presentations about WS-Management.

Conference overview

The conference has three blocks, one for learning ('university day'), one for demo and interop ('interop lab') and one for presentations.

It was interesting to see how the conference topics changed year over year. Last year, protocols and APIs were still under discussion. In 2006, the WS-Management and WSDM (OASIS Web Services Distributed Management) protocols were still competing. This year, working implementations of various standards dominated.
From a protocol perspective, WS-Management is the clear winner, with virtually every systems vendor showing implementations. Microsoft's adoption of WS-Management for all remote management on Windows (WS-Management comes built into Vista and is available as an add-on for Server 2003 and XP) was probably the driving force here. Openwsman, an open source implementation of WS-Management provided by Intel, has also been picked up by lots of embedded vendors.

The interop lab revolved around implementations for CDM, DASH and SMASH.

CDM, the Common Diagnostic Model, is a CIM extension for diagnostic instrumentation. Its primary use is vendor-agnostic remote health evaluation for hardware. Hewlett-Packard uses this extensively for their systems and requires each of their component suppliers to make test routines available through CDM.
DASH (Desktop and mobile Architecture for System Hardware) and SMASH (Systems Management Architecture for Server Hardware) target management and monitoring of hardware components based on the WS-Management protocol.

Attended presentations

  • Opentestman

  • Opentestman is a validation test suite for WS-Management, WS-CIM and DASH 1.0. It's a (wild) mixture of bash scripts and Java-based utility tools. Tests are described in XML-based 'profile definition documents' (PDD), making the tests data-driven. It currently covers all mandatory and recommended features of the WS-Management and WS-CIM standards. More than 160 test cases exist for all 14 DASH 1.0 profiles. [DASH 1.1 was released early December]
    [Hallway discussions showed that the current implementation of Opentestman is in urgent need of refactoring. So don't look too closely at the code or it might hurt your eyes.]

  • ITSM and CIM

  • ITSM, Information Technology Service Management, can be described as managing the systems management. The presentation gave an overview on existing technologies and asked for participation to model this topic in CIM. Currently, several (policy/modeling) standards exist for this topic, e.g. Cobit (Control Objectives for Information and Related Technology; mostly US, covering business and process mgmt), ITIL (Information Technology Infrastructure Library; mostly Europe, covering service and process mgmt) and CIM (resource mgmt). IT process management has seen a big push recently. Lots of tools and companies appeared in the last couple of years offering services.
    With SML, a service modeling language exists. Other areas like availability management, performance/capacity management or event/incident/problem management do not have any established standard.

  • Using the CIM Statistical Model to Monitor Data

  • Brad Nicholes from Novell showed recent work to integrate existing open source solutions (using non-standard models and protocols) with CIM.
    Ganglia, a "scalable distributed monitoring system for high-performance computing systems such as clusters and Grids", uses rrdtool (round robin database tool) to view statistical data at different granularities.
    One feature of Ganglia is to provide trending information (as opposed to simple alerting) to support capacity planning.
    Ganglia consists of a statistics gathering agent (gmond) running on every client. These agents are grouped in clusters, sharing all information within the cluster to ensure failover capabilities. The statistics aggregation agents (gmetad) run on dedicated management servers, reporting to an Apache web frontend.
    Brad has defined a CIM model and implemented CIM providers to access the data. It's basically rrdtool access, thereby drastically reducing the amount of data transported over CIM.

  • CIM Policy Language

  • This was a report from the DMTF policy working group defining CIM-SPL.
    SPL, the Simplified Policy Language, defines more than 100 operators to express relations (examples given: os 'runsOn' host, os 'hasA' firewall) and actions (updates of CIM properties, execution of CIM methods).
    There is a CLI tool and an Eclipse plugin for developing and testing policies. The Apache Imperius project is about to release a sample implementation. Similar plans exist for the Pegasus CIMOM.

  • Nagios through CIM

  • This was another example of bringing open source, but non-standard implementations and CIM together.
    Nagios is a very popular monitoring and alerting framework. It comes with a rich set of data gathering plugins, available on
    Intel has developed an adapter layer to expose Nagios data through CIM. One can also mix a traditional CIM provider with a Nagios plugin, filling only particular properties from the plugin.
    The source code is not available publicly (yet...).

  • Cimple and Brevity

  • Cimple and Brevity are code generator tools making it easier to develop CIM providers and tools. Cimple is a CIM provider generator. It takes a CIM class description (MOF file) as input and generates stubs for a CMPI provider. This way, a developer does not have to fight with the provider API but can concentrate on the instrumentation part. [The amount of code generated is still huge. For SLE11, Python providers are the better choice for most cases.]
    Brevity tries to ease writing client tools. For people developing in C or C++, Brevity is worth a look.
    [For modern scripting languages, better bindings exist. E.g powerCIM for Python and rwscim for Ruby.]

  • Management Frameworks

  • This talk was meant as a call for help to collaborate on a client framework standard. There are sufficient standards and implementations for instrumenting managed devices. But on the management application side, everyone reinvents the wheel.
    Mergers drive this on the side of traditional (closed source) vendors, who otherwise end up with lots of different APIs.
    The proposed 'integrated framework and repository for end-to-end device view' consists of an 'agent tier' (instrumentation), a 'service tier' (see below) and an 'application tier' (API for management applications).
    Services can be divided into infrastructure (discovery, collectors (caching), notifications) and core services (data model, topology, policy, scheduling, security, framework service management, domain specific services).
    This is ongoing work sponsored by Sun Microsystems looking for further participation.

  • openwsman

  • Openwsman is an open source implementation of the WS-Management and WS-CIM protocol standards. It's currently at version 1.5.1, with 1.6.0 scheduled for the end of the year and 2.0 for the end of March '08.
    It consists of a generic library, a client library, and a server library and daemon. The daemon can be used in parallel to existing CIMOM implementations, translating between WS-CIM and CIM/XML. The mod_wsman plugin for Apache provides coexistence of WS-Management and the Apache web server on the same port.
    Main features for next year's 2.0 release are
    • full compliance to the specification (The current WS-Management specification is still not final)

    • WS-Eventing (asynchronous indications, for alerting etc.)

    • A binary interface to sfcb (to connect to cim providers without a cimom)

    • better support for embedded devices

    • Filtering (CQL, the CIM query language; WQL, the WS-Management query language; XPath, the XML query language)

Wednesday, December 05, 2007

Mapping the IT Universe

The annual Management Developers Conference organized by the DMTF started yesterday with the University Day.

DMTF (Distributed Management Task Force) is an industry organization leading the development, adoption and promotion of interoperable management standards and initiatives. Its mission is no less than Mapping the IT Universe by standardizing an object-oriented model (CIM) and related protocols (WBEM).

The conference was opened by a reception celebrating 15 years of DMTF and 10 years of CIM. Winston Bumpus gave a short overview on the history of the DMTF.

The DMTF was founded in 1992 as the Desktop Management Task Force, focussing on standards for managing desktop PCs. Two years later, the Desktop Management Interface (DMI) was published and quickly adopted. After releasing DMI 2.0 in August 1996, their mission was accomplished and the board considered closing the DMTF.

At that point, Patrick Thompson from Microsoft proposed to extend the management standardization beyond desktops and to cover the complete IT landscape. The original proposal already contained the key aspects and architectural components which are still valid today:

  • HMMS (Hypermedia Management Schema) — CIM today

  • HMOM (Hypermedia Object Manager) — CIMOM today

  • HMMP (Hypermedia Management Protocol) — CIM/XML over HTTP today

Initially, a gang of five, namely BMC, Compaq, Intel, Microsoft and Sun, accepted the proposal and continued funding the DMTF. In a tour de force with biweekly meetings over a period of six months, the DMTF was able to present the Common Information Model 1.0 (CIM) in April 1997. It only covered the object-oriented modelling without any transport protocol. That was added a year later (August 1998) with the Web-Based Enterprise Management (WBEM) standard.

In 1999, the DMTF was renamed to Distributed Management Task Force, keeping the acronym (and all the advertising materials).

Today more than 200 companies with over 4000 participants contribute to the ongoing standardization efforts. In the 'Industry Showcase' and 'Interop Lab' rooms of the Conference, a wide variety of devices, tools and applications based on CIM are shown.

With the broad acceptance of Web Services for Management (WS-Management), truly interoperable systems management now becomes a reality. Implementations range from baseboard management controllers (see here for drivers) and embedded devices to open source stacks and Microsoft Windows.

Monday, December 03, 2007

Memories from the past

I am in the heart of Silicon Valley visiting the Management Developers Conference which starts on Monday. More on that in a later post.

The first day I visited the Computer History Museum (CHM) with its marvelous collection of historic computers and parts. The majority of the collection is stored in the archive, vacuum-sealed and wrapped in plastic, preserved for future generations. Only a small fraction of the artifacts is on display, dubbed visible storage.

Here one can see parts of the original ENIAC computer, a real IBM System/360, the Apollo Guidance Computer or a ZUSE Z23. Too bad I didn't bring my camera.

What's unique about this museum are the - excuse me - human artifacts. Those guys and gals, still living in Silicon Valley, who designed and hacked the early machines. I really enjoyed a guided tour given by Ray Peck, which was sprinkled with background information and anecdotes. Just wonderful.
Next was a live demonstration of the PDP-1 restoration project. One could see a 1961 computer up and running, demoed by Peter Samson and Lyle Bickley. They both hacked the PDP-1 during their student days at MIT. Peter is the original author of the PDP-1 music program and gave an example of his work. Hilarious!

On my way out, I picked up a free copy of Core, the museum's biannual publication. The article about rescued treasures was most interesting, showing how challenging preserving history can be.

To quote from the museum's flyer: "It's ironic that in an industry so concerned with memory, how quickly we forget."


Monday, August 13, 2007

Look who's sponsoring Ruby

Last weekend saw the Ruby Hoedown conference at Red Hat's Raleigh headquarters, listing Microsoft as a sponsor. Interesting.

For those of you wondering Why Ruby?, look at the conference website.
The Ruby language is growing exponentially, partially because it offers more flexibility than other, more common languages.
Add Sun's support for Ruby last year, the famous Ruby on Rails web development framework and broad platform support, and this language is still HOT.

Friday, July 27, 2007

Metadata as a Service

openSUSE bug 276018 got me thinking about software repositories and data transfer again.

Problem statement

Software distribution in the internet age is moving away from large piles of disks, CDs or DVDs and towards online distribution servers providing software from a package repository. The next version of openSUSE, 10.3, will be distributed as a 1-CD installation with online access to more packages.
Accessing a specific package means the client needs to know what's available and whether a package has dependencies on other packages. This information is kept in a table of contents of the repository, usually referred to as metadata.
First-time access to a repository requires the client to download all metadata. If the repository changes, i.e. packages get version upgrades, large portions of the metadata have to be downloaded again - refreshed.

The EDOS project proposes peer-to-peer networks for distributing repository data.

But how much of this metadata is actually needed? How much bandwidth is wasted by downloading metadata that gets outdated before first use?

And technology moves on. Network speeds rise, available bandwidth explodes, internet access is as common as TV and telephone in more and more households. Internet flat rates and always-on connections will be as normal as electrical power coming from the wall socket in a couple of years. At the same time, CPUs get more powerful and memory prices are in constant decline.

But the client systems can't keep up since customers don't buy a new computer every year. The improvements in computing power, memory, and bandwidth are mostly on the server side.

And this brings me to Metadata as a Service.

Instead of wasting bandwidth for downloading and client computing power for processing the metadata, the repository server can provide a WebService, handling most of the load. Clients only download what they actually need and cache as they feel appropriate.

Client tools for software management are just frontends for the web service. Searching and browsing is handled on the server where load balancing and scaling are well understood and easily handled.
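The idea can be sketched in a few lines of Ruby: the full metadata stays on the server, and clients ask narrow questions instead of downloading the whole table of contents. The service class, package names and fields below are made up for illustration.

```ruby
# Sketch of server-side metadata queries: the repository keeps the full
# metadata; a client only receives the answer to its specific question.
# All names and data here are illustrative.
class MetadataService
  def initialize(metadata)
    # metadata: { package name => { :version => ..., :requires => [...] } }
    @metadata = metadata
  end

  # Searching happens on the server, not over downloaded metadata.
  def search(pattern)
    @metadata.keys.grep(pattern).sort
  end

  # Resolve the dependency closure of one package; the client learns
  # exactly which packages it needs to fetch, nothing more.
  def closure(name, seen = [])
    return seen if seen.include?(name) || !@metadata.key?(name)
    seen << name
    @metadata[name][:requires].each { |dep| closure(dep, seen) }
    seen
  end
end

repo = MetadataService.new(
  'rails' => { :version => '1.2', :requires => ['ruby', 'rake'] },
  'rake'  => { :version => '0.7', :requires => ['ruby'] },
  'ruby'  => { :version => '1.8', :requires => [] }
)
puts repo.search(/^ra/).inspect     # => ["rails", "rake"]
puts repo.closure('rails').inspect  # => ["rails", "ruby", "rake"]
```

Wrapped in an HTTP layer, queries like these are exactly what a metadata web service would expose instead of shipping the full repository index to every client.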

This could even be driven further by doing all the repository management server-side. Clients always talk to the same server which knows the repositories the client wants to access and also tracks software installed on the client. Then upgrade requests can be handled purely by the server, making client profile uploads obsolete. Certainly the way to go for mobile and embedded devices.
Google might offer such a service - knowing all the software installed on a client is certainly valuable data for them.

Just a thought ...

Wednesday, July 18, 2007

Hackweek aftermath

Novell Hackweek left me with a last itch to scratch -- Cornelius' proposal of a Ycp To Ruby translator.

Earlier this year, I already added XML output to yast2-core which came in very handy for this project. Using the REXML stream listener to code the translator was the fun part of a couple of late night hacks.

The result is a complete syntax translator for all YaST client and module code. The generated Ruby code is nicely indented and passes the Ruby syntax checker.

Combined with Duncan's Ruby-YCP bindings, translating YCP to Ruby should be quite useful as we try to provide support for more widespread scripting languages.
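For the curious, the stream listener pattern used by the converter looks roughly like this. The element and attribute names below are invented; the real XML produced by ycpc differs.

```ruby
require 'rexml/document'
require 'rexml/streamlistener'

# Minimal REXML stream listener, the same pattern the converter uses:
# react to tags as the parser walks the XML instead of building a tree.
# The 'symbol' element and 'name' attribute are invented for this sketch.
class SymbolCollector
  include REXML::StreamListener
  attr_reader :symbols
  def initialize
    @symbols = []
  end
  def tag_start(name, attrs)
    @symbols << attrs['name'] if name == 'symbol'
  end
end

xml = '<block><symbol name="architecture"/><symbol name="has_smp"/></block>'
listener = SymbolCollector.new
REXML::Document.parse_stream(xml, listener)
puts listener.symbols.inspect  # => ["architecture", "has_smp"]
```

The nice thing about the stream approach is that the converter never needs the whole syntax tree in memory; it just emits Ruby as the tags fly by.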

The translator is available at and requires a recent version of yast2-core, which supports XML output and the '-x' parameter of ycpc.
Then run
  ycpc -c -x file.ycp -o file.xml

to convert YCP code to XML.
Now use the xml-ruby translator as
  cd yxmlconv
  ruby src/converter.rb file.xml > file.rb

Translating e.g. /usr/share/YaST2/modules/Arch.ycp

module "Arch";
// local variables
string _architecture = nil;
string _board_compatible = nil;
string _checkgeneration = "";
boolean _has_pcmcia = nil;
boolean _is_laptop = nil;
boolean _is_uml = nil;
boolean _has_smp = nil;
// Xen domain (dom0 or domU)
boolean _is_xen = nil;
// Xen dom0
boolean _is_xen0 = nil;
/* ************************************************************ */
/* system architecture                                          */

/**
 * General architecture type
 */
global string architecture () {
    if (_architecture == nil)
        _architecture = (string)SCR::Read(.probe.architecture);
    return _architecture;
}

outputs the following Ruby code
module Arch
  require 'ycp/SCR'
  _architecture = nil
  _board_compatible = nil
  _checkgeneration = ""
  _has_pcmcia = nil
  _is_laptop = nil
  _is_uml = nil
  _has_smp = nil
  _is_xen = nil
  _is_xen0 = nil

  def architecture(  )
    if ( _architecture == nil ) then
      _architecture = Ycp::Builtin::Read( ".probe.architecture" )
    end
    return _architecture
  end
end
Preserving the comments from the ycp code would be nice -- for next Hackweek.
Btw, it's fairly straightforward to change the translator to output e.g. Python or Java or C# or ...

Tuesday, July 17, 2007

Smolt - Gathering hardware information

LWN pointed me to this mail from the Fedora project inviting other distributions to participate in the Smolt project. Smolt is used to gather hardware data from Linux systems and makes it available for browsing.
They currently have data from approx. 80,000 systems, mostly x86, which hopefully will grow in the future. The device and system statistics are quite interesting to browse. Besides hardware, Smolt also tracks the system language, kernel version, swap size, etc. It also tries to make an educated guess at desktop vs. server vs. laptop - typically a blurred area for Linux systems.

Once they offer an online API for direct access to the smolt server database, this really will be quite useful.

Monday, July 16, 2007

EDOS Project

Michael Schröder's hackweek project is based on using well-known mathematical models for describing and solving package dependencies: Satisfiability - SAT.
Apparently, some research on this topic was done before. The earliest mention of SAT for packaging dependencies I found is a paper by Daniel Burrows from around mid-2005. Daniel is the author of the aptitude package manager and certainly knows the topic of dependency hell inside out.
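To illustrate the encoding, here is a naive brute-force satisfiability check, nothing like the efficient solver being built, but it shows how 'A requires B' and 'A conflicts with B' become boolean constraints over install flags. All package names are made up.

```ruby
# Naive SAT-style dependency resolution: every package is a boolean
# variable (installed or not); requires/conflicts are constraints.
# A real solver is far smarter; this brute-forces all 2^n assignments
# just to show the encoding. Package names are invented.
def solve(packages, wanted, requires, conflicts)
  (0...(1 << packages.size)).each do |bits|
    installed = {}
    packages.each_with_index { |p, i| installed[p] = (bits[i] == 1) }
    next unless wanted.all? { |p| installed[p] }
    next unless requires.all? { |a, b| !installed[a] || installed[b] }
    next unless conflicts.all? { |a, b| !(installed[a] && installed[b]) }
    return packages.select { |p| installed[p] }
  end
  nil  # unsatisfiable: no consistent installation exists
end

packages  = ['aptitude', 'apt', 'dpkg-old', 'dpkg-new']
requires  = [['aptitude', 'apt'], ['apt', 'dpkg-new']]
conflicts = [['dpkg-old', 'dpkg-new']]
puts solve(packages, ['aptitude'], requires, conflicts).inspect
```

Asking for aptitude pulls in apt and dpkg-new; asking for aptitude plus dpkg-old is unsatisfiable because of the conflict, which is exactly the kind of answer a SAT solver gives you for free.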

However, the most interesting link Google revealed, was the one to the EDOS project.
EDOS is short for Environment for the development and Distribution of Open Source software and is funded by the European Commission with 2.2 million euros. The project aims to study and solve problems associated with the production, management and distribution of open source software packages.
Its four main topics of research are:

  • Dependencies With a formal approach to management of software dependencies, it should be possible to manage the complexity of large free and open source package-based software distributions. The project already produced a couple of publications and tools, but I couldn't find links to source code yet.
  • Downloading The problem of huge and frequently changing software repositories might be solvable with P2P distribution of code and binaries.
  • Quality assurance All software projects face the dilemma between release early, release often and system quality. One can either
    • reduce system quality
    • or reduce the number of packages
    • or accept long delays before final release of high quality system
    EDOS wants to develop a testing framework and quality assurance portal to make distribution quality better and measurable.
  • Metrics and Evaluation The decision between old, fewer features, more stable vs. new, more features, more bugs should be better reasoned by defining parameters to characterize distributions, distribution editions and distribution customizations.

Interesting stuff for a lot of distributions out there ...

Monday, July 02, 2007

openwsman-yast now returns proper datatypes

After five days of hacking last week, a final itch was left which needed scratching. The YaST openwsman plugin only passed strings back and forth, losing all the type information present in the YCP result value. So I added some code to convert basic YCP types to XML (in the plugin) and from XML to Ruby (on the client side). Now the result of a web service call to YaST can be processed directly in Ruby. Here's a code example showing the contents of /proc/modules on a remote machine.
require 'rwsman'
require 'yast'
client = Rwsman::Client.new( 'http', '', 8889, '/wsman', 'user', 'password' ) # Client class name assumed
options = Rwsman::ClientOptions.new # ClientOptions class name assumed
schema = YaST::SCHEMA
uri = schema + "/YCP"
options.property_add( "ycp", "{ return SCR::Read( .proc.modules ); }" )
result = client.invoke( uri, "eval", options )
modhash = YaST.decode_result( result ) # hash of { modulename => { size=>1234, used=>3 } }
Supported are void, bool, integer, float, string, symbol, path, term, list, and map -- should be sufficient for most of YaST. The YaST class is here. You need at least version 1.1.0 of openwsman and openwsman-yast, both available on the openSUSE build service. And, btw, source code for openwsman-yast is now hosted on
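The conversion can be pictured as a small type-directed mapping: each YCP value travels as a typed XML element and is turned into the matching Ruby object on the client side. The element names below are invented for the sketch; the actual openwsman-yast encoding differs in detail.

```ruby
require 'rexml/document'

# Sketch of typed decoding: turn a YCP-style XML value tree into native
# Ruby objects. Element names are invented for this illustration; the
# real openwsman-yast wire format is not reproduced here.
def decode(element)
  case element.name
  when 'string'  then element.text.to_s
  when 'integer' then element.text.to_i
  when 'boolean' then element.text == 'true'
  when 'symbol'  then element.text.to_sym
  when 'list'    then element.elements.to_a.map { |e| decode(e) }
  when 'map'     then Hash[*element.elements.to_a.map { |e| decode(e) }]
  end
end

# A map entry as it might look for one line of /proc/modules.
xml = REXML::Document.new(<<EOX)
<map>
  <string>usbcore</string>
  <map><string>size</string><integer>134912</integer></map>
</map>
EOX
puts decode(xml.root).inspect
```

This prints a native Ruby hash, which is the whole point: the caller never sees XML, just the decoded value.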

Thursday, June 28, 2007

Remote management with Rails

The Rails demo for remote systems management with WS-Man is available at the openwsman web site.
Just follow the install and configure instructions. In short you need
  • openwsman
    An open source implementation of the WS-Management standard.
  • rwsman
    Ruby bindings for openwsman client operations.
  • Ruby On Rails
    Web development that doesn't hurt
  • Railsapp
    Rails demo application for rwsman
Once everything is properly installed, start the Rails web server with ruby script/server. Now point your browser to http://localhost:3000 and you'll see the startup page. Click on the text, then click on Discover and the Discovery page will appear.

Look closely at the Actions line for each host and you'll notice the YaST action for the openSUSE client. This client has my openwsman-yast plugin installed.
The demo application allows you to start and stop the desktop (the xdm service, to be precise) and to switch the desktop environment between KDE and GNOME.

Doc has videotaped a demo; you can find it on his blog.

YaST as a WebService

Thanks to openwsman and openSUSE hack week, Linux systems with YaST installed can now be remotely controlled via a WebService.

My idea is now available as a package in the openSUSE build service.

Today I intend to use the openwsman Ruby bindings and their Rails demo application to show true remote management.

Stay tuned ...

Friday, June 22, 2007

A clean start

So, here it is now, my shiny new blog space. But how to start? What to blog about first? Sometimes the small things are the hardest ... But slashdot to the rescue. This post gave me a good idea for a good start.

What does YOUR keyboard look like?

Those of you having a cleaning woman wipe the keyboard once a week can stop reading now. All the others, wanting to get rid of THIS sight, read on! I will show you how to make your keyboard shiny and almost new by putting it into the dishwasher.

Using the dishwasher for keyboard cleaning

The following description is for simple Cherry keyboards; other brands might need a different approach. With the right tools and technique, this should work for any kind of keyboard. Here's a picture of the dirty keyboard I'm going to disassemble.

Putting the complete keyboard into the dishwasher might work, but after all, it's just the keys which need cleaning, not the electronics, cable or key mechanics. To start disassembly, turn it around to get access to the notches holding the case together. The upper and the lower case of the keyboard are held together by a number of L-shaped notches (see picture below for a close-up), which have to be bent aside. (Is notch the right word for this? Maybe a native speaker can come up with a better word.) Go and grab your toolbox and find a flat screwdriver, or use a pair of scissors, as I do. Be careful not to break anything. The Cherry keyboard has four notches on the upper and five on the lower side. There are also three small ones in the middle, but these usually pop open without the need for a tool.

Now the upper side of the keyboard can be lifted to open the case. As you can see, the upper side holds all the keycaps, the underside contains the mechanics and electronics. A lot of dirt usually accumulates on the black rubber mat which is used instead of the coil springs you'll find in older (or more expensive) keyboards. Just take the rubber mat out and clean it with a damp cloth. Below the rubber mat, two plastic sheets with metal layers (forming a capacitor) appear. The plastic sheets are wedged in by the small circuit board in the upper right corner. Further disassembly needs a T8 Torx screwdriver. Removing the plastic sheets reveals a metal plate. This simply gives the keyboard some weight and keeps the underside from breaking if keys are pushed too hard. The metal plate is not fixed to the case and can easily be taken out.

Ready for the dishwasher. Better use the economy setting; this should keep the washing temperature low enough to prevent the plastic from melting. Although normal dishes get dried to 'cupboard ready', water will still be hidden in the keycaps. Simply put the keyboard in a dry place for a couple of hours to let the remaining water evaporate.

Reassembling is easy. Put the metal plate in, then the plastic sheets, screw the electronics back in (ensure that the plastic sheet is below the circuit board), and put the rubber mat on top. Be careful and don't force anything. All pieces have holes and guidance support from the underside of the keyboard case. As the last step, put both sides of the keyboard back together and press gently. You should hear a noticeable 'click' as the notches snap back in. That's it, now enjoy your shiny and almost new keyboard!