IBM Spectrum Conductor for Containers

IBM® Spectrum Conductor for Containers is a server platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image repository, a management console, and monitoring frameworks.

For my upcoming AdminCamp 2017 session later this year, I wanted to put together a nice session about Docker in general, as well as Kubernetes and how to orchestrate containers without creating .yaml files.
IBM® Spectrum Conductor for Containers is installed as part of, and is the foundation for, all the new and upcoming containerized components in Connections 6.

But it is also available as a standalone component. There are other graphical tools that work on top of Docker (and Kubernetes), but I thought it was a good idea to use IBM® Spectrum Conductor for Containers.
The installation is more or less just running a Docker container, then sitting back and waiting. At least, that is what I thought.

After reading through the documentation, I decided to use RHEL 7.2 in a VM on ESXi 6.5. I wanted to document the installation process and all the configuration steps to give attendees step-by-step instructions on how to set up and configure the OS, install additional software like Docker, and finally prepare the configuration for the CfC installer. It is all in the installation guide provided by IBM, but I like to have it in one text file where I just need to copy the commands into the Linux console instead of jumping back and forth in the HTML document.

After configuring the system and tweaking here and there, I tried the install with 1 CPU / 4 GB, resulting in a hang in the middle of the installation process.
The installer does not give you any hint about what went wrong, and the logs are not very helpful either.

The next attempt was 2 CPU / 8 GB. It went a bit further in the installation process, but then hung at a different point. Again, there was no hint from the installer or in the logs.

The final try was 4 CPU / 8 GB. Now the installation finished, and I could open the dashboard.

This stuff is the foundation for Connections Next, and I can live with the CPU / RAM requirements.

If you just want to use Docker with Kubernetes plus one of the other UI tools, then you are fine with a “normal” sized VM ( 1 CPU / 4 GB ). This will also be part of my Docker session at AdminCamp 2017.

Notes FP8 (IF1) might stop your custom sidebar plugins from working

We got a call from one of our customers reporting a defect in our midpoints doc.Store sidebar plugin. It worked in Notes 9.0.1FP7 but stopped working after the upgrade to FP8.

I was able to reproduce it in our development environment. In the error log, we saw the following error message:

at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.ViewEntry.getDocument(Unknown Source)
at de.midpoints.docstore.notes.model.DocStoreDocumentCollectionBuilder.calculateDocumentCollection(Unknown Source)
at de.midpoints.docstore.notes.views.DocStoreView$ Source)
at Source)

I was able to find a fix for this particular issue. But there is also an entry in the German Notes Forum reporting similar defects after the upgrade.

I opened a PMR with IBM. IBM is already aware of the issue. According to IBM support, a fix is supposed to be shipped with FP9.

IBM support also proposed a workaround:

The issue does not occur when using the Notes.jar from the 9.0.1 FP7 release with the 9.0.1 FP8 installation.

Some error messages from the lower levels of the Java stack:

java.lang.ClassCastException: lotus.domino.local.View
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Database.getView(Unknown Source)

java.lang.ClassCastException: lotus.domino.local.Document
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Database.getDocumentByUNID(Unknown Source)

java.lang.ClassCastException: lotus.domino.local.Item
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Document.getItems(Unknown Source)

The issue is tracked by IBM as SPR # RGAUALXF5R / APAR LO92228.

Fun with IBM Traveler and Java

Today I stumbled upon some very strange behaviour of some Java code, and I do not have a clue why.

I am parsing the response (text) file from the “tell traveler show user” command.
The response file is written to the system temp directory and contains all information that you would also see when you invoke the command on the server console. No problem so far.

The response file contains a section that lists all mail file replicas for the user.

IBM Traveler has validated that it can access the database mail/ukrause.nsf.
Monitoring of the database for changes is enabled.
Encrypting, decrypting and signing messages are enabled because the Notes ID is in the mail file or the ID vault.

Canonical Name: CN=Ulrich Krause/O=singultus
Internet Address:
Home Mail Server: CN=serv01/O=singultus
Home Mail File: mail/ukrause.nsf
Current Monitor Server: CN=serv01/O=singultus Release 9.0.1FP8
Current Monitor File: mail/ukrause.nsf
Mail File Replicas:
[CN=serv02/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
[CN=serv01/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none

Notes ID: Mail File contains the Notes ID which was last updated by CN=serv01/O=singultus on Tuesday, June 16, 2015 1:09:16 PM CEST.

If a server for a replica is down or not reachable, the output looks like this:

IBM Traveler has validated that it can access the database mail/ukrause.nsf.
Monitoring of the database for changes is enabled.
Encrypting, decrypting and signing messages are enabled because the Notes ID is in the mail file or the ID vault.

Canonical Name: CN=Ulrich Krause/O=singultus
Internet Address:
Home Mail Server: CN=serv01/O=singultus
Home Mail File: mail/ukrause.nsf
Current Monitor Server: CN=serv01/O=singultus Release 9.0.1FP8
Current Monitor File: mail/ukrause.nsf
Mail File Replicas:
[CN=serv01/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
[CN=serv02/O=singultus, mail/ukrause.nsf] is not reachable, status(0x807) “The server is not responding. The server may be down or you may be experiencing network problems. Contact your system administrator if this problem persists.”.

Notes ID: Mail File contains the Notes ID which was last updated by CN=serv01/O=singultus on Tuesday, June 16, 2015 1:09:16 PM CEST.
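Since the replica lines follow a fixed pattern, the reachability information can be extracted with a regular expression. Here is a sketch (the class and method names are my own; it assumes the bracketed line format shown above):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReplicaLineParser {

	// matches lines like:
	// [CN=serv02/O=singultus, mail/ukrause.nsf] is reachable.
	// [CN=serv02/O=singultus, mail/ukrause.nsf] is not reachable, status(0x807) ...
	private static final Pattern REPLICA = Pattern
			.compile("^\\[(.+?),\\s*(.+?)\\]\\s+is\\s+(not\\s+)?reachable");

	// maps each replica server to true (reachable) or false (not reachable)
	public static Map<String, Boolean> parse(Iterable<String> lines) {
		Map<String, Boolean> result = new LinkedHashMap<String, Boolean>();
		for (String raw : lines) {
			Matcher m = REPLICA.matcher(raw.trim());
			if (m.find()) {
				result.put(, == null);
			}
		}
		return result;
	}
}
```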

Here is the code fragment that I use to parse the response file. I am using a LineIterator.



import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;

public class UserFileParser {

	private String			filename;
	private LineIterator	lineIterator;

	public void process() {
		try {
			lineIterator = FileUtils.lineIterator(new File(filename));

			while (lineIterator.hasNext()) {
				String line = lineIterator.nextLine().trim();
				System.out.println(line);
			}
		} catch (IOException e) {
			e.printStackTrace();
		} finally {
			LineIterator.closeQuietly(lineIterator);
		}
	}
}

The expected behaviour is that the code will print every line inside the response file to the server console. So much for the theory.

BUT … the code behaves differently depending on whether or not the response file contains information about unreachable replicas.
I have tested the code in Eclipse on a Windows 10 client without any issues. The problem only exists on the server when the code is executed from within a DOTS task.

If the response file lists all replicas as reachable, the code works as expected. I can see all lines printed to the console.
If the response file contains information about a replica that is not reachable, the code stops after reading

Current Monitor File: mail/ukrause.nsf

It does not get to

Mail File Replicas:

By the way, it makes no difference if I use any other kind of reader.

I have changed my code to

	public void process() {
		String line = "";
		try {
			br = new BufferedReader(new FileReader(new File(filename)));
			// note: calling trim() before the null check throws an NPE at the end of the stream
			while ((line = br.readLine().trim()) != null) {
				System.out.println(line);
			}

Now I get a NullPointerException, but the code still stops at exactly the same line in the response file. If all replicas are reachable, there is no NPE.

at de.eknori.dots.provider.parser.UserFileParser.process(

I have already examined the two response files for hidden characters and the like, but cannot see anything that would explain this behaviour.

From the data in the response file you can see that I have FP8 (Beta) installed; I have not yet checked with FP7, but I expect the same weirdness.

U P D A T E:

FP7 shows the same behaviour.

I have tried reading the file char by char

Reader reader = new InputStreamReader(new FileInputStream(filename), "UTF-8");
int i;

while ((i = != -1) {
	System.out.print((char) i);
}

and, indeed, there is a -1 value for i in the middle of the file.


So it is no surprise that all readers stop reading at this character.
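One common suspect for this kind of behaviour in Windows-generated text files is a stray control character such as 0x1A, the old DOS end-of-file marker. As a defensive workaround, the raw bytes could be filtered before parsing; here is a sketch (the class name is my own, and that 0x1A is the culprit is an assumption):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class ControlCharFilter {

	// drops control characters (everything below 0x20 except CR, LF and TAB)
	// from the raw bytes before the text is handed to a parser;
	// 0x1A is the old DOS end-of-file marker
	public static String filter(byte[] raw) {
		ByteArrayOutputStream out = new ByteArrayOutputStream(raw.length);
		for (byte b : raw) {
			int c = b & 0xFF;
			if (c >= 0x20 || c == '\r' || c == '\n' || c == '\t') {
				out.write(c);
			}
		}
		return new String(out.toByteArray(), StandardCharsets.UTF_8);
	}
}
```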

Verse on premises – first impression

IBM promised to deliver Verse On Premises ( VoP ) on 30-DEC-2016. And they did.

I tried to download it; all I got was

24 hours later, the download worked.

The “installation” is a mixture of click-and-install and copying files from here to there. I expected something like the IBM Notes Traveler installer.

I had VoP installed as a Beta user, so installation was not a big deal. IBM changed the target location, and you have to uninstall the beta installation first; no surprise there.

There is online documentation on how to install VoP.

After “installing” VoP, I was eager to see how it looks and feels. As I already said, I had BETA 3 of VoP installed.

BETA 3 worked OK so far; not all features were in place, and the design of the calendar and other parts was still iNotes.

So I expected something new in this area.

This is what I saw when I opened VoP:

So the fact is: apart from the issue with showing the content of my mail file (we already had this in Beta 2), the calendar is still iNotes.

Needless to say that this is a bit of a disappointment.

Also, you will never get a complete VoP in the future. For files and contacts, you need a Connections installation. OK, this can be done on premises, but it is a huge overhead. For some features, Watson is needed. And there is no WoP ( Watson on premises ) available. Will there ever be? I doubt it.

VoP is a big disappointment. I have already uninstalled it from my Domino server. iNotes as it is works for me.

At least, IBM delivered something on 30-Dec-2016 as promised.

Domino JNA vs. IBM Domino Java API

Today, I finally found the time to take a closer look at the Domino JNA project. The project was created by Karsten Lehmann of mindoo.

The goal of the project is to provide low-level API methods that can be used in Java to speed up the retrieval of data; especially if you have to deal with lots of data, this can make a significant performance difference.

My scenario is the following:

Read entries from a Notes application into a Java bean and add data from another document in another application to the bean. The number of documents in the source application can be anywhere from 1 to n. I do not “own” the design of the source database, so I cannot modify it.

In this article, I will concentrate on reading the source application in the fastest way possible. I will show how I do it right now, and also how you can do it with Domino JNA.

Here is a small piece of Java to measure the duration of data retrieval.

public class StopWatch {

	private long	startTime	= 0;
	private long	stopTime	= 0;
	private boolean	running		= false;

	public void start() {
		this.startTime = System.nanoTime();
		this.running = true;
	}

	public void stop() {
		this.stopTime = System.nanoTime();
		this.running = false;
	}

	// elapsed time in milliseconds
	public long getElapsedTime() {
		long elapsed;
		if (running) {
			elapsed = (System.nanoTime() - startTime);
		} else {
			elapsed = (stopTime - startTime);
		}
		return elapsed / 1_000_000;
	}

	// elapsed time in seconds
	public long getElapsedTimeSecs() {
		long elapsed;
		if (running) {
			elapsed = (System.nanoTime() - startTime);
		} else {
			elapsed = (stopTime - startTime);
		}
		return elapsed / 1_000_000_000;
	}
}

I am using “fakenames.nsf” in my sample code. You can download the two sample databases fakenames.nsf and fakenames-views.nsf from this URL:

Next, place them in the data folder of your IBM Notes Client.

Here is the code that I use in my application. It uses a ViewNavigator to traverse the view, opens the underlying document for each entry found using entry.getDocument(), and prints the values of some items to the console.

I need to do it this way because I need the contents of some items to identify the document in the other database. Unfortunately, not all the needed values are in the view, so just reading the column values is not an option.

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewNavigator;

public class Domino {

	public static void main(String[] args) {
		try {
			NotesThread.sinitThread();
			StopWatch stopWatch = new StopWatch();
			Session session = NotesFactory.createSession();
			Database dbData = session.getDatabase("", "fakenames.nsf");
			View view = dbData.getView("People");
			ViewNavigator navUsers = null;
			ViewEntry vweUser = null;
			ViewEntry vweTemp = null;
			Document docUser = null;

			navUsers = view.createViewNav();
			navUsers.setEntryOptions(ViewNavigator.VN_ENTRYOPT_NOCOUNTDATA + ViewNavigator.VN_ENTRYOPT_NOCOLUMNVALUES);

			stopWatch.start();
			vweUser = navUsers.getFirst();

			navUsers.setCacheGuidance(Integer.MAX_VALUE, ViewNavigator.VN_CACHEGUIDANCE_READSELECTIVE);

			while (vweUser != null) {
				docUser = vweUser.getDocument();
				System.out.println(
				        docUser.getItemValueString("lastname") + ", " + docUser.getItemValueString("firstname"));
				vweTemp = navUsers.getNext(vweUser);
				vweUser = vweTemp;
			}

			stopWatch.stop();
			System.out.println("Elapsed: " + stopWatch.getElapsedTime() + " ms");
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			NotesThread.stermThread();
		}
	}
}
Now to Domino JNA. Here is the code. First, I get the IDs of all documents in the view, and then I use the result to get the underlying documents and their data.

import java.util.EnumSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.concurrent.Callable;

import com.mindoo.domino.jna.NotesCollection;
import com.mindoo.domino.jna.NotesCollection.EntriesAsListCallback;
import com.mindoo.domino.jna.NotesDatabase;
import com.mindoo.domino.jna.NotesIDTable;
import com.mindoo.domino.jna.NotesNote;
import com.mindoo.domino.jna.NotesViewEntryData;
import com.mindoo.domino.jna.constants.Navigate;
import com.mindoo.domino.jna.constants.OpenNote;
import com.mindoo.domino.jna.constants.ReadMask;
import com.mindoo.domino.jna.gc.NotesGC;

import lotus.domino.NotesException;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class DominoApi {

	public static void main(String[] args) throws NotesException {
		try {
			NotesThread.sinitThread();
			NotesGC.runWithAutoGC(new Callable<Object>() {

				public Object call() throws Exception {
					StopWatch stopWatch = new StopWatch();
					Session session = NotesFactory.createSession();
					NotesDatabase dbData = new NotesDatabase(session, "", "fakenames.nsf");

					NotesCollection colFromDbData = dbData.openCollectionByName("People");

					boolean includeCategoryIds = false;
					LinkedHashSet<Integer> allIds = colFromDbData.getAllIds(includeCategoryIds);
					NotesIDTable selectedList = colFromDbData.getSelectedList();
					selectedList.clear();
					selectedList.addNotes(allIds);

					String startPos = "0";
					int entriesToSkip = 1;
					int entriesToReturn = Integer.MAX_VALUE;
					EnumSet<Navigate> returnNavigator = EnumSet.of(Navigate.NEXT_SELECTED);
					int bufferSize = Integer.MAX_VALUE;
					EnumSet<ReadMask> returnData = EnumSet.of(ReadMask.NOTEID, ReadMask.SUMMARY);

					stopWatch.start();
					List<NotesViewEntryData> selectedEntries = colFromDbData.getAllEntries(startPos, entriesToSkip,
					        returnNavigator, bufferSize, returnData, new EntriesAsListCallback(entriesToReturn));

					for (NotesViewEntryData currEntry : selectedEntries) {
						NotesNote note = dbData.openNoteById(currEntry.getNoteId(), EnumSet.noneOf(OpenNote.class));
						System.out.println(
						        note.getItemValueString("lastname") + ", " + note.getItemValueString("firstname"));
					}

					stopWatch.stop();
					System.out.println("Elapsed: " + stopWatch.getElapsedTime() + " ms");
					return null;
				}
			});
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			NotesThread.stermThread();
		}
	}
}


Now, what do you think? Which code is faster? Domino JNA? Well, not really, at least not in my scenario.

I have done a couple of tests on a local machine for both code samples.

The average time ( from 100 runs each ) for my code to get 40,000 documents from “fakenames.nsf” and print the values of the firstname and lastname items is 9.70 seconds; the average for Domino JNA is 10.65 seconds.

This does not mean that Domino JNA has no advantages over the standard IBM Domino Java API; it depends on the scenario. In my scenario, there is no advantage in using Domino JNA; it would only add complexity and platform dependencies.

If you have read this far, here is an extra for you. I played with the options and found that calling

navUsers.setCacheGuidance(Integer.MAX_VALUE, ViewNavigator.VN_CACHEGUIDANCE_READSELECTIVE);

gives a significant performance boost.


Without setting the cache guidance, the average time to get the data out of the application was 11.40 seconds. I could not see any difference between using VN_CACHEGUIDANCE_READALL and VN_CACHEGUIDANCE_READSELECTIVE.

DAOS – Find a note for a missing NLO file

Today, I saw the following message on my Domino console

[0E88:005A-0FF4] 08.03.2016 17:19:01   The database d:\Domino\data\mail\ukrause.nsf was unable to open or read the file d:\DAOS\0002\97FC43BEED143800A6608E557BE888498DB9BC5100015B7C.nlo: File truncated – file may have been damaged

If you see such a message in your production environment, you should immediately find out

  • what caused the damage?
  • which note does the .nlo belong to?

In my case, the answer to the first one is: anti-virus software.

And here is how I found the answer to the second one.

If not already in place, set


Next, trigger a console command:

tell daosmgr LISTNLO MAP -V mail/ukrause.nsf

This will create a file listNLO.txt in your Domino data directory.

Open the file and search for the .nlo file in question.




You now have the NoteID of the document that holds a ticket for the damaged .nlo file.

Use your favorite tool to find and open the document.


In my case, it was just a spam mail, so no worries. But if this happens in production, you should now go and find the ( hopefully intact ) .nlo in your backup and restore it.

Windows Fixpack Update Idiocy

Today I decided to install some recommended and optional fixes on my “productive” Windows 2008 R2 x64 server.

In general, this has been a straightforward task in the past: select all fixes, click Install, grab a new cup of coffee, restart the machine after the upgrade, and carry on with daily business.

As always, I checked the available free space and found it to be sufficient.


The overall size of all fixes to be installed was ~80 MB, so 1.63 GB should be more than enough.


8 of 10 fixes installed without any issues, but the remaining 2 reported errors. As always, the MS error messages are completely useless.

So I asked Google for advice and found at least one hint on how to install the .NET 4.5.2 update: “Download the 4.5.2 offline installer”.

I did, and ran it locally. A message box popped up, and I could not believe what I saw.


So, the successful install consumed 900 MB for ~70 MB of fixes.

MS should really rethink their upgrade strategy.

And yes, I know. This is ALL so much better on Linux and Mac.



After another fixpack install ( 1.1 MB )


So, where has my free space gone??

Even the removal of features needs free disk space. Insane!


By the way, free disk space is now 0 bytes …

[Vaadin] – Create a simple twin column modal multi-select dialog

Vaadin is an open source web application framework for rich Internet applications. In contrast to JavaScript libraries and browser-plugin-based solutions, it features a server-side architecture, which means that the majority of the logic runs on the server. Ajax technology is used on the browser side to ensure a rich and interactive user experience. On the client side, Vaadin is built on top of, and can be extended with, the Google Web Toolkit.

To let the user interact and enter or change data, you can use simple text fields. But to keep data consistent, sometimes you want to make only specific values available and let the user select one or more of them.

Here is a small sample of a twin-column modal select dialog box. It uses only Vaadin’s basic features, so no third-party plugins are needed.

Here is how it looks:


And here is the source code:

package org.bluemix.challenge;

import javax.servlet.annotation.WebServlet;

import com.vaadin.annotations.Theme;
import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.annotations.Widgetset;
import com.vaadin.data.Property.ValueChangeEvent;
import com.vaadin.data.Property.ValueChangeListener;
import com.vaadin.event.ShortcutAction;
import com.vaadin.server.VaadinRequest;
import com.vaadin.server.VaadinServlet;
import com.vaadin.ui.Button;
import com.vaadin.ui.Button.ClickEvent;
import com.vaadin.ui.FormLayout;
import com.vaadin.ui.TwinColSelect;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;
import com.vaadin.ui.Window;
import com.vaadin.ui.Window.CloseEvent;
import com.vaadin.ui.Window.CloseListener;

public class MyUI extends UI {
	private static final int OPTION_COUNT = 6;

	public String selectedOptions = "";

	@Override
	protected void init(VaadinRequest vaadinRequest) {

		final VerticalLayout layout = new VerticalLayout();

		Button button = new Button("Click Me");
		button.addClickListener(new Button.ClickListener() {

			public void buttonClick(ClickEvent event) {

				final Window dialog = new Window("Select one or more ...");
				dialog.setWidth(400.0f, Unit.PIXELS);
				dialog.setModal(true);

				// register esc key as close shortcut
				dialog.setCloseShortcut(ShortcutAction.KeyCode.ESCAPE);

				final TwinColSelect twinColl = new TwinColSelect();

				for (int i = 0; i < OPTION_COUNT; i++) {
					twinColl.addItem(i);
					twinColl.setItemCaption(i, "Option " + i);
				}

				twinColl.setLeftColumnCaption("Available options");
				twinColl.setRightColumnCaption("Selected options");
				twinColl.setWidth(95.0f, Unit.PERCENTAGE);

				twinColl.addValueChangeListener(new ValueChangeListener() {

					public void valueChange(ValueChangeEvent event) {
						selectedOptions = String.valueOf(event.getProperty().getValue());
					}
				});

				final FormLayout dialogContent = new FormLayout();
				dialogContent.addComponent(twinColl);

				dialogContent.addComponent(new Button("Done", new Button.ClickListener() {

					public void buttonClick(ClickEvent event) {
						dialog.close();
					}
				}));

				dialog.addCloseListener(new CloseListener() {
					public void windowClose(CloseEvent e) {
						// print the selected options to the console
						System.out.println("Selected: " + selectedOptions);
					}
				});

				dialog.setContent(dialogContent);
				UI.getCurrent().addWindow(dialog);
			}
		});

		layout.addComponent(button);
		setContent(layout);
	}

	@WebServlet(urlPatterns = "/*", name = "MyUIServlet", asyncSupported = true)
	@VaadinServletConfiguration(ui = MyUI.class, productionMode = false)
	public static class MyUIServlet extends VaadinServlet {
		private static final long serialVersionUID = 452468769467758600L;
	}
}
When the dialog is closed, it prints the selected options to the console.

The dialog can also be closed by hitting the ESC key. In this case, too, the code returns the selected options ( if any ).


This is just a basic sample; you can create a custom component and pass the available options in using a bean, or whatever suits your needs best.


Latest Windows 10 update completely wrecked my dev environment

I do my development with Visual Studio and other tools and programs on a Windows 10 VM. Today, I decided to check for updates and install them, because I had not done any updates for the past 2 months.

In general, these updates do not do any harm to the installed system, but today was different. The update took very long and looked just like the upgrade from Windows 8.1 to Windows 10: a couple of restarts, hints that all my data is still where I put it, and other nice hints that nothing special will happen.

Half an hour later, the system showed the logon screen. After login, the desktop was empty, and an error message appeared telling me that the system could no longer access my shared MacBook drive. The network settings were completely overwritten, all system environment variables were gone, and Visual Studio no longer had any clue where to find my libraries …

So I had to go back to the last working Windows version. To my surprise, this worked, and it was quick: it took only 5 minutes to restore the prior version.


All my tools and drives are back.

Telekom, O2, and me

Whatever turf war the various telephone providers in Germany are fighting among themselves, the customer is the one who suffers.

On Friday, the Telekom technician finally showed up. A good three weeks after ordering an O2 DSL line.
Had I known how this would end, I would not have offered him coffee.
After a good 20 minutes, he shuffled off again with nothing accomplished.

Nope, there is nothing he can do; O2 first has to “perform work on the terminating line”.

Translated, that means: “If I connect two wires in the basement to strip 1, terminals 10a:10b, then O2 has to make sure that the patching in the sub-distribution panel on the first floor is set up so that the router in the office gets a signal.”

Strip 1, 10a:10b is just an example that I picked myself in my boundless naivety.

The Telekom technician did not know EXACTLY where to terminate the signal-carrying wires.

A call to O2 support with the question “Now what??” Answer: “Call the construction crew; a new terminating line has to be laid. According to the Telekom technician, it is defective.”

My objection that somebody here must be out of their mind was heard, but neither confirmed nor denied.

Today, Mister O2 himself was on site and performed the necessary “work on the terminating line”.
I learned from the debacle with the Telekom technician and offered no coffee.
The upshot: still no DSL, because “the line has not been activated yet”.

Meaning: O2 terminated the line on one side of strip 1, 10a:10b, but the Telekom wire is still hanging in the room, cheerful and signal-carrying.

O2 told me, unprompted, that the Telekom technicians like to do it that way when you have an O2 contract instead of a Telekom one.

Another call to support asking for an update merely revealed that my request would be forwarded to Telekom, and that someone might then condescend to give me a new appointment.

Even on repeated inquiry, there was no concrete information on who is out of their mind here. But I have already formed my own opinion.

The Telekom technicians’ next gig is on Monday, 07.12.2015.


[Vaadin] – widgetsets ‘com.vaadin.defaultwidgetset’ does not contain implementation for com.vaadin.addon.charts

While working on the IBM Vaadin Challenge, I ran into an issue after adding the charts component to my new project.


I implemented the charts component by adding the following line to my ivy.xml file


and recompiled the widgetset.

Nevertheless, the error message appeared.
If you ( like me ) are new to Vaadin, you might spend some time solving this puzzle. So I thought I’d write a short description of how to fix it.

Go to your src folder and locate the compiled widgetset.


Next, open the file. It contains a line similar to this:

@VaadinServletConfiguration(productionMode = false, ui = ChartUI.class)

Modify the line so that it points to your widgetset ( do not include the ‘.gwt.xml’ part ):

@VaadinServletConfiguration(productionMode = false, ui = ChartUI.class,
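For illustration, the completed line might look like this; the widgetset name org.example.MyWidgetSet is just a placeholder for your own:

```java
@VaadinServletConfiguration(productionMode = false, ui = ChartUI.class,
        widgetset = "org.example.MyWidgetSet")
```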

When you now run the application, it will display your chart.

Build Windows executables on Linux

If you have to build a binary ( .exe, .dll, … ) from source code for Linux and Windows, you need at least one build environment for each operating system.
In today’s world, this is not a big deal, because we can have as many virtual machines as we want, thanks to VMware ( or VirtualBox ).
But this can become a complex environment and might also increase license costs, to name just a few problems.

Many of us use Linux as our primary operating system. So the question is: “Can we use Linux to compile and link Windows executables?”

The concept of targeting a different platform than the one the compiler is running on is not new; it is known as cross-compilation.

Cross-compiling Windows binaries on Linux has several benefits:

  • Reduced operating system complexity.
    On cross-platform projects that are also built on Linux, we can get one less operating system to maintain.
  • Access to Unix build tools.
    Build tools such as make, autoconf, automake and Unix utilities as grep, sed, and cat, to mention a few, become available for use in Windows builds as well. Even though projects such as MSYS port a few of these utilities to Windows, the performance is generally lower, and the versions are older and less supported than the native Unix counterparts. Also, if you already have a build environment set up under Linux, you don’t have to set it up again on Windows, but just use the existing one.
  • Lower license costs.
    As we know, Windows costs in terms of license fees. Building on Linux, developers do not need to have a Windows installation on their machines, but maybe just a central Windows installation for testing purposes.

On a Linux build environment, a gcc that compiles native binaries is usually installed in “/usr/bin”.
Native headers and libraries are in turn found in “/usr/include” and “/usr/lib”, respectively.
We can see that all these directories are rooted in “/usr”.

Any number of cross-compiler environments can be installed on the same system, as long as they are rooted in different directories.

To compile and link a Windows executable on Linux do the following

(1) Go to the MinGW-w64 download page.

For 64Bit, open “Toolchains targetting Win64”, followed by “Automated Builds”, and download a recent version to /tmp
For 32Bit, open “Toolchains targetting Win32”, followed by “Automated Builds”, and download a recent version to /tmp


(2) Create two directories: mkdir /opt/mingw32 and mkdir /opt/mingw64

(3) Unpack the .bz2 files to the corresponding directories

For 64Bit tar xvf mingw-w64-bin_x86_64-linux_20131228.tar.bz2 -C /opt/mingw64
For 32Bit tar xvf mingw-w32-bin_x86_64-linux_20131227.tar.bz2 -C /opt/mingw32

(4) Create a new hello.c file in /tmp and paste the following code into it:

#include <stdio.h>

int main()
{
	printf("Hello World!\n");

	return 0;
}

(5) Next, you can build the Windows binaries using the following commands

For 64Bit /opt/mingw64/bin/x86_64-w64-mingw32-gcc /tmp/hello.c -o /tmp/hello-w64.exe
For 32Bit /opt/mingw32/bin/i686-w64-mingw32-gcc /tmp/hello.c -o /tmp/hello-w32.exe

You now have two Windows binaries that can be copied to a Windows environment.



Well, this is just a simple sample, and for more complex projects you will probably have to do a little more work, but it should give you an idea of how cross-compiling can be implemented.

[C++] – A plain simple sample to write to and read from shared memory

If you have two programs ( or two threads ) running on the same computer, you might need a mechanism to share information between them or transfer values from one program to the other.

One of the possible solutions is “shared memory”. Most of us know shared memory only from server crashes and the like.

Here is a simple sample written in C++ to show how you can use a shared memory object. The sample uses the Boost libraries, which provide a very easy way of managing shared memory objects independently of the underlying operating system.

#include <boost/interprocess/managed_shared_memory.hpp>
#include <iostream>
#include <string>

using namespace boost::interprocess;

int main()
{
	// delete SHM if it exists
	shared_memory_object::remove("my_shm");

	// create a new SHM object and allocate space
	managed_shared_memory managed_shm(open_or_create, "my_shm", 1024);

	// write into SHM
	// Type: int, Name: my_int, Value: 99
	int *i = managed_shm.construct<int>("my_int")(99);
	std::cout << "Write  into shared memory: " << *i << '\n';

	// write into SHM
	// Type: std::string, Name: my_string, Value: "Hello World"
	std::string *sz = managed_shm.construct<std::string>("my_string")("Hello World");
	std::cout << "Write  into shared memory: " << *sz << '\n' << '\n';

	// read INT from SHM
	std::pair<int*, std::size_t> pInt = managed_shm.find<int>("my_int");

	if (pInt.first) {
		std::cout << "Read  from shared memory: " << *pInt.first << '\n';
	} else {
		std::cout << "my_int not found" << '\n';
	}

	// read STRING from SHM
	std::pair<std::string*, std::size_t> pString = managed_shm.find<std::string>("my_string");

	if (pString.first) {
		std::cout << "Read  from shared memory: " << *pString.first << '\n';
	} else {
		std::cout << "my_string not found" << '\n';
	}

	return 0;
}

[How To] – Create your own IBM Notes Splash Screen

Inspired by Thomas Bahn’s post, I started to play with the IBM Notes Start Screen.

My first “creation” was “YellowVerse 9”.


This technote describes what you need to replace the original start screen with your own creation.

It is important that you save your image as a Windows BMP. This is the only format that the IBM Notes client can handle.

To modify the existing splash.bmp image, I’ve used Snagit. But it also works with MS Paint.
And, if you own a more sophisticated graphics program and a graphic tablet, then you have much more possibilities.

The main challenge is to find images that can be made transparent. Although the .bmp format itself does not support transparency, it is possible to add transparent images as an additional layer to the .bmp.

For your convenience, I have added image templates that can be used as a starting point.

You can also build your very own splash screen. The basic .bmp image needs to be 650x503px. But if you really want to do it from scratch, you probably need more than just a simple graphics program.

Here is, what I did with SnagIt.


A couple of people asked on Twitter and other social media channels for the already posted splash screens. You can download them here.


This is nothing that enhances productivity or even a new way to work. It’s a time killer, but fun …

Update: here are some more …




Download additional files

VMWare Workstation – Unable to open kernel device “.\Global\vmx86” : The system cannot find the file specified

I recently upgraded my VMWare Workstation from Version 10 to 12. The software is running on Windows 10/64.

I never had any issues with VMWare Workstation 10 on Windows 7, 8 and 8.1. But after the upgrade, I saw the following error message after almost every restart when I tried to start a VM:


There are several Google search results, even for older versions. Here is the most recent one that addresses the issue and provides a ( non-working ) workaround.

I uninstalled, rebooted, installed the software as advised in the technote. After several restarts it seemed to work, but the error message returned right after the next system restart.

I then looked at AntiVirus and AntiMalware software as a potential candidate for the trouble. I found a couple of registry entries that had been identified as ‘potentially unwanted’ and quarantined.

I restored them and after a restart I could start the VMs. Problem solved !

Err, not really.

The error message returned this morning … Damn.

Next, I looked into the Event Log. Not really helpful, because it only said that something went wrong, but no further information.

But I could at least see a pattern. Each time the error occurred, it looked like some service was not started because of missing dependencies.

Next, I ran services.msc and found the following.


I tried to start the services manually. Both services started without any errors. And also, I was able to start the VMs.

I am not really sure what causes the service start to fail; looks like some kind of bad timing.

I will now change the service startup from automatic to manual and add some start/stop scripts to my desktop.

I do not use the VMs on a daily basis; so starting the VMWare services manually will also save some system resources.

Speaking at SNoUG

After 2013 & 2014, I again have the honor to speak at SNoUG ( Swiss Notes User Group ) in Zurich on 28-Oct-2015.

My session is titled “Honey, I shrunk the data!”. This session has been held a couple of times before at various user groups, but it seems that there still is a strong interest in this topic.

I will not only cover data and design compression, DAOS and some new compact features; the session also includes all things DBMT.

See you in Zurich!