What is the SIZE of a Docker container?

August 21, 2017 – 9:45 am

I was recently asked whether it is possible to tell the size of a container and, speaking of disk space, what the costs are when running multiple instances of a container.

Let’s take the IBM Domino server from my previous post as an example.

You can get the SIZE of a container with the following command:

# docker ps -as -f "name=901FP9"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
5f37c4d6a826 eknori/domino:domino_9_0_1_FP_9 "/docker-entrypoint.s" 2 hours ago Exited (137) 6 seconds ago 901FP9 0 B (virtual 3.296 GB)

We get a SIZE of 0 B (virtual 3.296 GB) as a result. Virtual size? What is that?

Let me try and explain:
When starting a container, the image that the container is started from is mounted read-only. On top of that, a writable layer is mounted, in which any changes made to the container are written.
The read-only layers of an image can be shared between all containers that are started from the same image, whereas the writable layer is unique per container (because you don't want changes made in container "a" to appear in container "b").
Back to the docker ps -s output:

  • The "size" information shows the amount of data (on disk) that is used for the writable layer of each container
  • The "virtual size" is the amount of disk space used for the read-only image data used by the container.

So, with a 0 B container size, it does not make any difference whether we start 1 or 100 containers.
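
You can watch the writable layer grow yourself. A minimal sketch, assuming a throwaway container named size-demo (name and image are just examples):

# start a disposable container
docker run -d --name size-demo centos sleep 3600

# write roughly 100 MB into its writable layer
docker exec size-demo dd if=/dev/zero of=/bigfile bs=1M count=100

# "size" now shows about 105 MB, while the virtual size is unchanged
docker ps -s -f "name=size-demo"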

Be aware that the size shown does not include all the disk space used for a container. Things that are currently not included are:

  1. volumes used by the container
  2. disk space used for the container’s configuration files (hostconfig.json, config.v2.json, hosts, hostname, resolv.conf) – although these files are small
  3. memory written to disk (if swapping is enabled)
  4. checkpoints (if you’re using the experimental checkpoint/restore feature)
  5. disk space used for log files (if you use the json-file logging driver) – which can be quite a bit if your container generates a lot of logs and log rotation (the max-file / max-size logging options) is not configured; see the snippet below
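
For point 5, the log location can be read straight from docker inspect, and log rotation can be configured per container. A sketch, assuming the default json-file logging driver:

# where does Docker keep the log for our container, and how big is it?
du -h $(docker inspect -f '{{ .LogPath }}' 901FP9)

# start a container with log rotation (3 files of at most 10 MB each)
docker run -d --log-opt max-size=10m --log-opt max-file=3 eknori/domino:domino_9_0_1_FP_9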

So, let’s see what we have to add to the 0 B to get the overall size of our container.

We are using a volume "domino_data" for our Domino server. To get some information about this volume (1), type

# docker volume inspect domino_data
[
    {
        "Name": "domino_data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/domino_data/_data",
        "Labels": {},
        "Scope": "local"
    }
]

This gives us the physical location of that volume. Now we can get the size of the volume by summing up the size of all files in it.

# du -hs /var/lib/docker/volumes/domino_data/_data
1.1G /var/lib/docker/volumes/domino_data/_data
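
The mount point can also be extracted with a Go template, which turns the whole check into a one-liner (assuming a Docker version that supports --format on volume inspect):

du -hs $(docker volume inspect -f '{{ .Mountpoint }}' domino_data)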

To get the size of the container configuration (2), we need to find the location for our container.

# ls /var/lib/docker/containers/
5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69

Now we have the long ID of our container. Next, type

# du -hs 5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69/
160K 5f37c4d6a8267246bbaff668b3437f121b0fe375d8319364bf7eb10f50d72c69/
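
Since docker inspect can return the long ID directly, this lookup can be scripted as well (a sketch):

du -hs /var/lib/docker/containers/$(docker inspect -f '{{ .Id }}' 901FP9)/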

Now do the math yourself: x = (0 B + 1.1 GB + 160 KB) * n.

I leave it up to you to find out the other sizes (3 – 5).

Sizes may vary and will change during runtime, but I assume that you got the idea. The important thing to know is that all containers built from the same image (the FROM command in a Dockerfile) share this read-only image, so there is only one copy of it on disk.
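
If you are curious which layers an image consists of and how big each one is, docker history lists them:

docker history eknori/domino:domino_9_0_1_FP_9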

 

Domino on Docker

August 20, 2017 – 9:23 am

IBM recently announced Docker support for Domino. It is supposed to come with FP10 at the end of this year.

Domino, IMHO, is not a microservice, but Domino on Docker has other advantages.

Think about a support person maintaining a product. All he needs to investigate a customer's issue is the data from the customer and a Domino environment that is known to run the application reliably. He can then create a new container from a Docker image, copy the files from the customer into the container, start Domino, and then try to reproduce the issue.

You can also do this with VMs, but Docker images are more flexible. Our supporter might expect that the customer uses a specific version of Linux for the Domino server installation. But it turns out that he uses the latest build of the Linux OS. You would need to set up a new VM with the Linux version that is needed, install and configure Domino, etc. A waste of time and resources. Using Docker, this is just one change in a Dockerfile.

I will be speaking about Docker at AdminCamp 2017 in September, about Docker in general and also about Domino on Docker. In this blog post, I want to show how easy it is to create a Domino image (optionally with a fixpack) and then build and run a Docker container from the image.

I assume that you already have Docker installed on a host. I am using RHEL 7 as the host OS for Docker.

Let us start with the basic Domino 9.0.1 image. I am using the excellent start scripts for Domino by Daniel Nashed. If you run Domino on Linux and you do not already have the scripts, get and use them.

First of all, create a new directory on your host. This directory will be used to store the needed Dockerfiles. You can also download the files and use them.

All Domino installation files should be accessible from a web server. Replace the YOUR_HOST placeholder with your web server's hostname.
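
If you do not have a web server at hand, any ad-hoc file server will do. A minimal sketch, assuming Python 2 on the host and a hypothetical /srv/domino-install directory holding the installation tar files:

cd /srv/domino-install
python -m SimpleHTTPServer 80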

Here is the Dockerfile for the Domino 9.0.1 basic installation.

FROM centos
 
ENV DOM_SCR=resources/initscripts 
ENV DOM_CONF=resources/serverconfig 
ENV NUI_NOTESDIR /opt/ibm/domino/
 
RUN yum update -y && \
    yum install -y which && \
    yum install -y wget && \
    yum install -y perl && \
    useradd -ms /bin/bash notes && \
    usermod -aG notes notes && \
    usermod -d /local/notesdata notes && \
    sed -i '$d' /etc/security/limits.conf && \
    echo 'notes soft nofile 60000' >> /etc/security/limits.conf && \
    echo 'notes hard nofile 80000' >> /etc/security/limits.conf && \
    echo '# End of file' >> /etc/security/limits.conf
 
COPY ${DOM_CONF}/ /tmp/sw-repo/serverconfig
 
RUN mkdir -p /tmp/sw-repo/ && \
    cd /tmp/sw-repo/ && \
    wget -q http://YOUR_HOST/DOMINO_9.0.1_64_BIT_LIN_XS_EN.tar && \
    tar -xf DOMINO_9.0.1_64_BIT_LIN_XS_EN.tar && \
    cd /tmp/sw-repo/linux64/domino && \
    /bin/bash -c "./install -silent -options /tmp/sw-repo/serverconfig/domino901_response.dat" && \
    cd / && \
    rm /tmp/* -R
 
RUN mkdir -p /etc/sysconfig/
COPY ${DOM_SCR}/rc_domino /etc/init.d/
RUN chmod u+x /etc/init.d/rc_domino && \
    chown root.root /etc/init.d/rc_domino
COPY ${DOM_SCR}/rc_domino_script /opt/ibm/domino/
RUN chmod u+x /opt/ibm/domino/rc_domino_script && \
    chown notes.notes /opt/ibm/domino/rc_domino_script
COPY ${DOM_SCR}/rc_domino_config_notes /etc/sysconfig/

We install Domino on the latest CentOS build; if you want to use a specific CentOS release, change the first line in the Dockerfile and add the release tag (e.g. FROM centos:7).

You can see that a lot of commands have been combined into one RUN statement. Doing it this way, you keep the image size a bit smaller: each RUN command creates an extra layer, and files that are deleted in a later layer still take up space in the layer where they were created.
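
A minimal illustration of the difference, reusing the YOUR_HOST placeholder from above (sketch only):

# Two RUN statements – two layers; the tar file deleted in the second
# layer still occupies space in the first one
RUN wget -q http://YOUR_HOST/install.tar
RUN tar -xf install.tar && rm install.tar

# One RUN statement – one layer; the tar file never ends up in the image
RUN wget -q http://YOUR_HOST/install.tar && \
    tar -xf install.tar && \
    rm install.tar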

So, in the first part, we update the CentOS image from the Docker repository with the latest fixes and also install additional packages that we need for the Domino installation.
Next, we copy our response.dat file and the start scripts to our image.
Now we download the Domino 9.0.1 installation package, unpack it and do a silent installation using our response.dat file for configuration.
The last part installs the start script files, assigns user and group ownership, and grants execute permissions.

All temporary files are also deleted.

Now we can create an image from the Dockerfile.

docker build -t eknori/domino:9_0_1 -f Dockerfile .

This will take about 10 – 15 minutes. When the build is completed, we can list our image

# docker images

eknori/domino 9_0_1 96b6220d177c 14 hours ago 1.883 GB

Next, we will use this image and install FP9. If you need some other fixpack, tweak the Dockerfile to your own needs. Once you are familiar with Docker, this is easy.

FROM eknori/domino:9_0_1
 
ENV DOM_CONF=resources/serverconfig
ENV NUI_NOTESDIR /opt/ibm/domino/
 
COPY ${DOM_CONF}/ /tmp/sw-repo/serverconfig
 
RUN mkdir -p /tmp/sw-repo/ && \
    cd /tmp/sw-repo/ && \
    wget -q http://YOUR_HOST/domino901FP9_linux64_x86.tar && \
    tar -xf domino901FP9_linux64_x86.tar && \
    cd /tmp/sw-repo/linux64/domino && \
    /bin/bash -c "./install -script /tmp/sw-repo/serverconfig/domino901_fp9_response.dat" && \
    cd / && \
    rm /tmp/* -R && \
    rm /opt/ibm/domino/notes/90010/linux/90010/* -R

A much shorter Dockerfile, as we have already installed Domino and can now reuse the 9_0_1 image as the base image for our 9_0_1_FP_9 image.

The last line in the RUN command removes the uninstall information. Maybe this can also be done via the response.dat file, but you should do it anyway, as we do not need the backup files.

Again, build the new image.

docker build -t eknori/domino:9_0_1_FP_9 -f Dockerfile .

# docker images
eknori/domino 9_0_1_FP_9 ed0276f21d73 14 hours ago 3.296 GB

Now we can build our final Domino 9.0.1 FP9 image from the 9_0_1_FP_9 image.

Our Dockerfile looks like this

FROM eknori/domino:9_0_1_FP_9
 
EXPOSE 25 80 443 1352
 
COPY resources/docker-entrypoint.sh /
RUN chmod 775 /docker-entrypoint.sh
 
USER notes
WORKDIR /local/notesdata
ENV LOGNAME=notes
ENV PATH=$PATH:/opt/ibm/domino/
 
ENTRYPOINT ["/docker-entrypoint.sh"]

and the file used in the ENTRYPOINT command contains the following

#!/bin/bash
 
serverID=/local/notesdata/server.id
 
if [ ! -f "$serverID" ]; then
    /opt/ibm/domino/bin/server -listen 1352
else
    /opt/ibm/domino/rc_domino_script start
    /bin/bash
fi

The ENTRYPOINT is executed when the container starts. The script simply checks whether the server is already configured. If it does not find a server.id, it starts the server in listen mode; if it does, it starts the server normally.

Let us build our final image.

docker build -t eknori/domino:domino_9_0_1_FP_9 -f Dockerfile .

# docker images
eknori/domino domino_9_0_1_FP_9 1fae2fe73df4 2 hours ago 3.296 GB

Now we are ready to create a container from the image. But we need one additional step: all changes that we make in a container will be lost once the container is removed. So we need to create a persistent data store and attach it to the container.

To create a persistent volume, type

docker volume create --name=domino_data

And then type

docker run -it -p 1352:1352 -p 8888:80 -p 8443:443 --name 901FP9 -v domino_data:/local/notesdata eknori/domino:domino_9_0_1_FP_9

to create and run the container. I have used port 1352 instead of 8585 to avoid opening another port on the host system.
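
To double-check that the volume is really attached, docker inspect can print the container's mounts (assuming a Docker version that exposes the .Mounts field):

docker inspect -f '{{ .Mounts }}' 901FP9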

When the container starts, the ENTRYPOINT will start the Domino server in listen mode. You can now set up your server using the remote setup tool.

After you have set up your server, close the remote setup and stop Domino. This will also stop your container.

You can start and get access to the container with

docker start 901FP9
docker attach 901FP9
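
A detail worth knowing about docker attach: pressing Ctrl-C in the attached session may stop the server, so detach with the escape sequence instead:

# detach again without stopping the container: press Ctrl-p, then Ctrl-q
# the sequence can be overridden per attach (Docker 1.10+):
docker attach --detach-keys="ctrl-x" 901FP9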

This gives you great flexibility. Once FP10 is in the wild, create a new image from the 9_0_1 image and install FP10. Then create a new image for your final Domino installation. Run this image and attach the persistent volume.

[Docker CLI] – Delete Containers

May 31, 2017 – 4:35 pm

If you want to delete ALL containers, running or exited, you can do this with just two commands.

docker stop $(docker ps -a -q)

docker rm $(docker ps -a -q)

If you only want to delete containers that have ‘exited’, then use:

docker ps -a | grep Exited | cut -d ' ' -f 1 | xargs docker rm
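
docker ps can also filter by status itself, which avoids the grep/cut pipeline:

docker rm $(docker ps -a -q -f status=exited)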

IBM Spectrum Conductor for Containers

May 30, 2017 – 1:13 pm

IBM® Spectrum Conductor for Containers is a server platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image repository, a management console, and monitoring frameworks.

For my upcoming AdminCamp 2017 session later this year, I wanted to put together a nice session about Docker in general as well as Kubernetes and how to orchestrate containers without creating .yaml files.
IBM® Spectrum Conductor for Containers is installed as part of orient.me and is the foundation for all the new and upcoming containerized stuff in Connections 6.

But it is also available as a standalone component. There are also other graphical tools that work on top of Docker (and Kubernetes), but I thought it would be a good idea to use IBM® Spectrum Conductor for Containers.
The installation is more or less just running a Docker container and then sitting back and waiting. At least, that is what I thought.

After reading through the documentation, I decided to use RHEL 7.2 in a VM on ESXi 6.5. I wanted to document the installation process and all the configuration to give attendees step-by-step instructions on how to set up and configure the OS, install additional software like Docker, and finally prepare the configuration for the CfC installer. It is all in the installation guide provided by IBM, but I like to have it in one text file where I just need to copy the commands into the Linux console instead of jumping back and forth in the HTML document.

After configuring the system and tweaking here and there, I tried the install with 1 CPU / 4 GB, resulting in a hang in the middle of the installation process.
The installer does not give you any hint about what went wrong, and the logs are not very helpful either.

The next attempt was 2 CPU / 8 GB. It went a bit further in the installation process but then hung at a different point. Again, no hint from the installer or in the logs.

Final try was 4 CPU / 8 GB. Now the installation finished and I could open the dashboard.

This stuff is the foundation for Connections Next and I can live with the requirements regarding CPU / RAM.

If you just want to use Docker with Kubernetes plus one of the other UI tools, then you are good with a “normal” sized VM ( 1 CPU / 4 GB ). This will also be part of my Docker session at AdminCamp 2017.

Notes FP8 (IF1) might stop your custom sidebar plugins from working

May 11, 2017 – 11:39 am

We got a call from one of our customers reporting a defect in our midpoints doc.Store sidebar plugin. It worked in Notes 9.0.1FP7 but stopped working after the upgrade to FP8.

I was able to reproduce the issue in our development environment. In the error log, we saw the following error message:

at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.ViewEntry.getDocument(Unknown Source)
at de.midpoints.docstore.notes.model.DocStoreDocumentCollectionBuilder.calculateDocumentCollection(Unknown Source)
at de.midpoints.docstore.notes.views.DocStoreView$32.run(Unknown Source)
at org.eclipse.core.internal.jobs.Worker.run(Unknown Source)

I was able to find a fix for this particular issue. But there is also an entry in the German Notes Forum reporting similar defects after the upgrade.

I opened a PMR with IBM. IBM is already aware of the issue. According to IBM support, a fix is supposed to be shipped with FP9.

IBM support also proposed a workaround:

The issue does not occur when using the Notes.jar from the 9.0.1FP7 installation with the 9.0.1FP8 installation.

Some error messages from the lower levels of the Java stack:

java.lang.ClassCastException: lotus.domino.local.View
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Database.getView(Unknown Source)
—–

java.lang.ClassCastException: lotus.domino.local.Document
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Database.getDocumentByUNID(Unknown
Source)
—————

java.lang.ClassCastException: lotus.domino.local.Item
incompatible with lotus.domino.local.Session
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Session.FindOrCreate(Unknown Source)
at lotus.domino.local.Document.getItems(Unknown Source)

The issue is tracked at IBM under SPR # RGAUALXF5R / APAR LO92228

Install Docker On RHEL 7 in 3 Easy Steps

March 31, 2017 – 12:42 pm

You are just 3 steps away from running Docker containers on RHEL 7.

  1. curl -sSL https://get.docker.com/ | sh
  2. systemctl enable docker
  3. systemctl start docker

Now you can test your installation by running

  • docker run hello-world

That’s it.

 

C / C++ for Notes & Domino Developers

March 28, 2017 – 7:41 am

C / C++ for Notes & Domino Developers, by Ulrich Krause

Fun with IBM Traveler and Java

February 5, 2017 – 9:57 am

Today I stumbled upon a very strange behaviour of some Java code, and I do not have any clue as to why.

I am parsing the response (text) file from the "tell traveler show user" command.
The response file is written to the system temp directory and contains all the information that you would also see when you invoke the command on the server console. No problem so far.

The response file contains a section that lists all mail file replicas for the user.

IBM Traveler has validated that it can access the database mail/ukrause.nsf.
Monitoring of the database for changes is enabled.
Encrypting, decrypting and signing messages are enabled because the Notes ID is in the mail file or the ID vault.

Canonical Name: CN=Ulrich Krause/O=singultus
Internet Address: ulrich.krause@eknori.de
Home Mail Server: CN=serv01/O=singultus
Home Mail File: mail/ukrause.nsf
Current Monitor Server: CN=serv01/O=singultus Release 9.0.1FP8
Current Monitor File: mail/ukrause.nsf
Mail File Replicas:
[CN=serv02/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
[CN=serv01/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none

Notes ID: Mail File contains the Notes ID which was last updated by CN=serv01/O=singultus on Tuesday, June 16, 2015 1:09:16 PM CEST.

If a server for a replica is down or not reachable, the output looks like this:

IBM Traveler has validated that it can access the database mail/ukrause.nsf.
Monitoring of the database for changes is enabled.
Encrypting, decrypting and signing messages are enabled because the Notes ID is in the mail file or the ID vault.

Canonical Name: CN=Ulrich Krause/O=singultus
Internet Address: ulrich.krause@eknori.de
Home Mail Server: CN=serv01/O=singultus
Home Mail File: mail/ukrause.nsf
Current Monitor Server: CN=serv01/O=singultus Release 9.0.1FP8
Current Monitor File: mail/ukrause.nsf
Mail File Replicas:
[CN=serv01/O=singultus, mail/ukrause.nsf] is reachable.
ACL for Ulrich Krause/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
ACL for serv01/singultus: Access=Manager Capabilities=create,update,read,delete,copy Missing Capabilities=none
[CN=serv02/O=singultus, mail/ukrause.nsf] is not reachable, status(0x807) “The server is not responding. The server may be down or you may be experiencing network problems. Contact your system administrator if this problem persists.”.

Notes ID: Mail File contains the Notes ID which was last updated by CN=serv01/O=singultus on Tuesday, June 16, 2015 1:09:16 PM CEST.

Here is the code fragment that I use to parse the response file. I am using a LineIterator.

import java.io.File;
 
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;
 
public class UserFileParser {
 
	private String			filename;
	private LineIterator	lineIterator;
 
	public void process() {
		try {
			lineIterator = FileUtils.lineIterator(new File(filename));
 
			while (lineIterator.hasNext()) {
				String line = lineIterator.nextLine().trim();
				System.out.println(line);
			}
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			// make sure the underlying stream is closed
			LineIterator.closeQuietly(lineIterator);
		}
	}
}

The expected behaviour is that the code will print every line inside the response file to the server console. So much for the theory.

BUT … the code behaves differently depending on whether or not the response file contains information about unreachable replicas.
I have tested the code in Eclipse on a Windows 10 client without any issues. The problem only exists on the server, when the code is executed from within a DOTS task.

If the response file lists all replicas as reachable, the code works as expected. I can see all lines printed to the console.
If the response file contains information about a replica that is not reachable, the code stops after reading

Current Monitor File: mail/ukrause.nsf

It does not get to

Mail File Replicas:

By the way, it does not make any difference if I use any other kind of reader.

I have changed my code to

	public void process() {
		String line = "";
		try {
			br = new BufferedReader(new FileReader(new File(filename)));
			// note: readLine() returns null once the stream ends; if the stream
			// ends prematurely, calling trim() on that null result is what
			// throws the NullPointerException shown below
			while ((line = br.readLine().trim()) != null) {
				System.out.println(line);
				lines.add(line);

Now I get a NullPointerException, but the code also stops at exactly the same line in the response file. If all replicas are reachable, there is no NPE.

java.lang.NullPointerException
at de.eknori.dots.provider.parser.UserFileParser.process(UserFileParser.java:65)

I have already investigated the 2 response files for hidden characters and stuff, but cannot see anything that would explain this behaviour.

From the data in the response file you can see that I have FP8 (Beta) installed; I have not yet checked with FP7, but I expect the same weirdness.

U P D A T E:

FP7 shows the same behaviour.

I have tried reading the file char by char

Reader reader = new InputStreamReader(new FileInputStream(filename), "UTF-8");
int i;
 
while ((i = reader.read()) != -1) {
	System.out.println(i);
}

and, indeed, there is a -1 value for i in the middle of the file.

105
99
97
115
58
32
13
10
-1

So, no surprise that all readers stop reading past this character.

Verse on premises – first impression

December 31, 2016 – 7:07 am

IBM promised to deliver Verse On Premises ( VoP ) on 30-DEC-2016. And they did.

I tried to download it; all I got was an error.

24 hours later, the download worked.

The "installation" is a mixture of click-and-install and copying files from here to there. I expected something like the IBM Notes Traveler installer.

I had VoP installed as a beta user, so installation was not a big deal. IBM changed the target location, and you have to uninstall the beta installation first; no surprise there.

There is an online documentation on how to install VoP. See https://www.ibm.com/support/knowledgecenter/SS4RQV_1.0.0/admin/topics/vop_configuring_server.html

After "installing" VoP, I was eager to see how it looks and feels. As I already said, I had BETA 3 of VoP installed.

BETA 3 worked OK so far; not all features were in place, and the design of the calendar and some other parts was still iNotes.

So I expected something new in this area.

So here is what I saw when I opened VoP: apart from the issue with showing the content of my mail file (we already had this in Beta 2), the calendar is still iNotes.

Needless to say, this is a bit of a disappointment.

Also, you will never get the full VoP on premises. For files and contacts, you need a Connections installation. OK, this can be done on premises, but it is a huge overhead. For some features, Watson is needed. And there is no WoP (Watson on premises) available. Will there ever be? I doubt it.

VoP is a big disappointment. I have already uninstalled it from my Domino server. iNotes as it is works for me.

At least, IBM delivered something on 30-Dec-2016 as promised.

IBM Notes and Domino V9.0.1 extends support and enhances its collaboration toolset with social capabilities from IBM Connections V5.5

September 13, 2016 – 8:02 am

Read this for detailed information.

Domino JNA vs. IBM Domino Java API

September 11, 2016 – 12:02 pm

Today, I finally found the time to take a closer look at the Domino JNA project. The project has been created by Karsten Lehmann, Mindoo.

The goal of the project is to provide low-level API methods that can be used in Java to speed up the retrieval of data; especially if you have to deal with lots of data, this can make a significant performance difference.

My scenario is the following:

Read entries from a Notes application into a Java bean and add data from another document in another application to the bean. The number of documents in the source application can be anywhere from 1 to n. I do not "own" the design of the source database, so I cannot modify it.

In this article, I will concentrate on reading the source application in the fastest way possible. I will show how I do it right now and also how you can use Domino JNA.

Here is a small piece of Java to measure the duration of the data retrieval.

public class StopWatch {
 
	private long	startTime	= 0;
	private long	stopTime	= 0;
	private boolean	running		= false;
 
	public void start() {
		this.startTime = System.nanoTime();
		this.running = true;
	}
 
	public void stop() {
		this.stopTime = System.nanoTime();
		this.running = false;
	}
 
	// elapsed time in nanoseconds
	public long getElapsedTime() {
		long elapsed;
		if (running) {
			elapsed = (System.nanoTime() - startTime);
		} else {
			elapsed = (stopTime - startTime);
		}
		return elapsed;
	}
 
	// elapsed time in seconds (nanoseconds / 1,000,000,000)
	public long getElapsedTimeSecs() {
		long elapsed;
		if (running) {
			elapsed = ((System.nanoTime() - startTime) / 1000000000);
		} else {
			elapsed = ((stopTime - startTime) / 1000000000);
		}
		return elapsed;
	}
}

I am using “fakenames.nsf” in my sample code. You can download the two sample databases fakenames.nsf and fakenames-views.nsf from this URL:

ftp://domino_jna:domino_jna@www2.mindoo.de

Next, place them in the data folder of your IBM Notes Client.

Here is the code that I use in my application. It uses a ViewNavigator to traverse the view. It then opens the underlying document for each entry found using entry.getDocument() and prints the values of some items to the console.

I need to do it this way because I need the contents of some items to identify the document in the other database. Unfortunately, not all the needed values are in the view, so just reading the column values is not an option.

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;
import lotus.domino.View;
import lotus.domino.ViewEntry;
import lotus.domino.ViewNavigator;
 
public class Domino {
 
	public static void main(String[] args) {
		try {
			StopWatch stopWatch = new StopWatch();
			NotesThread.sinitThread();
			Session session = NotesFactory.createSession();
			stopWatch.start();
			Database dbData = session.getDatabase("", "fakenames.nsf");
			View view = dbData.getView("People");
			ViewNavigator navUsers = null;
			ViewEntry vweUser = null;
			ViewEntry vweTemp = null;
			Document docUser = null;
 
			view.setAutoUpdate(false);
			navUsers = view.createViewNav();
			navUsers.setEntryOptions(ViewNavigator.VN_ENTRYOPT_NOCOUNTDATA + ViewNavigator.VN_ENTRYOPT_NOCOLUMNVALUES);
 
			vweUser = navUsers.getFirst();
 
			navUsers.setCacheGuidance(Integer.MAX_VALUE, ViewNavigator.VN_CACHEGUIDANCE_READSELECTIVE);
 
			while (vweUser != null) {
				docUser = vweUser.getDocument();
				System.out.println(
				        docUser.getItemValueString("lastname") + ", " + docUser.getItemValueString("firstname"));
				vweTemp = navUsers.getNext(vweUser);
				docUser.recycle();
				vweUser.recycle();
				vweUser = vweTemp;
			}
			stopWatch.stop();
			System.out.println(stopWatch.getElapsedTimeSecs());
 
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			NotesThread.stermThread();
		}
	}
}

Now to Domino JNA. Here is the code. First, I get all IDs of the documents in the view, and then I use the result to get the underlying documents and the data.

import java.util.EnumSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.concurrent.Callable;
 
import com.mindoo.domino.jna.NotesCollection;
import com.mindoo.domino.jna.NotesCollection.EntriesAsListCallback;
import com.mindoo.domino.jna.NotesDatabase;
import com.mindoo.domino.jna.NotesIDTable;
import com.mindoo.domino.jna.NotesNote;
import com.mindoo.domino.jna.NotesViewEntryData;
import com.mindoo.domino.jna.constants.Navigate;
import com.mindoo.domino.jna.constants.OpenNote;
import com.mindoo.domino.jna.constants.ReadMask;
import com.mindoo.domino.jna.gc.NotesGC;
 
import lotus.domino.NotesException;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;
 
public class DominoApi {
 
	public static void main(String[] args) throws NotesException {
		try {
			NotesGC.runWithAutoGC(new Callable<Object>() {
 
				@Override
				public Object call() throws Exception {
					StopWatch stopWatch = new StopWatch();
					NotesThread.sinitThread();
					Session session = NotesFactory.createSession();
					stopWatch.start();
					NotesDatabase dbData = new NotesDatabase(session, "", "fakenames.nsf");
 
					NotesCollection colFromDbData = dbData.openCollectionByName("People");
 
					boolean includeCategoryIds = false;
					LinkedHashSet<Integer> allIds = colFromDbData.getAllIds(includeCategoryIds);
					NotesIDTable selectedList = colFromDbData.getSelectedList();
					selectedList.clear();
					selectedList.addNotes(allIds);
					String startPos = "0";
					int entriesToSkip = 1;
					int entriesToReturn = Integer.MAX_VALUE;
					EnumSet<Navigate> returnNavigator = EnumSet.of(Navigate.NEXT_SELECTED);
					int bufferSize = Integer.MAX_VALUE;
					EnumSet<ReadMask> returnData = EnumSet.of(ReadMask.NOTEID, ReadMask.SUMMARY);
 
					List<NotesViewEntryData> selectedEntries = colFromDbData.getAllEntries(startPos, entriesToSkip,
			                returnNavigator, bufferSize, returnData, new EntriesAsListCallback(entriesToReturn));
 
					for (NotesViewEntryData currEntry : selectedEntries) {
						NotesNote note = dbData.openNoteById(currEntry.getNoteId(), EnumSet.noneOf(OpenNote.class));
 
						System.out.println(
			                    note.getItemValueString("lastname") + ", " + note.getItemValueString("firstname"));
						note.recycle();
					}
					stopWatch.stop();
					System.out.println(stopWatch.getElapsedTimeSecs());
					return null;
				}
			});
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			NotesThread.stermThread();
		}
 
	}
 
}

Now, what do you think? Which code is faster? Domino JNA? Well, not really in my scenario.

I have done a couple of tests on a local machine for both code samples.

The average time (from 100 runs each) for my code to get 40,000 documents from the "fakenames.nsf" and to print the values of the firstname and lastname items is 9.70 seconds; the average for Domino JNA is 10.65 seconds.

This does not mean that Domino JNA does not have any advantages over the standard IBM Domino Java API; it depends on the scenario. In my scenario, there is no advantage in using Domino JNA; it would only add complexity and platform dependencies.

If you have read this far, here is an extra for you. I played with the options and found that

navUsers.setCacheGuidance(Integer.MAX_VALUE, ViewNavigator.VN_CACHEGUIDANCE_READSELECTIVE);

gives a significant performance boost.

 

Without setting the cache guidance, the average time to get the data out of the application was 11.40 seconds. I could not see any difference when using VN_CACHEGUIDANCE_READALL instead of VN_CACHEGUIDANCE_READSELECTIVE.

ONTF DomBackup 1.0.0.5 released

April 29, 2016 – 4:27 pm

I have uploaded a new version of ONTF DomBackup. Version 1.0.0.5 contains updated binaries for Windows 32/64 Bit.

It addresses an issue that leads to data loss when compressing files larger than 2 GB. The issue is described here. The issue is Windows-only!

The fixed version uses 7-Zip as an external compression tool.

 

DAOS – Find a note for a missing NLO file

March 8, 2016 – 5:35 pm

Today, I saw the following message on my Domino console

[0E88:005A-0FF4] 08.03.2016 17:19:01   The database d:\Domino\data\mail\ukrause.nsf was unable to open or read the file d:\DAOS\0002\97FC43BEED143800A6608E557BE888498DB9BC5100015B7C.nlo: File truncated – file may have been damaged

If you see such a message in your production environment, you should immediately find out:

  • what caused the damage
  • which note the .nlo file belongs to

In my case, the answer to the first one is: anti-virus software.

And here is how I found the answer to the second one.

If not already in place, set

DEBUG_DAOS_DIAGNOSTICS=1

Next, trigger a console command:

tell daosmgr LISTNLO MAP -V mail/ukrause.nsf

This will create a file listNLO.txt in your Domino data directory.

Open the file and search for the .nlo file in question.
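
grep (or findstr on a Windows server, as in my case) does the searching for you; a sketch, with the data directory path adjusted to your installation:

# Linux
grep 97FC43BEED143800A6608E557BE888498DB9BC5100015B7C /local/notesdata/listNLO.txt
# Windows
findstr 97FC43BEED143800A6608E557BE888498DB9BC5100015B7C d:\Domino\data\listNLO.txt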

68258,0X10AA2,69422,0X10F2E,88956,88956,DAOS,97FC43BEED143800A6608E557BE888498DB9BC5100015B7C,0X8,VALID,ö1,97FC43BEED143800A6608E557BE888498DB9BC5100015B7C,[0:1],97FC43BEED143800A6608E557BE888498DB9BC5100015B7C,[0:1],Shared,1

You now have the noteId of the document that has a ticket to the damaged .nlo.

Use your favorite tool to find and open the document.


In my case, it was just a spam mail, so no worries. But if this happens in production, you should now go and find the (hopefully intact) .nlo in your backup and restore it.

Windows Fixpack Update Idiocy

February 28, 2016 – 9:12 am

Today I decided to install some recommended and optional fixes on my “productive” Windows 2008 R2 /64 Server.

In general, this has been a straightforward task in the past: select all fixes and click Install, grab a new cup of coffee, restart the machine after the upgrade, and carry on with daily business.

As always, I looked for the available free space and found it to be sufficient.


The overall size of all fixes to be installed was ~80 MB, so 1.63 GB of free space should be more than enough.


8 of 10 fixes installed without any issues, but the remaining 2 reported errors. As always, the MS error messages are completely useless.

So I asked Google for advice and found at least one hint on how to install the .NET 4.5.2 update: "Download the 4.5.2 offline installer".

I did, and ran it locally. A message box popped up, and I could not believe what I saw.


So, the successful install consumed 900 MB for ~70 MB of fixes.

MS should really rethink their upgrade strategy.

And yes, I know. This is ALL so much better on Linux and Mac.

 

UPDATE:

After another fixpack install ( 1.1 MB )


So, where has my free space gone??

Even the removal of features needs free disk space. Insane!!


By the way: free disk space is now 0 bytes …

[Vaadin] – Create a simple twin column modal multi-select dialog

January 30, 2016 – 10:29 am

Vaadin is an open source Web application framework for rich Internet applications. In contrast to JavaScript libraries and browser-plugin based solutions, it features a server-side architecture, which means that the majority of the logic runs on the servers. Ajax technology is used at the browser-side to ensure a rich and interactive user experience. On the client-side Vaadin is built on top of and can be extended with Google Web Toolkit.

To let the user interact and enter or change data, you can use simple text fields. But to keep data consistent, you sometimes want to make only specific values available and let the user select one or more of them.

Here is a small sample of a twin column modal select dialog box. It uses only Vaadin's basic features, so no 3rd party plugins are needed.


Here is the source code:

package org.bluemix.challenge;
 
import javax.servlet.annotation.WebServlet;
 
import com.vaadin.annotations.Theme;
import com.vaadin.annotations.VaadinServletConfiguration;
import com.vaadin.annotations.Widgetset;
import com.vaadin.data.Property.ValueChangeEvent;
import com.vaadin.data.Property.ValueChangeListener;
import com.vaadin.event.ShortcutAction;
import com.vaadin.server.VaadinRequest;
import com.vaadin.server.VaadinServlet;
import com.vaadin.ui.Button;
import com.vaadin.ui.Button.ClickEvent;
import com.vaadin.ui.FormLayout;
import com.vaadin.ui.TwinColSelect;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;
import com.vaadin.ui.Window;
import com.vaadin.ui.Window.CloseEvent;
import com.vaadin.ui.Window.CloseListener;
 
@Theme("mytheme")
@Widgetset("org.bluemix.challenge.MyAppWidgetset")
@SuppressWarnings("serial")
public class MyUI extends UI {
	private static final int OPTION_COUNT = 6;
 
	public String selectedOptions = "";
 
	@Override
	protected void init(VaadinRequest vaadinRequest) {
 
		final VerticalLayout layout = new VerticalLayout();
		layout.setMargin(true);
		setContent(layout);
 
		Button button = new Button("Click Me");
		button.addClickListener(new Button.ClickListener() {
 
			@Override
			public void buttonClick(ClickEvent event) {
 
				final Window dialog = new Window("Select one or more ...");
				dialog.setWidth(400.0f, Unit.PIXELS);
				dialog.setModal(true);
				dialog.setClosable(false);
				dialog.setResizable(false);
 
				// register esc key as close shortcut
				dialog.setCloseShortcut(ShortcutAction.KeyCode.ESCAPE);
 
				TwinColSelect twinColl = new TwinColSelect();
 
				for (int i = 0; i < OPTION_COUNT; i++) {
					twinColl.addItem(i);
					twinColl.setItemCaption(i, "Option " + i);
				}
 
				twinColl.setRows(OPTION_COUNT);
				twinColl.setNullSelectionAllowed(true);
				twinColl.setMultiSelect(true);
				twinColl.setImmediate(true);
				twinColl.setLeftColumnCaption("Available options");
				twinColl.setRightColumnCaption("Selected options");
				twinColl.setWidth(95.0f, Unit.PERCENTAGE);
 
				twinColl.addValueChangeListener(new ValueChangeListener() {
 
					@Override
					public void valueChange(ValueChangeEvent event) {
						selectedOptions = String.valueOf(event.getProperty().getValue());
 
					}
				});
 
				final FormLayout dialogContent = new FormLayout();
 
				dialogContent.addComponents(twinColl);
				dialogContent.addComponent(new Button("Done", new Button.ClickListener() {
 
					public void buttonClick(ClickEvent event) {
						UI.getCurrent().removeWindow(dialog);
					}
				}));
 
				dialog.addCloseListener(new CloseListener() {
					@Override
					public void windowClose(CloseEvent e) {
						System.out.println(selectedOptions);
					}
				});
 
				dialog.setContent(dialogContent);
				UI.getCurrent().addWindow(dialog);
			}
		});
		layout.addComponent(button);
	}
 
	@WebServlet(urlPatterns = "/*", name = "MyUIServlet", asyncSupported = true)
	@VaadinServletConfiguration(ui = MyUI.class, productionMode = false)
	public static class MyUIServlet extends VaadinServlet {
		private static final long serialVersionUID = 452468769467758600L;
	}
}

When the dialog is closed, it prints the selected options to the console.

The dialog can also be closed by hitting the ESC key. In this case, too, the code returns the selected options (if any):

dialog.setCloseShortcut(ShortcutAction.KeyCode.ESCAPE);

This is just a basic sample; you can create a custom component and also pass the available options in using a bean or whatever suits your needs best.

 

Latest Windows 10 update completely wrecked my dev environment

January 5, 2016 – 9:00 am

I am doing development with Visual Studio and other tools and programs on a Windows 10 VM. Today, I decided to check for updates and install them, because I had not done any updates for the past 2 months.

In general, these updates do not do any harm to the installed system, but today was different. The update took very long, and it looked just like the upgrade from Windows 8.1 to Windows 10: a couple of restarts, hints that all my data is still there where I put it, and other nice hints that nothing special will happen.

Half an hour later, the system showed the logon screen. After login, the desktop was empty, and an error message appeared, telling me that the system could no longer access my shared MacBook drive. Network settings were completely overwritten, all system environment variables were gone, and Visual Studio no longer had any clue where to find my libraries …

So I had to go back to the last working Windows version. To my surprise, this worked. And it was quick: it took only 5 minutes to restore the prior version.


All my tools and drives are back.

Telekom, O-Two, and Me

December 2, 2015 – 6:16 pm

Whatever little war the various telephone providers in Germany are fighting among themselves, the customer is the one who suffers.

On Friday, the Telekom technician finally showed up, a good 3 weeks after we ordered an O2 DSL line.
If I had known how this would end, I would not have offered him any coffee.
After a good 20 minutes, he trotted off again without having achieved anything.

Nope, there is nothing he can do; O2 first has to "perform work on the drop line".

Translated, that means: "If I connect 2 wires in the basement to strip 1, terminals 10a:10b, then O2 has to make sure that the patching in the sub-distribution board on the first floor is done in such a way that the router in the office gets a signal."

Strip 1, 10a:10b is just an example that I chose myself in my boundless naivety.

The Telekom technician did not know where EXACTLY to connect the signal-carrying wires.

A call to O2 support with the question "Now what??" Answer: "Call the construction crew; a new drop line has to be laid. According to the Telekom technician, it is defective."

My objection that somebody here must not be quite right in the head was heard, but neither confirmed nor denied.

Today, Mister O2 himself was on site and performed the necessary "work on the drop line".
I learned from the debacle with the Telekom technician and did not offer any coffee.
The end of the story: still no DSL, because "the line has not been activated yet".

Meaning: O2 terminated the drop line on one side of strip 1, 10a:10b, but the Telekom wire is still dangling in the room, happily carrying a signal.

O2 told me, unprompted, that the Telekom technicians like to do it this way, because we have an O2 contract, not a Telekom one.

Another call to support, asking for an update on the not-quite-right-in-the-head question, only revealed that my request would be forwarded to Telekom and that they might then deign to give me a new appointment.

On that question, there were no concrete answers, even when I asked again. But I have already formed my own opinion.

The Telekom technicians' next gig is on Monday, 07.12.2015.

 

[Vaadin] – widgetsets 'com.vaadin.defaultwidgetset' does not contain implementation for com.vaadin.addon.charts

November 15, 2015 – 8:12 am

While working on the IBM Vaadin Challenge, I ran into an issue after adding the charts component to my new project.


I added the charts component by adding the following lines to my ivy.xml file

 <dependency org="com.vaadin.addon"
                name="vaadin-charts"
                rev="2.1.3"></dependency>

and recompiled the widgetset.

Nevertheless, the error message appeared.
If you (like me) are new to Vaadin, you might spend some time solving this puzzle, so I thought I'd write a short description of how to fix it.

Go to your src folder and locate the compiled widgetset.


Next, open the ..UI.java file. It contains a line similar to this:

@VaadinServletConfiguration(productionMode = false, ui = ChartUI.class)

Modify the line so it points to your widgetset (do not include the '.gwt.xml' part):

@VaadinServletConfiguration(productionMode = false, ui = ChartUI.class,
 widgetset="com.example.challenge.chart.widgetset.Challenge_chartWidgetset")

When you now run the application, it will display your chart.

Build Windows executables on Linux

October 23, 2015 – 8:31 am

If you have to build a binary (.exe, .dll, …) from source code for LINUX and WINDOWS, you need at least one build environment per operating system.
In today's world, this is not a big deal, because we can have as many virtual machines as we want thanks to VMware (or VirtualBox).
But this can become a complex environment and might also increase license costs, to name just a few problems.

Many of us use Linux as our primary operating system. So the question is: “Can we use Linux to compile and link Windows executables?”

The concept of targeting a different platform than the compiler is running on is not new, and is known as cross-compilation.

Cross-compiling Windows binaries on Linux may have many benefits to it.

  • Reduced operating system complexity.
    On cross-platform projects that are also built on Linux, we can get one less operating system to maintain.
  • Access to Unix build tools.
    Build tools such as make, autoconf, and automake, and Unix utilities such as grep, sed, and cat, to mention a few, become available for use in Windows builds as well. Even though projects such as MSYS port a few of these utilities to Windows, the performance is generally lower, and the versions are older and less supported than the native Unix counterparts. Also, if you already have a build environment set up under Linux, you don't have to set it up again on Windows, but can just use the existing one.
  • Lower license costs.
    As we know, Windows costs in terms of license fees. Building on Linux, developers do not need to have a Windows installation on their machines, but maybe just a central Windows installation for testing purposes.

On a Linux build environment, a gcc that compiles native binaries is usually installed in “/usr/bin”.
Native headers and libraries are in turn found in “/usr/include” and “/usr/lib”, respectively.
We can see that all these directories are rooted in “/usr”.

Any number of cross-compiler environments can be installed on the same system, as long as they are rooted in different directories.

To compile and link a Windows executable on Linux, do the following:

(1) Go to the MinGW-w64 download page.

For 64Bit, open "Toolchains targetting Win64", followed by "Automated Builds", and download a recent version to /tmp
For 32Bit, open "Toolchains targetting Win32", followed by "Automated Builds", and download a recent version to /tmp


(2) Create two directories: mkdir /opt/mingw32 and mkdir /opt/mingw64

(3) Unpack the .tar.bz2 files into the corresponding directories

For 64Bit tar xvf mingw-w64-bin_x86_64-linux_20131228.tar.bz2 -C /opt/mingw64
For 32Bit tar xvf mingw-w32-bin_x86_64-linux_20131227.tar.bz2 -C /opt/mingw32

(4) Create a new hello.c file in /tmp and paste the following code into it

#include <stdio.h>
 
int main()
{
	printf("Hello World!\n");
	return 0;
}

(5) Next, you can build the Windows binaries using the following commands

For 64Bit /opt/mingw64/bin/x86_64-w64-mingw32-gcc /tmp/hello.c -o /tmp/hello-w64.exe
For 32Bit /opt/mingw32/bin/i686-w64-mingw32-gcc /tmp/hello.c -o /tmp/hello-w32.exe
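
You can verify on Linux that the result really is a Windows binary. A quick check, assuming the file utility (and, optionally, Wine) is installed:

file /tmp/hello-w64.exe
# expected output, roughly: PE32+ executable (console) x86-64, for MS Windows

# with Wine installed, the binary can even be run directly on Linux
wine /tmp/hello-w64.exe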

You now have two Windows binaries that can be copied over to a Windows environment.


Well, this is just a simple example, and for more complex projects you will probably have to do a little more work, but it should give you an idea of how cross-compiling can be implemented.

[C++] – A plain simple sample to write to and read from shared memory

September 15, 2015 – 7:52 am

If you have two programs (or two threads) running on the same computer, you might need a mechanism to share information between both programs or transfer values from one program to the other.

One of the possible solutions is “shared memory”. Most of us know shared memory only from server crashes and the like.

Here is a simple sample, written in C++, to show how you can use a shared memory object. The sample uses the Boost libraries; Boost provides a very easy way of managing shared memory objects independently of the underlying operating system.

#include <boost/interprocess/managed_shared_memory.hpp>
#include <iostream>
#include <string>
 
using namespace boost::interprocess;
 
int main()
{
	// delete SHM if exists
	shared_memory_object::remove("my_shm");
	// create a new SHM object and allocate space
	managed_shared_memory managed_shm(open_or_create, "my_shm", 1024);
 
	// write into SHM
	// Type: int, Name: my_int, Value: 99
	int *i = managed_shm.construct<int>("my_int")(99);
	std::cout << "Write  into shared memory: "<< *i << '\n';
 
	// write into SHM
	// Type: std::string, Name: my_string, Value: "Hello World"
	std::string *sz = managed_shm.construct<std::string>("my_string")("Hello World");
	std::cout << "Write  into shared memory: "<< *sz << '\n' << '\n';
 
	// read INT from SHM
	std::pair<int*, std::size_t> pInt = managed_shm.find<int>("my_int");
 
	if (pInt.first) {
		std::cout << "Read  from shared memory: "<< *pInt.first << '\n';
	}
	else {
		std::cout << "my_int not found" << '\n';
	}
 
	// read STRING from SHM
	std::pair<std::string*, std::size_t> pString = managed_shm.find<std::string>("my_string");
 
	if (pString.first) {
		std::cout << "Read  from shared memory: "<< *pString.first << '\n';
	}
	else {
		std::cout << "my_string not found" << '\n';
	}
}