Set up MySQL on Docker – Order Matters

I wanted to create a new container for MySQL on my Docker host today. In general, not a big deal: pull the image from the repository and execute a docker run command with some parameters and options to create the container.
I followed the instructions on https://hub.docker.com/_/mysql but ran into an issue where the container started but exited after a couple of seconds.

I had created two volumes to be mapped into the container to keep the configuration and the data persistent.

docker volume create mysql-data
docker volume create mysql-conf

This is not strictly necessary, because Docker creates a volume when it creates the container. But that volume gets a cryptic string as its name, and I wanted it in a human-readable form.
I used the following command to create the container. I ran it without the -d option so I could see the output in the shell.

docker run --name=mysqlsrv -e MYSQL_ROOT_PASSWORD=SecretPassw0rd mysql:latest -p 3306:3306 -p 33060:33060 -v mysql-data:/var/lib/mysql -v mysql-conf:/etc/mysql

The container started, but then stopped.

2020-11-21 09:11:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.22-1debian10 started.
2020-11-21 09:11:50+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-11-21 09:11:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.22-1debian10 started.
2020-11-21 09:11:50+00:00 [Note] [Entrypoint]: Initializing database files
2020-11-21T09:11:50.346284Z 0 [ERROR] [MY-010083] [Server] --verbose is for use with --help; did you mean --log-error-verbosity?
2020-11-21T09:11:50.346355Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.22) initializing of server in progress as process 42
2020-11-21T09:11:50.351718Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-11-21T09:11:55.615069Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-11-21T09:12:02.486299Z 0 [ERROR] [MY-010147] [Server] Too many arguments (first extra is 'my-sql-data:/var/lib/mysql').
2020-11-21T09:12:02.486810Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2020-11-21T09:12:02.487256Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-11-21T09:12:07.897884Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.22) MySQL Community Server - GPL.

After some trial and error it turned out that the order of the arguments matters: you have to put the -p and -v options before the image name (mysql:tag). Everything after the image name is passed as a command to the container's entrypoint, which is why mysqld complained about "Too many arguments".

docker run -d -p 3306:3306 -p 33060:33060 -v mysql-data:/var/lib/mysql -v mysql-conf:/etc/mysql --name=mysqlsrv -e MYSQL_ROOT_PASSWORD=SecretPassw0rd mysql:latest

After that, MySQL starts and maps the volumes under /var/lib/docker/volumes to the paths inside the container.


Create random files with random content with Java

I was playing with DAOS in Domino 12 recently and needed a way to create thousands of test files with a given file size and random content.

I did not want to use existing files with real data for my tests. There are several programs available for Linux and Windows (Google for it), but as a developer, I should be able to create my own tool.

Here is some sample Java code that uses java.util.Random to create file names and content in an easy way.

package de.eknori;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

public class DummyFileCreator {
	static int filesize = 129 * 1024;
	static int count = 9500;

	static File dir = new File("c:\\temp\\dummy\\");
	static String ext = ".txt";

	public static void main(String[] args) {
		byte[] bytes = new byte[filesize];
		Random rand = new Random();
		dir.mkdirs(); // make sure the target directory exists

		for (int i = 0; i < count; i++) {
			// timestamp plus a random number keeps the file names unique enough
			String name = String.format("%s%s%s", System.currentTimeMillis(), rand.nextInt(100000), ext);
			File file = new File(dir, name);

			rand.nextBytes(bytes);

			// try-with-resources flushes and closes the streams automatically
			try (BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file))) {
				bos.write(bytes);
			} catch (FileNotFoundException fnfe) {
				System.out.println("File not found: " + fnfe);
			} catch (IOException ioe) {
				System.out.println("Error while writing to file: " + ioe);
			}
		}
	}

}

The code is self-explanatory. Adjust the variables to your own needs, and you are ready to go.


Domino DAOS T2 S3 Credentials

Starting in Domino 11, the Domino Attachment Object Service (DAOS) tier 2 storage feature enables you to use an S3-compatible storage service to store older attachment objects that haven’t been accessed within a specified number of days.

This feature allows you to reduce the amount of data stored on Domino® servers that use DAOS. It can also improve the performance of any incremental file backups that you do for DAOS.

Before you enable DAOS tier 2 storage, you must configure Domino® credential store to store the credentials that are used for connections to the storage service.

This document describes how to configure a new credential store. Section 5 describes how to add the storage service credentials to the Domino credential store.

The document says:

Create a text file, for example, dominocred.txt, that contains the service credentials

That means you have to create a text file in the Domino server's file system.
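For reference, the service credentials file uses the AWS-style credentials format; a sketch with placeholder values (the service tag and the keys are your own):

```ini
[myS3serviceTag]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
```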

I find this approach not very practical. In many cases, Domino administrators do not necessarily have access to the file system. This means that sometimes cumbersome requests have to be made so that authorized persons can copy the necessary file to the server.

So another solution had to be found, and I came up with the following small workaround.

In credstore.nsf, I made a copy of the S3 Credential form and opened the existing items for editing. The form serves as a request document.


In the QueryClose event of the form I have a little LotusScript that calls an agent. The request document's NoteID is passed to the agent. The agent itself is written in Java.

import java.io.BufferedWriter;
import java.io.FileWriter;

import lotus.domino.AgentBase;
import lotus.domino.AgentContext;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.Session;

public class JavaAgent extends AgentBase {

	public void NotesMain() {

		try {
			Session session = getSession();
			Database db = session.getCurrentDatabase();
			Document param = null;
			AgentContext agentContext = session.getAgentContext();
			param = db.getDocumentByID(agentContext.getCurrentAgent().getParameterDocID());

			if (null != param) {
				String dataDir = session.getEnvironmentString("Directory",true) + "/";
				String fileName = dataDir + param.getUniversalID() + ".txt";
				BufferedWriter writer = new BufferedWriter(new FileWriter(fileName, true));
				writer.append("[" + param.getItemValueString("$ServiceTag") + "]\n");
				writer.append("aws_access_key_id = " + param.getItemValueString("AWSAccessKeyId") + "\n");
				writer.append("aws_secret_access_key = " + param.getItemValueString("Fingerprint") + "\n");
				writer.close();
				sleep(2000);
				String cmd = "tell daosmgr S3 storecred " + fileName;
				session.sendConsoleCommand("", cmd);
				param.remove(true);
				param.recycle();
			}

			db.recycle();
			session.recycle();

		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}

The agent reads the items from the request document and creates a text file with the required format and content in the Domino datadir.
The agent then sends a console command to create the S3 credentials in the credstore.nsf.
The credentials are added to the credential store as a named credential.

When the command completes, the text file is deleted, as is the request document in credstore.nsf. No credentials are visible at the console or in log files.

A small workaround that makes the life of a Domino administrator easier.


Testing new Database methods in Domino V12 Early Access without Domino Designer V12

Domino V12 Early Access CodeDrop 3 comes with a couple of new Java/LotusScript transaction methods that have been added to the (Notes)Database class.
At the moment, there is no Domino Designer V12 available. So how can we test the new methods?

If you are familiar with Java, this is possible, because Java development does not necessarily require Domino Designer.

All we need is the Notes.jar file from the V12 Domino Docker container.

To access the Domino V12 program directory you can use the commands as described in https://help.hcltechsw.com/domino/earlyaccess/inst_dock_useful_commands.html

or you can create a volume and access the program directory in the same way as you do with the data directory.

docker run -it -p 80:80 -p 443:443 -p 1352:1352 --hostname=serv03.fritz.box --name domino12 --cap-add=SYS_PTRACE --stop-timeout=90 -v notesdata:/local/notesdata -v d12:/opt/hcl/domino/notes/latest/linux domino-docker:V1200_10082020prod

Now that you have access to the Domino V12 program directory, copy the Notes.jar file located in /opt/hcl/domino/notes/latest/linux/ndext to your development environment.

I have created a small JavaServerAddin based on the work of Dmytro Pastovenskyi https://dpastov.blogspot.com/2020/10/javaserveraddin-in-domino-introduction.html.

You can download the sources from here.

In the build path, change the location of Notes12.jar to the location of Notes.jar in your environment.

To build the project, change to the bin directory inside the project and issue the command

jar cfe D12Test.jar de.eknori.Domino12Test de

Copy the resulting .jar file to the ndext folder in the Domino V12 program directory. Make sure to set the correct execution rights (755). Now you can start the addin with

lo runjava de.eknori.Domino12Test

After the addin has started you will see the following on the Domino console

11/01/2020 05:45:15 Domino12Test: version 2
11/01/2020 05:45:15 Domino12Test: build date 2020-10-22 11:00 CET
11/01/2020 05:45:15 Domino12Test: java 1.8
11/01/2020 05:45:15 Domino12Test: seconds elapsed 30

During the next scheduled addin run, the code will create a new document in the d12test.nsf. You need to create this database on your server before running the addin. The database does not need to contain any design elements.

After the addin has run, you should see new documents in the database, created with the form “commit”.

You should NOT see any documents that have a “rollback” form.

In the same way, you can also test the new DQL enhancements in V12 Early Access Code Drop 3.


Update HTTPPassword item in names.nsf (backend)

Yesterday, we ran into an issue with the HTTP Password in the person record in names.nsf.

The problem occurred after we upgraded the customer's Domino server from V9.0.1 to V11.0.1FP1.

The customer has some backend processes installed that let them delegate the registration, update, and deletion of users and groups to different departments. One part of the process is a piece of code that sets the HTTP password in the person record.

The issue was that the password was stored in clear text after upgrading the server. I looked into the design and could spot the root of the issue.

In pubnames_9.ntf, the HTTPPassword item has an input translation formula that encodes the password.

In V11 of the pubnames.ntf, the HTTPPassword item is missing and so is the input translation formula. The password encoding has been moved to the “Enter Password” button.

As a consequence, if you set the password in a backend agent, the string is not encoded and is visible in clear text to others.

The fix is simple. We changed our agent code from

...
doc.HTTPPassword = pwdDoc.getItemValue("pwd").text
...

to

...
Dim result As Variant
result = Evaluate(|@Password("|+ pwddoc.getItemValue("pwd").text + |")|)
doc.HTTPPassword = result(0)
...
call doc.save(true, false)

to encode the password. This tip might be useful if you have similar processes implemented.

I have not looked into the design of Domino 10. But chances are that HCL changed the design in this release as well.


Domino 12 Early Access Program – Time-based one-time password (TOTP) authentication

The October code drop of Domino 12 ( Early Access Program) introduces TOTP as a new security feature.

A time-based one-time password (TOTP) is a temporary passcode generated by an algorithm that uses the current time of day as one of its authentication factors. Time-based one-time passwords are commonly used for two-factor authentication. In two-factor authentication scenarios, a user must enter a traditional, static password as well as a time-based one-time password to gain access to the computing system.
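The algorithm itself is compact: the shared secret and the current 30-second time step are fed into HMAC-SHA1, and a few digits are extracted from the result (RFC 6238 / RFC 4226). Here is a minimal sketch using openssl — an illustration of the standard, assuming bash and openssl are available, not how Domino implements it internally:

```shell
#!/bin/bash
# Minimal TOTP sketch (RFC 6238 / RFC 4226). Assumes bash and openssl.
# hexkey: shared secret in hex, counter: floor(unixtime / 30), digits: code length
totp() {
  local hexkey=$1 counter=$2 digits=$3
  # encode the counter as 8 big-endian bytes and HMAC-SHA1 it under the key
  local msg hash
  msg=$(printf '%016X' "$counter" | sed 's/../\\x&/g')
  hash=$(printf "$msg" | openssl dgst -sha1 -mac HMAC -macopt hexkey:"$hexkey" | awk '{print $NF}')
  # dynamic truncation: the low 4 bits of the last byte select a 4-byte window
  local offset=$(( 16#${hash:39:1} ))
  local start=$(( 2 * offset + 1 ))
  local dbc=$(( 16#$(echo "$hash" | cut -c"$start-$((start + 7))") & 0x7FFFFFFF ))
  printf "%0${digits}d\n" $(( dbc % 10 ** digits ))
}

# RFC 6238 test vector: ASCII secret "12345678901234567890", Unix time 59 (counter 1)
totp 3132333435363738393031323334353637383930 1 6   # prints 287082
```

In a real deployment the code changes every 30 seconds, because the counter is derived from the current time, e.g. counter=$(( $(date +%s) / 30 )).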

To configure TOTP, please follow the instructions in the documentation

TOTP uses the IDVault. For now, it is important that the server running Domino 12 is the primary server for the IDVault. Development is still work in progress, and you will run into issues with TOTP if your Domino 12 server runs alongside Domino 11, as mine does.

The IDVault in Domino 12 comes with an updated design to show information about TOTP.

After you have configured your server for TOTP, you will see a new login dialog when you access an application on the Domino 12 server that needs authentication.

If you access the server for the first time and TOTP is not yet set up for your user, you need to set up a TOTP authentication device.

There are a couple of applications available. I am using TOTP Authenticator on an iPhone. I also tested with Authy.

You’re not participating in the program yet? Read more about the HCL Domino 12 Early Access Program here.


HCL Ambassador Nomination 2021 is open

HCL Ambassador nominations are open from 01-OCT to 31-OCT-2020.

HCL Ambassador is a distinction that HCL awards select members of the community that are both experts in their field and are passionate about sharing their HCL knowledge with others. 

HCL Ambassadors are exactly that, ambassadors. Importantly they are not employees, but their commitment to sharing their expertise has a huge impact on the HCL community. Whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events – they help make HCL’s mission of making technology play nice, possible.

HCL Ambassadors are eager to bring their technical expertise to new audiences both in person and online around the world.

More information about the program can be found here.

You can nominate yourself or someone else.

If you want to nominate ME, I would be happy. Permission is hereby explicitly granted.

Take a few minutes to fill out the nomination form.

Thanks in advance.


Gradle – Execution failed for task ‘clean’. Unable to delete file

I am using Gradle to build my Java projects. This works well on a Mac, but the build process fails on a Windows machine when the clean task is executed.

Task :clean FAILED
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':clean'.
Unable to delete file: C:\0.GIT\travelerrules-dots\de.midpoints.travelerrules.dots\build\unpuzzle_temp\maven-ant-tasks-2.1.4-SNAPSHOT

The jar file is built when I omit the clean task, but I always wanted the build process to do all the build steps on Windows and on the Mac. I never found a good solution, and upgrading Gradle did not solve the problem either.

Today, I found at least a workaround. Run the following command from the Windows command prompt.

TASKKILL /F /IM java.exe

Now you can use

gradle cleanEclipse eclipse clean build

without any issues.


HCL Domino DirSync & the ‘&’

Yesterday, Martin Vogel from sirius-net GmbH sent me an email and asked for help with HCL Domino DirSync.

He had an issue with the filter syntax and also encountered some strange error messages when trying to sync from an Active Directory to Domino Directory.

Let’s first see what the problem with the filter is. By default, DirSync syncs all person and group objects under the given SearchBase.

If you only want to get a subset of the possible results, you can use a filter in the DirSync Configuration document. Here is the filter.

(&(objectClass=person)(memberOf=CN=DistributionGroup,CN=Services &  Accounts,CN=Sync,DC=ad,DC=fritz,DC=box))

We checked the filter syntax by running a search from LDAPAdmin. The search returned the expected result.

But when we try to apply the filter to the DirSync configuration, the following error message occurs when trying to save the document.

Apparently there is a problem with the ‘&‘ in the ‘CN=Services & Accounts‘ part of the filter.

My first guess was that the ‘&’ character is not allowed in the filter string. But why is it possible to create an OU with this character in Active Directory?

And also, why does LDAPAdmin not complain about bad syntax?

The “Naming conventions in Active Directory for computers, domains, sites, and OUs” documentation does not mention any disallowed characters for OU names.

I took a closer look at the DirSyncUtil class in names.nsf. The error message is displayed during validation of the filter string. The code assumes that a “&” or “|” can only occur right after a “(“.

			Else
				state = State_inpar ' if we see an operator after this it's a problem.
			End If
			lastChar = ch
		Next
		If depth <> 0 Then
crunch:
			LDAPValidate = i
		End If
		Exit Function

If “&” or “|” occurs after any other character, it is treated as an error. So although the filter string is syntactically OK, it is not validated as error-free.

Next, we needed to find a way to work around this limitation. One option would be to disable validation. Not a good idea. Or, we could set the filter string using Ytria ScanEZ.

This works, but in a productive environment this is certainly not the correct procedure.

Google to the rescue, we found another option to apply the filter string and validate it on document save & close.

Simply replace the ‘&’ character with its escape sequence ‘\26’. Our filter string will then look like this.

(&(objectClass=person)(memberOf=CN=DistributionGroup,CN=Services \26 Accounts,CN=Sync,DC=ad,DC=fritz,DC=box))
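The \26 is simply the hex character code of ‘&’, so the same trick works for any other character that trips up the validator. A quick way to look up the code from a shell (the leading single quote in the printf argument makes it print the character's numeric value):

```shell
# print the LDAP escape sequence for '&': a backslash plus the hex character code
printf '\\%02x\n' "'&"
# prints \26
```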

Now you can save & close the document without the error message. We then enabled the DirSync configuration. When DirSync tried to sync, we saw an error message on the server console.

"<ct sq="00003128" ti="0039C81D-C12585DF" ex="ndirsync" pi="17F0" tr="0004-0FB8" co="7">[17F0:0004-0FB8] DirSync> CSyncFromAD::SyncSpan( - 84: Decoding error)@syncfromad.cpp:2866 - 13171:DirSync encounterred LDAP error ./ct>"

We cross-checked the configuration on another machine and with another Active Directory.

Here the sync ran without any issues and the expected person records were synced into the Domino Directory.

We repeated the test without any filter string applied, and even in this case an error occurred. But the error had changed.

[17F0:0005-0324] DirSync ResyncAll by CheckBox: 0
[17F0:0005-0324] DirSync Preview: 0
[17F0:0005-0324] DirSync Level: 16
[17F0:0005-0324] DirSync SyncFlows: 2
[17F0:0005-0324] DirSync OnPremCookie:
[17F0:0005-0324] DirSync UserDirCookie: 54247270
[17F0:0005-0324] DirSync ResyncAll by Request: 1
[17F0:0005-0324] DirSync Sync all request calling SyncFromLDAPToNAB.
[17F0:0005-0324] DirSync SyncFromLDAPToNAB The parms were:
[17F0:0005-0324] DirSync base:
[17F0:0005-0324] DirSync scope: subtree
[17F0:0005-0324] DirSync filter: (&(|(objectClass=Group)(objectClass=Person))(uSNChanged>=0))
[17F0:0005-0324] DirSync attributes: co
, company
, department
, description
, facsimileTelephoneNumber
, givenName
, mail
, homephone
, initials
, l, manager

, mobile
, pager
, physicaldeliveryofficename
, postalcode
, sn
, st
, streetaddress
, telephonenumber
, title
, uid
, wWWHomePage
, memberOf
, objectClass
, objectGUID
, groupType
, member
, uSNChanged
[17F0:0005-0324] DirSync page size: 5000
[17F0:0005-0324] SyncFromLDAPToNAB ldap err: 54

Error code 54 stands for LDAP_LOOP_DETECTED. The documentation says that the error …

Indicates that the client discovered an alias or referral loop, and is thus unable to complete this request.

I am not an Active Directory expert, so I have no clue why these errors occur on one machine but not on the other. I’ve asked Mike O’Brien from HCL Development.

Odd. The error 84 (0x54) is an LDAP_DECODING_ERROR. This means that there was an issue decoding a server result. The 54 which is (0x36) is a LDAP_LOOP_DETECT which indicates a service issue.
The fact that he is seeing this with the default filter leads me to believe it is something service related.

Mike O’Brien, HCL

Apparently, the errors are related to the Active Directory configuration. Does anyone have any thoughts on this?


Docker – Prevent Container Autostart

Docker will autostart any container with a RestartPolicy of ‘always’ when the docker service initially starts.

I’ve mostly had this situation occur when a container was created with --restart=always, and the situation later changed such that I no longer wanted this to happen.

You won’t find any evidence of this within cron or any other normal system startup scripts; you’ll have to dig into the container configuration to find it.

In order to quickly find the RestartPolicy config, you can use

docker inspect my-container | grep -A 3 RestartPolicy

The -A n grep option shows n lines after the match.
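A quick, Docker-independent illustration of what -A does:

```shell
# grep -A 2 prints each matching line plus the 2 lines After it
printf 'a\nmatch\nb\nc\nd\n' | grep -A 2 match
# prints:
# match
# b
# c
```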

To update the RestartPolicy config you can use

docker update --restart=no my-container

Here is a sample from one of my containers.

docker inspect c2ea02bc1349 | grep -A 3 RestartPolicy
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},

docker update --restart=no c2ea02bc1349

docker inspect c2ea02bc1349 | grep -A 3 RestartPolicy
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},

Crash In Domino V11.0.1 Amgr when executing LotusScript Code

I am seeing intermittent crashes of my Domino V11.0.1 server on Windows/2016 10.0 [64-bit] (Build 9200), PlatID=2, (2 Processors) when the agent manager runs a LotusScript agent. I have also seen this kind of crash on another server in a customer environment.

The agent ran on my server for about a week in a 5 minute schedule before the crash occurred, while the customer server already crashed after a couple of hours.

On the server console we saw:

[1BF4:0002-15E4] Thread=[1BF4:0002-15E4]
[1BF4:0002-15E4] Stack base=0xA9BCE790, Stack size = 20432 bytes
[1BF4:0002-15E4] PANIC: Object handle is invalid

The crash stack in the NSD shows the following

#
thread 1/17: [ nAMgr: 1bf4: 15e4] FATAL THREAD (Panic)
FP=0xB1A9BC7EB8, PC=0x7FFB68F05A84, SP=0xB1A9BC7EB8
stkbase=0xB1A9BD0000, total stksize=86016, used stksize=33096
EAX=0x00000004, EBX=0x00000000, ECX=0x00000b20, EDX=0x00000000
ESI=0x000927c0, EDI=0x00000b20, CS=0x00000033, SS=0x0000002b
DS=0x00000000, ES=0x00000000, FS=0x00000000, GS=0x00000000 Flags=0x1700000246
#
[ 1] 0x7FFB68F05A84 ntdll.ZwWaitForSingleObject+20 (10,0,0,B1A9BC7FD0)
[ 2] 0x7FFB65F04DAF KERNELBASE.WaitForSingleObjectEx+143 (10,B1A9BC8680,7FFB00000000,b20)
@[ 3] 0x7FFB55A8D430 nnotes.OSRunExternalScript+1808 (0,0,424,0)
@[ 4] 0x7FFB55A897FC nnotes.FRTerminateWindowsResources+1532 (0,23164920CC0,0,1)
@[ 5] 0x7FFB55A8B383 nnotes.OSFaultCleanupExt+1395 (0,4fd0,0,B1A9BC9940)
@[ 6] 0x7FFB55A8AE07 nnotes.OSFaultCleanup+23 (4fd0,B1A9BC8E30,0,7FFB567F6668)
@[ 7] 0x7FFB55AF7A76 nnotes.OSNTUnhandledExceptionFilter+390 (B1A9BC9820,7FFB570A2568,B1A9BC9940,FFFFE804495CCD3)
@[ 8] 0x7FFB55A8E06A nnotes.Panic+1066 (30,12585AE001FDDC1,0,2b4)
@[ 9] 0x7FFB55A8D943 nnotes.Halt+35 (23165F43FE8,9,0,0)
@[10] 0x7FFB566F1AD1 nnotes.HANDLEDereference+113 (B1A9BCC980,7FFB4E4DE1F2,23170103018,7FFB4E556D38)
@[11] 0x7FFB5674B956 nnotes.InitDbContextExt+310 (23170103018,0,23170103018,0)
@[12] 0x7FFB56745F64 nnotes.NSFDbUserGetbTrans+36 (67200004,7FFB585E3E80,23164F80018,0)
@[13] 0x7FFB56125640 nnotes.ClientSearchFill+80 (2000141c,7FFB200003C5,23164F80018,7FFB00000000)
@[14] 0x7FFB562E6899 nnotes.QueueFill2+73 (23170102618,2000141c,0,0)
@[15] 0x7FFB562E690B nnotes.QueueGet+27 (23170102618,7FFB585E3E80,23164F80018,23170102618)
@[16] 0x7FFB4E559586 nlsxbe.ANServer::ANSVNextDbFile+214 (0,B1A9BCD8C0,109,0)
@[17] 0x7FFB4E558E0E nlsxbe.ANServer::ANDispatchMethod+270 (B1A9BCD8C0,0,23170102718,7FFB5702A5F7)
@[18] 0x7FFB4E4EB50F nlsxbe.ANCLASSCONTROL+7887 (23164C17358,7FFB00000109,B1A9BCD840,B1A9BCD8C0)
@[19] 0x7FFB56F97F3F nnotes.LSsInstance::AdtCallBack+319 (2317022D6C8,2311399B9C0,1,23164C17358)
@[20] 0x7FFB56FCE9C2 nnotes.LScObjCli::ProdMethodCall+82 (2317022D6C8,0,23,38)
@[21] 0x7FFB56FC4C5B nnotes.LSsThread::AdtCallMethod+219 (7fff,2317022E558,B1A9BCD9A8,2311399B9C0)
@[22] 0x7FFB56FBF3D2 nnotes.LSsThread::NRun+9922 (23164BB1B08,B1A9BC000B,0,24702531)
@[23] 0x7FFB56FBFD51 nnotes.LSsThread::Run+449 (2311399B9C0,2316E307FA8,0,2)
@[24] 0x7FFB56F6AC88 nnotes.LSIThread::RunInternal+104 (12585AE001FDDC1,0,0,12585AE00213D3A)
@[25] 0x7FFB56F6AF42 nnotes.LSIThread::RunToCompletion+386 (2316E2F1E28,2316E2F1E28,B1A9BCDD10,12585AE00213D3A)
@[26] 0x7FFB56F65DEE nnotes.CLSIDocument::RunScript+878 (B1A9BCEC00,2316E2FF9E8,B1A9BCEC00,0)
@[27] 0x7FFB561F5958 nnotes.CRawActionLotusScript::Run+648 (2,B1A9BCE410,B1A9BCEC00,200017cf)
@[28] 0x7FFB561EE147 nnotes.CRawAction::Execute+391 (2316E300828,0,23100000000,0)
@[29] 0x7FFB561E9FDC nnotes.CAssistant::Run+4236 (12585AE00000000,B1A9BCEBC8,2316E2F1E28,23100000000)
@[30] 0x7FFB5D825334 namgrdll.RunTask+2900 (B1A9BCF808,7FFB000001D2,7FFB00000000,23100000000)
@[31] 0x7FFB5D8246D9 namgrdll.ProcessMessage+361 (0,1,140,23138BE6AC8)
@[32] 0x7FFB5D823D2B namgrdll.ExecutiveMain+315 (23164B8E120,1,3,1)
@[33] 0x7FFB5D826C3C namgrdll.AddInMain+412 (0,23164B8E108,0,0)
@[34] 0x7FF691AA1037 nAMgr.NotesMain+55 (0,0,7FF691AA0000,B1A9BCFC60)
@[35] 0x7FF691AA11D0 nAMgr.notes_main+336 (7FFB654859F8,0,0,3)
@[36] 0x7FF691AA1078 nAMgr.main+24 (0,0,0,0)
@[37] 0x7FF691AA14E0 nAMgr.__scrt_common_main_seh+268 (0,0,0,0)
[38] 0x7FFB68D884D4 KERNEL32.BaseThreadInitThunk+20 (0,0,0,0)
[39] 0x7FFB68ECE871 ntdll.RtlUserThreadStart+33 (0,0,0,0)

On the server, I have set DEBUG_LS_DUMP=1.

The call stack identifies the getNextDatabase method of the NotesDbDirectory class as the problematic part of the code.

<@@ ------ LotusScript Interpreter -> Call Stack for [ nAMgr: 1bf4: 15e4] (Time 07:49:05) ------ @@>
Source database is: 'nuke-server.nsf'
[2] GETNEXTDATABASE
[1] RUN_WITH_NOTESEXT @ line number 49
[0] INITIALIZE @ line number 3
** Detach from process [ nAMgr: 1bf4]

The agent uses the NotesDbDirectory class and iterates over all .nsf files on the Domino server.

Option Declare

Dim g_session As NotesSession

Sub Initialize
	Set g_session = New NotesSession()
End Sub


Public Sub run_with_notesext()
	On Error GoTo handle_err
	
	Dim dbdir As New NotesDbDirectory(g_session.currentDatabase().Server)
	Dim db As NotesDatabase
	
	Set db = dbdir.GetFirstDatabase(1247)
	
	While Not (db Is Nothing)
		' do stuff with db ...

		Set db = dbdir.GetNextDatabase
	Wend
	
End Sub

I have opened a case (#CS0141246) with HCL support.


DirSync – Identify deleted users

During my DNUG47 Online session on “Active Directory Synchronisation with HCL Domino v11 DirSync” I was asked if it is possible to identify users that have been synced to the Domino Directory and later on deleted in Active Directory.

Two scenarios must be distinguished here.

  • User has been synced from AD but not registered in Domino Directory
  • User has been synced and registered in Domino Directory using the “Register selected user” functionality.

In the first case, users that have been deleted from the AD are also deleted from the Domino Directory.

In the latter case, users are NOT deleted from the Domino Directory.

But how can we tell whether a user in the Domino Directory was added from the Active Directory?

Let us take a closer look into the document properties of such a user.

When a user has been synced by DirSync, you will see the “briefcase” icon left to the user name.

In addition, several items are added to the document: ‘objectGUID’, ‘$$DirsyncDigest’, ‘$$DirsyncDomain’ and ‘$$LdapDN’.

When the user has been registered, the “AvailableForDirSync” item is also added to the document.

When you remove the user from the Active Directory, you will see the following output on the Domino Console.

[0868:0005-2A30] DirSync
DirSync Removed 'objectGUID', '$$DirsyncDigest', '$$DirsyncDomain' and '$$LdapDN' for registered user 'CN=James Kirk/O=singultus' with Note ID '33810'.
[0868:0005-2A30] DirSync NOTE: This registered user is now DISCONNECTED from its AD counterpart and can be reconnected later by matching the e-mail address.
[0868:0005-2A30] DirSync resyncall - SyncFromLDAPToNAB completed in: 0.71 seconds

Keep in mind that you need to perform a full “Resync” to change the user's state in the Domino Directory.

When we now look into the document properties, we will see the following.

First of all, you see that the “briefcase” icon is no longer available.

As stated in the console output, ‘objectGUID’, ‘$$DirsyncDigest’, ‘$$DirsyncDomain’ and ‘$$LdapDN’ have been removed from the document, but the “AvailableForDirSync” item is still there.

So this can be used as an indicator to identify user records that have been synced from the Active Directory, registered as Notes user, and removed at some point from the Active Directory.
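To list such records, a view selection formula (hypothetical, built only from the items named above) could combine the two conditions: ‘AvailableForDirSync’ still present, ‘objectGUID’ removed.

```
SELECT Type = "Person" & @IsAvailable(AvailableForDirSync) & @IsUnavailable(objectGUID)
```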


Issue when trying to bind nginx on CentOS 7.4 to other port than 80

Problem:

I was fighting with a permission-related issue with nginx on CentOS 7.4. When I configure nginx to listen on port 80, everything works as expected, but when I use any other port (e.g. 82), it doesn't.

[root@CentOS7 nginx]# sudo systemctl start nginx
Mai 28 18:32:52 CentOS7 systemd[1]: Starting The nginx HTTP and reverse proxy server…
Mai 28 18:32:52 CentOS7 nginx[22626]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Mai 28 18:32:52 CentOS7 nginx[22626]: nginx: [emerg] bind() to 0.0.0.0:82 failed (13: Permission denied)
Mai 28 18:32:52 CentOS7 nginx[22626]: nginx: configuration file /etc/nginx/nginx.conf test failed
Mai 28 18:32:52 CentOS7 systemd[1]: nginx.service: control process exited, code=exited status=1
Mai 28 18:32:52 CentOS7 systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Mai 28 18:32:52 CentOS7 systemd[1]: Unit nginx.service entered failed state.
Mai 28 18:32:52 CentOS7 systemd[1]: nginx.service failed.

Solution:

This is most likely related to SELinux.

To check which ports http is allowed to bind to under SELinux, use the following command.

semanage port -l | grep http_port_t
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000

As you can see from the output above, with SELinux in enforcing mode http is only allowed to bind to the listed ports.
The solution is to add the desired port to the list.

semanage port -a -t http_port_t -p tcp 82

will add port 82 to the list.

Now you can start nginx without any issues.

[root@CentOS7 nginx]# sudo systemctl start nginx
[root@CentOS7 nginx]# sudo systemctl status nginx
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Do 2020-05-28 18:38:41 CEST; 6s ago
Process: 22862 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 22859 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 22857 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 22864 (nginx)
Tasks: 3
CGroup: /system.slice/nginx.service
├─22864 nginx: master process /usr/sbin/nginx
├─22865 nginx: worker process
└─22866 nginx: worker process
Mai 28 18:38:41 CentOS7 systemd[1]: Starting The nginx HTTP and reverse proxy server…
Mai 28 18:38:41 CentOS7 nginx[22859]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Mai 28 18:38:41 CentOS7 nginx[22859]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Mai 28 18:38:41 CentOS7 systemd[1]: Started The nginx HTTP and reverse proxy server.

The plain simple guide to installing Atlassian JIRA on CentOS 8

I recently installed Atlassian JIRA on a CentOS 8 minimal install and ran into an issue when trying to run JIRA as a service. The issue was reproducible on another CentOS 8 machine.

Since I could not find any other solution, I think it is a good idea to post my workaround. Here is what I did.

Download the version to be installed from the Atlassian download repository.

wget https://product-downloads.atlassian.com/software/jira/downloads/atlassian-jira-software-8.8.1-x64.bin -O atlassian-jira-software.bin

Change permissions and run the installer

chmod +x atlassian-jira-software.bin
./atlassian-jira-software.bin

Accept the default values. I only changed the HTTP port, because the JIRA default port was already in use by another program.

Do NOT install JIRA as a service here. If you choose YES, all configuration will be in place, but JIRA will not start automatically.

Configure your local firewall accordingly

firewall-cmd --permanent --add-port=8085/tcp
firewall-cmd --reload

JIRA requires a database, so the first step is to create one in your database engine of choice. I use PostgreSQL. The user postgres already exists on my machine because I use the same server for Atlassian Bitbucket.

su - postgres
psql
postgres=# CREATE USER jiradbuser PASSWORD 'jiradbpassword';
postgres=# CREATE DATABASE jiradb WITH ENCODING 'UNICODE' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
postgres=# GRANT ALL PRIVILEGES ON DATABASE jiradb TO jiradbuser;
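The same setup can be scripted non-interactively. The sketch below simply writes the statements from the psql session above into a file that can then be fed to psql as the postgres user (the path /tmp/jiradb.sql is an arbitrary example).

```shell
# Write the statements from the interactive session above into a script file
cat > /tmp/jiradb.sql <<'SQL'
CREATE USER jiradbuser PASSWORD 'jiradbpassword';
CREATE DATABASE jiradb WITH ENCODING 'UNICODE' LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
GRANT ALL PRIVILEGES ON DATABASE jiradb TO jiradbuser;
SQL

# Then run it as the postgres user (assumes local peer authentication):
# sudo -u postgres psql -f /tmp/jiradb.sql
```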

Now for the workaround. As user root create a new file using your preferred text editor.

nano /etc/systemd/system/jira.service

Copy and paste the following lines into jira.service. If you have changed the default installation path, make sure to modify the path accordingly.

[Unit]
Description=Jira Issue & Project Tracking Software
[Service]
Type=forking
User=jira
PIDFile=/opt/atlassian/jira/work/catalina.pid
ExecStart=/opt/atlassian/jira/bin/start-jira.sh
ExecStop=/opt/atlassian/jira/bin/stop-jira.sh
[Install]
WantedBy=multi-user.target

Save the file and reload the systemctl daemon. Then enable the new service and start JIRA.

systemctl daemon-reload
systemctl enable jira
systemctl start jira

Now you can open the JIRA web site in your browser and set up and configure your JIRA instance.


Problem importing certificates into keyring with LE4D after upgrade to Domino V11.0.1

Due to an issue with the JVM shipped with Domino V11.0.1, LE4D throws an error when it tries to import the new or renewed certificate into the Domino keyring file.
The agent calls the kyrtool and passes the required parameters to it.
On the Domino V11.0.1 console, you will see an error:

13.04.2020 06:48:52 Agent error: java.io.IOException: Cannot run program "cmd.exe": Malformed argument has embedded quote: "d:\domino\kyrtool.exe" create -k "d:\domino\data\eknori_staging.kyr"
13.04.2020 06:48:52 Agent error: at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
13.04.2020 06:48:52 Agent error: at java.lang.Runtime.exec(Runtime.java:621)
13.04.2020 06:48:52 Agent error: at java.lang.Runtime.exec(Runtime.java:486)
13.04.2020 06:48:52 Agent error: at de.midpoints.le4d.tools.CommandProcessor.executeCommand(CommandProcessor.java:11)
13.04.2020 06:48:52 Agent error: at de.midpoints.le4d.manager.Le4dManager.runKyrTool(Le4dManager.java:623)
13.04.2020 06:48:52 Agent error: at de.midpoints.le4d.manager.Le4dManager.run(Le4dManager.java:205)
13.04.2020 06:48:52 Agent error: at de.midpoints.MPStarter.NotesMain(MPStarter.java:16)
13.04.2020 06:48:52 Agent error: at lotus.domino.AgentBase.runNotes(Unknown Source)
13.04.2020 06:48:52 Agent error: at lotus.domino.NotesThread.run(Unknown Source)
13.04.2020 06:48:52 Agent error: Caused by:

The problem is not in the code itself: the same code runs on Domino V9.0.1FP10, V10.x and V11. It only stopped working after upgrading the server to V11.0.1.

I searched for the error on Google and found some references to it. The cause is the Java update mentioned here: https://www.oracle.com/technetwork/java/javase/13-0-1-relnotes-5592797.html#JDK-8221858

To fix the error in Domino V11.0.1, do the following:

  • If not already in place, create a new text file javaOptions.txt in the DominoDataDir
  • Add the following line to the javaOptions.txt file (if you already have a javaOptions file, append the new entry to the existing lines)
    -Djdk.lang.Process.allowAmbiguousCommands=true
  • Save javaOptions.txt
  • Add the following line to the server notes.ini
    JAVAOPTIONSFILE=DominoDataDir/javaoptions.txt
  • Restart the server
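On a Linux Domino server, the file changes above can be scripted like this. This is only a sketch: /tmp/notesdata stands in for your real DominoDataDir, and the append is guarded so that running the script twice does not duplicate the entry.

```shell
# Example data directory; replace with your real DominoDataDir
DATADIR=/tmp/notesdata
mkdir -p "$DATADIR"

OPT='-Djdk.lang.Process.allowAmbiguousCommands=true'

# Append the JVM option only if it is not already present in the file
grep -qxF "$OPT" "$DATADIR/javaOptions.txt" 2>/dev/null ||
    echo "$OPT" >> "$DATADIR/javaOptions.txt"

# The matching notes.ini entry would then be:
# JAVAOPTIONSFILE=/tmp/notesdata/javaOptions.txt
```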

When you now run the LE4D tool, everything should work!


HCL Nomad On Android Smartphones

I do not own an Android smartphone, but I saw a couple of forum entries from people with up-to-date Android devices complaining that HCL Nomad does not run on their devices.

I found an interesting post by Erik Schwalb in the German Notes Forum.

According to this post, the device must support the “64-bit ABI for native applications”. https://developer.android.com/ndk/guides/cpp-support

The term “ABI” stands for application binary interface; it is what allows apps on the Android platform that are written in C/C++ (native) code or linked against third-party native libraries to run. The core functions of Notes/Domino are written in C++, so a Notes-like client such as HCL Nomad needs the 64-bit ABI for native applications. We don’t have a reliable way to tell which Android devices have 64-bit ABI support and which do not.
At this point the best way is to try to download the app from the Play Store: if it is available, the device is supported; if not, it is not.

This requirement does not seem to be documented anywhere.
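For devices you can reach with adb, there is at least a developer-level way to inspect the supported ABIs: the system property ro.product.cpu.abilist lists them, and a 64-bit capable device reports arm64-v8a (or x86_64). The sketch below checks a sample value, which is assumed for illustration; on a real device you would query it with adb.

```shell
# Sample abilist value, assumed for illustration; on a real device use:
#   adb shell getprop ro.product.cpu.abilist
ABILIST="arm64-v8a,armeabi-v7a,armeabi"

case "$ABILIST" in
    *arm64-v8a*|*x86_64*) echo "64-bit native ABI supported" ;;
    *)                    echo "32-bit only" ;;
esac
```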


[How to] – install MSSQL Server on RHEL / CentOS 8

I already wrote about how to install MSSQL Server on RHEL 7. Today I tried to install MSSQL Server on CentOS 8 following my own instructions. The installation failed.

[root@scm opt]# sudo yum install -y mssql-server
 packages-microsoft-com-mssql-server-2017                                                                                           123 kB/s |  15 kB     00:00
Last metadata expiration check: 0:00:01 ago on Sun 26 Jan 2020 07:40:10 AM CET.
Error:
 Problem: cannot install the best candidate for the job
 nothing provides python needed by mssql-server-14.0.3257.3-13.x86_64
 nothing provides openssl < 1:1.1.0 needed by mssql-server-14.0.3257.3-13.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) 

SQL Server 2017 makes use of python2 and OpenSSL 1.0. CentOS 8 does not install Python by default, and OpenSSL 1.0 is also not available on a minimal install. You will therefore need to install the mssql-server package without resolving dependencies.

Here is what I did to get MSSQL Server installed on CentOS 8.

sudo yum -y install python2 compat-openssl10
sudo alternatives --set python /usr/bin/python2
sudo yum download mssql-server
sudo rpm -Uvh --nodeps mssql-server*rpm 

These commands install the needed dependencies and the server package. You should see output similar to the following:

[root@scm opt]# sudo yum -y install python2 compat-openssl10
 Last metadata expiration check: 0:09:33 ago on Sun 26 Jan 2020 07:41:11 AM CET.
 Dependencies resolved.
  Package                                     Architecture              Version                                                  Repository                    Size
 Installing:
  compat-openssl10                            x86_64                    1:1.0.2o-3.el8                                           AppStream                    1.1 M
  python2                                     x86_64                    2.7.16-12.module_el8.1.0+219+cf9e6ac9                    AppStream                    109 k
 Installing dependencies:
  python2-libs                                x86_64                    2.7.16-12.module_el8.1.0+219+cf9e6ac9                    AppStream                    6.0 M
  python2-pip-wheel                           noarch                    9.0.3-14.module_el8.1.0+219+cf9e6ac9                     AppStream                    1.2 M
  python2-setuptools-wheel                    noarch                    39.0.1-11.module_el8.1.0+219+cf9e6ac9                    AppStream                    289 k
 Installing weak dependencies:
  python2-pip                                 noarch                    9.0.3-14.module_el8.1.0+219+cf9e6ac9                     AppStream                    2.0 M
  python2-setuptools                          noarch                    39.0.1-11.module_el8.1.0+219+cf9e6ac9                    AppStream                    643 k
 Enabling module streams:
  python27                                                              2.7
 Transaction Summary
 Install  7 Packages
 Total download size: 11 M
 Installed size: 42 M
 Downloading Packages:
 (1/7): python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64.rpm                                                                    1.4 MB/s | 109 kB     00:00
 (2/7): compat-openssl10-1.0.2o-3.el8.x86_64.rpm                                                                                    5.3 MB/s | 1.1 MB     00:00
 (3/7): python2-pip-wheel-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch.rpm                                                           3.6 MB/s | 1.2 MB     00:00
 (4/7): python2-setuptools-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch.rpm                                                         3.3 MB/s | 643 kB     00:00
 (5/7): python2-setuptools-wheel-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch.rpm                                                   2.4 MB/s | 289 kB     00:00
 (6/7): python2-pip-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch.rpm                                                                 2.3 MB/s | 2.0 MB     00:00
 (7/7): python2-libs-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64.rpm                                                               3.9 MB/s | 6.0 MB     00:01
 Total                                                                                                                              6.2 MB/s |  11 MB     00:01
 Running transaction check
 Transaction check succeeded.
 Running transaction test
 Transaction test succeeded.
 Running transaction
   Preparing        :                                                                                                                                           1/1
   Installing       : python2-setuptools-wheel-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch                                                                     1/7
   Installing       : python2-pip-wheel-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch                                                                             2/7
   Installing       : python2-libs-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                                                                                 3/7
   Installing       : python2-pip-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch                                                                                   4/7
   Installing       : python2-setuptools-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch                                                                           5/7
   Installing       : python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                                                                                      6/7
   Running scriptlet: python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                                                                                      6/7
   Installing       : compat-openssl10-1:1.0.2o-3.el8.x86_64                                                                                                    7/7
   Running scriptlet: compat-openssl10-1:1.0.2o-3.el8.x86_64                                                                                                    7/7
   Verifying        : compat-openssl10-1:1.0.2o-3.el8.x86_64                                                                                                    1/7
   Verifying        : python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                                                                                      2/7
   Verifying        : python2-libs-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                                                                                 3/7
   Verifying        : python2-pip-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch                                                                                   4/7
   Verifying        : python2-pip-wheel-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch                                                                             5/7
   Verifying        : python2-setuptools-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch                                                                           6/7
   Verifying        : python2-setuptools-wheel-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch                                                                     7/7
 Installed:
   compat-openssl10-1:1.0.2o-3.el8.x86_64                                              python2-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64
   python2-pip-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch                             python2-setuptools-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch
   python2-libs-2.7.16-12.module_el8.1.0+219+cf9e6ac9.x86_64                           python2-pip-wheel-9.0.3-14.module_el8.1.0+219+cf9e6ac9.noarch
   python2-setuptools-wheel-39.0.1-11.module_el8.1.0+219+cf9e6ac9.noarch
 Complete!

[root@scm opt]# sudo alternatives --set python /usr/bin/python2

[root@scm opt]# sudo yum download mssql-server
 Last metadata expiration check: 0:10:40 ago on Sun 26 Jan 2020 07:41:11 AM CET.
 mssql-server-14.0.3257.3-13.x86_64.rpm                                                                                             3.9 MB/s | 183 MB     00:46

[root@scm opt]# sudo rpm -Uvh --nodeps mssql-server*rpm
 warning: mssql-server-14.0.3257.3-13.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID be1229cf: NOKEY
 Verifying…                          ################################# [100%]
 Preparing…                          ################################# [100%]
 Updating / installing…
    1:mssql-server-14.0.3257.3-13      ################################# [100%]
 +--------------------------------------------------------------+
 Please run 'sudo /opt/mssql/bin/mssql-conf setup'
 to complete the setup of Microsoft SQL Server
 +--------------------------------------------------------------+
 SQL Server needs to be restarted in order to apply this setting. Please run
 'systemctl restart mssql-server.service'.

[root@scm opt]# sudo /opt/mssql/bin/mssql-conf setup
 Choose an edition of SQL Server:
   1) Evaluation (free, no production use rights, 180-day limit)
   2) Developer (free, no production use rights)
   3) Express (free)
   4) Web (PAID)
   5) Standard (PAID)
   6) Enterprise (PAID)
   7) Enterprise Core (PAID)
   8) I bought a license through a retail sales channel and have a product key to enter.
 Details about editions can be found at
 https://go.microsoft.com/fwlink/?LinkId=852748&clcid=0x409
 Use of PAID editions of this software requires separate licensing through a
 Microsoft Volume Licensing program.
 By choosing a PAID edition, you are verifying that you have the appropriate
 number of licenses in place to install and run this software.
 Enter your edition(1-8): 3
 The license terms for this product can be found in
 /usr/share/doc/mssql-server or downloaded from:
 https://go.microsoft.com/fwlink/?LinkId=855862&clcid=0x409
 The privacy statement can be viewed at:
 https://go.microsoft.com/fwlink/?LinkId=853010&clcid=0x409
 Do you accept the license terms? [Yes/No]:yes
 Enter the SQL Server system administrator password:
 Confirm the SQL Server system administrator password:
 Configuring SQL Server…
 The licensing PID was successfully processed. The new edition is [Express Edition].
 ForceFlush is enabled for this instance.
 ForceFlush feature is enabled for log durability.
 Created symlink /etc/systemd/system/multi-user.target.wants/mssql-server.service → /usr/lib/systemd/system/mssql-server.service.
 Setup has completed successfully. SQL Server is now starting.
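If mssql-conf setup ever complains, it may help to verify that the two workaround prerequisites are still in place. A minimal sketch, using the package and alternative names from the steps above:

```shell
# Check that the python alternative points at a Python 2 interpreter
PY_STATUS="not set"
if command -v python >/dev/null 2>&1; then
    if python --version 2>&1 | grep -q '^Python 2'; then
        PY_STATUS="ok"
    fi
fi

# Check that the OpenSSL 1.0 compatibility package is installed
OSSL_STATUS="missing"
if rpm -q compat-openssl10 >/dev/null 2>&1; then
    OSSL_STATUS="installed"
fi

echo "python -> python2 : $PY_STATUS"
echo "compat-openssl10  : $OSSL_STATUS"
```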

DAOS – JAR files in Java agents. (UPDATE)

Yesterday I wrote about a supposed problem with DAOS.

Daniel Nashed contacted me and explained that it is not a bug; DAOS works as designed.

DAOS makes no difference between data and design documents. Only the presence of an object of type attachment determines whether the object is transferred to the DAOS repository when the configured threshold is exceeded.

Imported archive files in agents are therefore not the only objects that are transferred to the DAOS repository if the requirements are met.

The same behavior applies to Java script libraries, forms, pages, and About & Using documents. Only JAR design elements are not affected, because their data is stored differently.

The application continues to work without problems, but issues may arise if the NLO files are moved to DAOS tier 2 (T2) storage. In any case, the behavior of DAOS in connection with design elements should be kept in mind.