IBM Connections application development state of the union – part 2

Part 1 was about APIs and SPIs – this part will be about widgets, or apps as IBM likes to call them now. There are big differences between how widgets / apps work on-premises, in the cloud and on mobile. Let us start with mobile as it's the quickest one to address but also the most depressing…

Besides adding menu items to the IBM Connections mobile app menu (the one that slides in from the left) and having the content load in an embedded browser control, there is no support for widgets / apps on mobile. None. Zero. Given that IBM Connections has always been marketed as social software that focuses on the individual, where all content and data is tied to the user, it has always surprised me how little focus IBM has put on mobile from the ISV perspective. From my perspective as an ISV it would be obvious to want to pivot off a profile, a file, a community etc. and launch into a custom app supplying that context.

I have always been a big advocate of adding widgets / apps / actions to IBM Connections Mobile. And yes, I know that adding custom content to an iOS or Android app is hard and there are security implications, but there are ways around it. Simply supporting declarative actions using URL token replacement would go a long way (on-premises they could be loaded from mobile-config.xml). Allow me to add an action specifying the feature it should go into (Profiles, Communities, Files etc.) and allow me to add a URL pattern to it. The URL pattern should support token replacement so my URL could be something like https://myapp.example.com/people/%uid% or https://myapp.example.com/people/%email% or https://myapp.example.com/files/%fileid%. Once activated, the app could grab the touch action, replace the tokens in the URL based on the current record and load the result in the browser control it uses for those aforementioned left menu shortcuts.
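To illustrate how little client-side machinery this would take, here is a minimal sketch of the token replacement in Java (class and token names are mine, nothing official – and a real implementation should of course URL-encode the values):

import java.util.HashMap;
import java.util.Map;

public class ActionUrl {
  // replace %token% placeholders in a declared URL pattern with
  // values from the current record (token names are hypothetical)
  public static String build(String pattern, Map<String, String> record) {
    String url = pattern;
    for (Map.Entry<String, String> e : record.entrySet()) {
      url = url.replace("%" + e.getKey() + "%", e.getValue());
    }
    return url;
  }

  public static void main(String[] args) {
    Map<String, String> record = new HashMap<>();
    record.put("uid", "jdoe");
    // prints https://myapp.example.com/people/jdoe
    System.out.println(build("https://myapp.example.com/people/%uid%", record));
  }
}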

Obviously I'm always way too optimistic but how hard could that be? And I think it would add a ton of options for customers and ISVs for pivoting off content in IBM Connections and going into custom apps.

Thinking further about it – what if a native app for the data was present on the device and a registered URL scheme was used? The device would probably launch straight into that app, carrying the context along. Maybe even sending along some credentials to make the app switch transparent to the user – how cool would that be? But even if that wasn't the case (I know iOS added restrictions on what and how many URL schemes an app can query) and I ended up in a web app, it would still be a great improvement.

IBM Connections application development state of the union – part 1

IBM Connections has been on the market for a long time now and has always been a really strong player when it comes to application development. I thought it was time to review where we are application development wise over what will probably be a couple of posts. First off is APIs…

IBM Connections is and always has been strong from an API point of view – there is an API for almost all areas of the product and there always has been. I think IBM Connections was the first product (maybe still the only one) built by IBM Collaboration Solutions using an API-first approach, or at least with a strong emphasis on APIs.

The APIs are extensive and pretty straightforward to use. The main caveat is that they were designed a looooong time ago and haven't been updated since. I went to the documentation to check, and the majority of the APIs haven't changed one bit since v. 2.5. This means they still use the once-cool Atom Publishing Protocol with lots of XML and lots of namespaces. Stability is very nice, but an XML based API is really appdev unfriendly these days, and the use of namespaces makes it even worse and difficult to handle – you either handle the complexity or you simply choose to parse without namespaces. Extracting data is done using either XPath or string parsing, neither of which is easy or performs well, and again the namespaces make XPath notoriously difficult.
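To show what I mean, here is roughly what it takes in plain Java just to read a single element from an Atom document once namespaces are involved – a minimal sketch with error handling omitted:

import java.io.StringReader;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class AtomTitle {
  public static void main(String[] args) throws Exception {
    String atom = "<feed xmlns=\"http://www.w3.org/2005/Atom\">" +
        "<entry><title>My community</title></entry></feed>";

    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    dbf.setNamespaceAware(true); // forget this and nothing resolves
    Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(atom)));

    // XPath 1.0 has no default namespace so every step needs a prefix
    // that you map yourself via a NamespaceContext
    XPath xpath = XPathFactory.newInstance().newXPath();
    xpath.setNamespaceContext(new NamespaceContext() {
      public String getNamespaceURI(String prefix) {
        return "a".equals(prefix) ? "http://www.w3.org/2005/Atom" : XMLConstants.NULL_NS_URI;
      }
      public String getPrefix(String uri) { return null; }
      public Iterator<String> getPrefixes(String uri) { return null; }
    });

    System.out.println(xpath.evaluate("/a:feed/a:entry/a:title", doc)); // My community
  }
}

Compare that to a one-line property access on a JSON payload and the problem is obvious.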

This being said, there is one exception to the XML based nature of the APIs. When Activity Streams were added in IBM Connections 4.5, the feature was brought to market with a JSON based API and it is still the only component in the IBM Connections suite using a JSON based API. Due to its roots in OpenSocial the API was however hard to grasp, and I don't think it got much traction despite my best efforts to make it approachable. My presentation on how to use the Activity Stream remains one of my most popular conference presentations.
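For reference, an event posted to the Activity Stream API is a JSON structure roughly along these lines – the shape follows the ActivityStreams 1.0 / OpenSocial conventions the API was built on, so treat the exact fields as an approximation and check the documentation:

{
  "actor": { "id": "@me" },
  "verb": "post",
  "object": {
    "id": "urn:myapp:build:42",
    "objectType": "note",
    "displayName": "Build 42 finished",
    "url": "https://myapp.example.com/builds/42"
  }
}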

When all is said and done, I think the most important thing IBM could do for IBM Connections API wise would be to update the APIs. In today's world a JSON based API is really expected and is the de facto way of accessing data. Do away with XML and namespaces and adopt a standards based JSON approach to messages sent to and returned from the server. Of course the legacy Atom based API should be left in, but it should really be augmented with a JSON counterpart as soon as possible.

Besides the APIs there is also a set of SPIs (Service Provider Interfaces) for stuff like hooking into the event subsystem, discovering features etc. The event SPI is pretty well designed and very useful. However it seems it was never really polished, completed and designed for customer / partner consumption, as much of the functionality is reserved for IBM widgets and was never documented or supported. Other pieces of the SPIs can only run within the same JVM as IBM Connections, which really makes them unusable from a partner perspective.

The worst thing about the APIs and SPIs is, however, what is not there…

There is no support for:

  • Programmatic user registration or user deletion / deactivation.
  • Feature discovery from outside the JVM.
  • Getting a complete member list for a community based on (LDAP) directory groups. The community membership API returns a list of members, some of which may be groups. If that's the case, good luck. Your best / only option is to use WIM (WebSphere Identity Manager) to resolve the group and then manually combine the result with the direct community members. And if you are using IBM Domino as the LDAP server, start by figuring out how IBM mangled the group UID before resolving it in WIM. That's a can of worms in itself and maybe worth a blog post of its own.
  • A super-user that can access data on all users' behalf.
  • Easily reusing the IBM Connections header bar when running IBM Connections on-premises.

I don't want to finish this post on too pessimistic a note, but there is really room for improvement from the API / integration standpoint. IBM needs to step up to the plate, update the platform and make it current from an API perspective. Oh, and while you are at it: document the APIs, create up-to-date examples and actually make the documentation discoverable using Google…

Software Dependency Management and the associated risks

Being a Maven convert and a guy who likes to dabble in programming, I find this topic very interesting, albeit not one I've thought much about – and I guess that's true for most. Or let's put it another way: after you start using Maven, npm, pip or whatever other dependency management tool fits the job, you think of dependency management as a done deal. Not having to download a jar / package manually makes it easier and thus, for some reason, less worrisome to add a dependency. That was until this morning, when I read a great post titled Developer Supply Chain Management by Ted Neward. If you're a programmer and you use Maven or npm or pip or whatever other automated dependency management tool, you really should read this.

And if you use it as part of your product development cycle you should read it. Twice… And then act – part of which is talking to the rest of the team about it.

Thinking about dependency management and how to store dependencies should probably come back front and center, and this should be a lesson to us all. If nothing else you should implement a local – dare I say on-premises – caching dependency and/or artifact server so that all dependencies are cached, stored and backed up locally (in a datastore you control). At the very least, enforce that all automated build servers download through the artifact server so that every dependency that goes into a build is known, cached and kept.
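For Maven the enforcement part can be as simple as a mirror entry in settings.xml on the build servers, pointing every download at the internal artifact server (a sketch – repo.example.com is a hypothetical internal Nexus / Artifactory instance):

<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Local caching artifact server</name>
      <url>https://repo.example.com/repository/maven-public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>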

It’s definitely something to think about.

Simple tool to save certificate chain certificates as PEM files

It's been increasingly frustrating to support our OnTime Group Calendar for Microsoft customers with on-prem Exchange, as they usually use a self-signed certificate for TLS, resulting in Java throwing a fit. Getting the certificate chain using a browser or OpenSSL is easy enough, but for some customers that still proves too difficult. I couldn't find a tool to automate the export so I wrote a small one in Java. The tool simply takes the address of the site to contact and saves the certificate chain as individual PEM files ready for import into the Java keystore. Note that there is no fingerprint check so use at your own risk. Using the tool goes like so:

java Main https://www.ibm.com

The code is available on GitHub and doubles as an example of how to accept all certificates using a custom TrustManager and HostnameVerifier. I even threw in some Java 8 to make Rene happy 🙂
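The gist of it is below – a minimal sketch of the approach (not the actual tool's source): a TrustManager that accepts any chain, a HostnameVerifier that accepts any host, and a loop writing each certificate in the chain as PEM:

import java.io.FileWriter;
import java.net.URL;
import java.security.SecureRandom;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class DumpChain {
  public static void main(String[] args) throws Exception {
    // trust manager that accepts any chain - fine for exporting
    // certificates, dangerous for anything else
    TrustManager[] trustAll = { new X509TrustManager() {
      public void checkClientTrusted(X509Certificate[] chain, String authType) {}
      public void checkServerTrusted(X509Certificate[] chain, String authType) {}
      public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    } };
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, trustAll, new SecureRandom());

    HttpsURLConnection conn = (HttpsURLConnection) new URL(args[0]).openConnection();
    conn.setSSLSocketFactory(ctx.getSocketFactory());
    conn.setHostnameVerifier((hostname, session) -> true); // some Java 8 for Rene
    conn.connect();

    // write each certificate in the presented chain as certN.pem
    Base64.Encoder b64 = Base64.getMimeEncoder(64, "\n".getBytes());
    int n = 0;
    for (Certificate cert : conn.getServerCertificates()) {
      try (FileWriter w = new FileWriter("cert" + (n++) + ".pem")) {
        w.write("-----BEGIN CERTIFICATE-----\n");
        w.write(b64.encodeToString(cert.getEncoded()));
        w.write("\n-----END CERTIFICATE-----\n");
      }
    }
    conn.disconnect();
  }
}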

YMMV…

Actually making Eclipse work for plugin appdev on Windows 10 64 bit

Just yesterday I blogged about how easy it was to get Eclipse configured for IBM Notes 9.0.1 plugin appdev. And it was easy – it just didn't work for real development. After I imported all the plugins for the OnTime Group Calendar clients, nothing would compile. After looking for a while I could see that most errors were from resolving the SWT classes such as Display, Canvas and so on, and that made me think of a similar issue I had on Mac. I dove into the target platform definition, went to the Environment tab and set the following:

  • Operating system: win32
  • Windowing system: win32
  • Architecture: x86

I also set the Java Runtime Environment to the IBM Notes JVM I defined yesterday. After that change to the target platform everything rebuilt – now without any errors – and I could launch the products from Eclipse.

Fake names

Needed to generate fake names and emails today for a stub API I'm developing. Found a GitHub gist that did the trick. Very easy. Just had to install the faker gem first:

$ sudo gem install faker

The example generates CSV but I needed C# object instances, so I changed the code as follows:

require 'faker'
require 'securerandom'

File.open("output.txt", "wb") do |file|
  i=0
  until i == 500
    uuid = SecureRandom.uuid
    fake = "new SearchUserResult("" + Faker::Name.name + "", "" +
        Faker::Internet.email + "", UserType.Person, "" + uuid + ""),n"
    file << fake
    i=i+1
  end
end
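Each generated line in output.txt then looks something like this (names, emails and UUIDs will differ of course):

new SearchUserResult("Sofia Larsen", "sofia.larsen@example.org", UserType.Person, "0f8fad5b-d9cb-469f-a165-70867728950e"),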

Configuring Eclipse Neon on Windows 10 64 bit for Notes plugin development

A member of the community reached out to me yesterday to ask whether I recognized a specific error message he was encountering trying to make Eclipse launch Notes 9 correctly for plugin development. I came back with a few suggestions but as I hadn’t tried on Windows 10 yet I really couldn’t offer much help. This morning I tried configuring Notes 9.0.1 for plugin development on Windows 10 and it went smoothly. Here are the steps I took:

  1. Download Eclipse Neon for RCP and RAP development bundle for Windows 64 bit
  2. Unzip bundle and launch Eclipse
  3. Follow the steps described in my Configure Eclipse 4.2 for Notes 9 post
  4. When configuring variables I used the following values:
    • install_id: 1460097942140
    • rcp.base_version: 9.0.1_20131002-1404
    • rcp.home as described in above post

That’s it really…

Reserved characters in WebSphere Application Server passwords… Really!?

Had somewhat of a surprise today when IBM Support informed us that the issue our customer was experiencing could be due to unsupported characters in the password of the user mapped to the connectionsAdmin J2C alias. Say what!? But apparently there are restrictions on which characters one can use. The password we were using had an exclamation point (!) in it, which is a no-no. The customer is currently on WebSphere Application Server 8.5.5.6 and support suggested we try to upgrade to 8.5.5.7. The funny thing is that the customer has been using that password for years, so it must have worked previously.

IBM Connections wiki: Special characters in password

WebSphere Application Server 8.5.5 InfoCenter: Characters that are valid for user IDs and passwords

First Git hook for Atlassian Bitbucket (formerly Atlassian Stash)

For my current project I've set up a full CI pipeline to automate the build process of the application (an EAR file in this case) and deploy it to the test server. The build itself is a Maven build that runs all the tests and builds the EAR file. We are a number of people working on the application – some do frontend work (mainly JavaScript) and I do the backend. For this project the Git repository is split into three branches – one for backend (feature/eventboard_backend), one for frontend (feature/eventboard_frontend) and one that merges the two into the final result for building (feature/eventboard). So I was setting all this up – had the build script ready, the build server ready (Atlassian Bamboo) and the deployment script working over SCP/SSH, but I needed a nice way to automatically merge the two development branches into the main branch for the build.

The way I solved it was to write a Git post-receive hook on the Git server side (Atlassian Bitbucket). The hook detects a push to either of the two development branches and, when it does, merges the two into the main branch and pushes that branch back up. This push is in turn detected by Atlassian Bamboo, which then kicks off the build and the deployment. So nice. Even though it took me a couple of hours to configure, it has already saved so much time and all builds and deployments are consistent.

Today I extended the build script to monitor another branch, so I now deploy into both our “bleeding edge” environment and our test environment.

The post-receive hook is written in bash and shown below. It took me a while to grok, but a hook is simply a script that runs as the server OS user whenever something happens. The script is free to act as another user, so mine pushes as a special Git user so we can distinguish which user does what. It also means that I could restrict access to the feature/eventboard branch so it's only writable by this build user.

The only caveat about this hook was that we are using Atlassian Bitbucket, which apparently only accepts hooks written in Java. There is however a way to add bash-based hooks directly in the file system on the server under /<bitbucket-home>/shared/data/repositories/<repoid>, where the repoid can be found in the repository settings on the Bitbucket server if logged in as admin.

#!/bin/bash

CHECKOUT_NAME=eventboard
MERGE_INTO_BRANCH=feature/eventboard
MONITOR_BRANCH1=feature/eventboard_web
MONITOR_BRANCH2=feature/eventboard_backend
WORKING_DIR=/local/stash-hooks-work

while read oldrev newrev refname
do
        branch=$(git rev-parse --symbolic --abbrev-ref $refname)
        echo "Currently on branch '$branch'"
        if [ "$MONITOR_BRANCH1" == "$branch" ] || [ "$MONITOR_BRANCH2" == "$branch" ]; then
                echo "Detected commit on $MONITOR_BRANCH1 or $MONITOR_BRANCH2 - merging..."
                if [ ! -d "$WORKING_DIR" ]; then
                        mkdir -p $WORKING_DIR
                fi
                cd $WORKING_DIR
                unset GIT_DIR
                if [ ! -d "$CHECKOUT_NAME" ]; then
                        # repo doesn't exist - abort
                        echo "*** Required repo for post-receive hook not configured - exiting..."
                        exit
                else
                        cd $CHECKOUT_NAME
                        git reset --hard
                        git checkout $MERGE_INTO_BRANCH
                        git pull origin $MERGE_INTO_BRANCH
                fi
                git fetch origin $MONITOR_BRANCH1:$MONITOR_BRANCH1
                git fetch origin $MONITOR_BRANCH2:$MONITOR_BRANCH2
                git merge $MONITOR_BRANCH1 $MONITOR_BRANCH2 -m "Merged '$MONITOR_BRANCH1' and '$MONITOR_BRANCH2' into '$MERGE_INTO_BRANCH'"
                git push origin $MERGE_INTO_BRANCH
        fi
done

Using Tomcat APR (Apache Portable Runtime) on Mac

I had to document some steps using the Apache Portable Runtime (APR) and TLS configuration, and for that I needed APR on my Mac. I couldn't really make it work at first, but after fiddling a bit I figured it out. Here are the steps in bullet form:

Download APR and compile

  • Download APR from Apache (http://apr.apache.org/). I downloaded v. 1.5.2.
  • Compile in Terminal.
    • CFLAGS='-arch x86_64' ./configure
    • make
    • make test
    • make install

Install OpenSSL with headers

The OpenSSL that ships with Mac doesn't come with the header files, so you cannot compile the Tomcat native library by default. To fix that, use Homebrew to install a newer version of OpenSSL first.

  • Install Homebrew per instructions on the website
  • brew install openssl

Compile Tomcat native library

The Tomcat native library is supplied with the Tomcat download. My Tomcat was v. 8.0.17. Steps as below:

  • cd Tomcat8.0.17/bin
  • gunzip tomcat-native.tar.gz
  • tar xf tomcat-native.tar
  • cd tomcat-native-1.1.32-src/jni/native
  • CFLAGS='-arch x86_64' ./configure --with-apr=/usr/local/apr --with-ssl=/usr/local/opt/openssl
  • make
  • make install

Configure Tomcat to use APR

This step is basically just to make sure that the Tomcat native library is on the Java Library path. Do as follows:

  • cd Tomcat8.0.17/bin
  • vi setenv.sh
  • Add text: JAVA_OPTS="-Djava.library.path=/usr/local/apr/lib"
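With the native library loading, the TLS part is a matter of an APR style HTTPS connector in server.xml. A minimal sketch, assuming your certificate and key live at the paths shown (adjust ports and paths to taste):

<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
           maxThreads="150" SSLEnabled="true"
           SSLCertificateFile="/usr/local/etc/tomcat/server.crt"
           SSLCertificateKeyFile="/usr/local/etc/tomcat/server.key" />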

Now when you run Tomcat using catalina.sh you should see a line like the one below, stating which version of the native library was loaded.

15-May-2016 18:14:01.106 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent
     Loaded APR based Apache Tomcat Native library 1.1.32 using APR version 1.5.2.

Further reading: