Part 1 was about APIs and SPIs - this part will be about widgets, or apps as IBM likes to call them now. There are big differences between how widgets / apps work on-premises, in the cloud and on Mobile. Let us start with Mobile as it's the quickest one to address but also the most depressing...
Besides adding menu items to the IBM Connections mobile app menu (the one that slides in from the left) and having the content load in an embedded browser control, there is no support for widgets / apps on Mobile. None. Zero. Given that IBM Connections has always been marketed as social software that focuses on the individual, and where all content and data is tied to the user, it has always surprised me how little focus IBM has put on Mobile from the ISV perspective. As an ISV, the obvious thing to want is to pivot off a profile, a file, a community etc. and launch into a custom app supplying that context.
I have always been a big advocate for adding widgets / apps / actions to IBM Connections Mobile. And yes, I know that adding custom content to an iOS or Android app is hard and there are security implications, but there are ways around it. Simply supporting declarative actions using URL token replacement would go a long way (on-premises they could be loaded from mobile-config.xml). Allow me to add an action specifying the feature it should go into (Profiles, Communities, Files etc.) and allow me to add a URL pattern to it. The URL pattern should support token replacement so my URL could be something like https://myapp.example.com/people/%uid% or https://myapp.example.com/people/%email% or https://myapp.example.com/files/%fileid%. Once activated, the app could grab the touch action, replace the tokens in the URL based on the current record and load the result in the browser control it already uses for those aforementioned left-menu shortcuts.
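To make the idea concrete, here is a minimal sketch of the token replacement I have in mind. Nothing here is an actual Connections API - the tokens, record fields and URLs are all made up for illustration:

```python
import re
from urllib.parse import quote


def expand_action_url(pattern: str, record: dict) -> str:
    """Replace %token% placeholders in a declarative action URL with
    URL-encoded values from the current record (a profile, file, etc.)."""
    def replace(match):
        token = match.group(1)
        if token not in record:
            raise KeyError(f"record has no value for token %{token}%")
        # URL-encode the value so it is safe inside a path segment
        return quote(str(record[token]), safe="")
    return re.sub(r"%(\w+)%", replace, pattern)


# Pivoting off a (made-up) profile record into a custom app
url = expand_action_url(
    "https://myapp.example.com/people/%uid%",
    {"uid": "jdoe", "email": "jdoe@example.com"},
)
```

The mobile app would only need to run something like this against the current record before handing the URL to its browser control.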
Obviously I'm always way too optimistic, but how hard could that be? And I think it would give customers and ISVs a ton of options for pivoting off content in IBM Connections into custom apps.
Thinking further about it - if a native app for the data was present on the device and a registered URL scheme was used, the device would probably launch straight into that app, carrying the context along. Maybe even sending along some credentials to make the app switch transparent to the user - how cool would that be? But even if that wasn't the case (I know iOS added restrictions on what and how many URL schemes an app can query) and I ended up in a web app, it would still be a great improvement.
IBM Connections has been on the market for a long time now and has always been a really strong player when it comes to application development. I thought it was time to review where we are, application development wise, over what will probably be a couple of posts. First off is APIs...
IBM Connections is, and has always been, strong from an API point of view - there is an API for almost all areas of the product and there always has been. I think IBM Connections was the first product (maybe still the only one) built by IBM Collaboration Solutions using an API-first approach, or at least with a strong emphasis on APIs.
The APIs are extensive and pretty straightforward to use. The main caveat is that they were designed a looooong time ago and haven't been updated since. I went to the documentation to check, and the majority of the APIs haven't changed one bit since v. 2.5. This means they still use the once-cool Atom publishing protocol with lots of XML and lots of namespaces. Stability is very nice, but an XML based API is really appdev unfriendly these days, and the use of namespaces makes it even worse and more difficult to handle - you either handle the complexity or you simply choose to parse without namespaces. Extracting data is done using either XPath or string parsing, neither of which is easy or performs well, and again the namespaces make XPath notoriously difficult.
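To illustrate the namespace pain, here is a sketch of parsing a simplified, made-up Atom entry in the style of a Connections feed. The Atom namespace URI is standard; the snx URI shown is the one Connections feeds commonly use, but treat it as an assumption and check it against the feed you actually get back:

```python
import xml.etree.ElementTree as ET

# Simplified, made-up Atom entry in the style of a Connections feed
ATOM_ENTRY = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:snx="http://www.ibm.com/xmlns/prod/sn">
  <title type="text">John Doe</title>
  <snx:userid>abc-123</snx:userid>
</entry>"""

# Every single query needs this namespace map - forget a prefix
# and the lookup silently matches nothing
NS = {
    "a": "http://www.w3.org/2005/Atom",
    "snx": "http://www.ibm.com/xmlns/prod/sn",
}

root = ET.fromstring(ATOM_ENTRY)
title = root.findtext("a:title", namespaces=NS)
userid = root.findtext("snx:userid", namespaces=NS)
```

Compare that ceremony with a single `data["title"]` lookup on a JSON response and the appdev-unfriendliness is obvious.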
That said, there is one exception to the XML based nature of the APIs. When Activity Streams were added in IBM Connections 4.5, they were brought to market with a JSON based API, and Activity Streams remains the only component in the IBM Connections suite with a JSON based API. Due to its roots in OpenSocial, however, the API was hard to grasp, and I don't think it got much traction despite my best efforts to make it approachable. My presentation on how to use the Activity Stream remains one of my most popular presentations at conferences.
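For contrast, here is what building an activity entry looks like when the payload is plain JSON. The field names follow the general actor / verb / object shape of the Activity Streams 1.0 draft that OpenSocial used; all IDs and URLs below are made up, and the real Connections endpoint expects more fields than this sketch shows:

```python
import json

# A minimal activity entry in the actor / verb / object style
# of Activity Streams 1.0 - every ID and URL here is made up
entry = {
    "actor": {"id": "urn:example:profiles.person:jdoe"},
    "verb": "post",
    "title": "John Doe created a sales report",
    "object": {
        "id": "report-42",
        "displayName": "Q3 sales report",
        "url": "https://myapp.example.com/reports/42",
    },
}

# This string is what would be POSTed to the activity stream endpoint
payload = json.dumps(entry)
```

No namespaces, no XPath - which is exactly why the rest of the product should follow suit.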
When all is said and done, I think the most important thing IBM could do for IBM Connections, API wise, is update the APIs. In today's world a JSON based API is simply expected and is the de facto way of accessing data. Do away with XML and namespaces and adopt a standards based JSON approach for messages sent to and returned from the server. Of course the legacy Atom based API should be left in, but it should be augmented with a JSON equivalent as soon as possible.
Besides the APIs there is also a set of SPIs (Service Provider Interfaces) for things like hooking into the event subsystem, discovering features etc. The event SPI is pretty well designed and very useful. However, it seems it was never really polished, completed and designed for customer / partner consumption, as much of the functionality is reserved for IBM widgets and was never documented or supported. Other pieces of the SPIs can only run within the same JVM as IBM Connections, which really makes them unusable from a partner perspective.
The worst thing about the APIs and SPIs, however, is what is not there...
There is no support for programmatic user registration or user deletion / deactivation. There is no support for feature discovery from outside the JVM. There is no support for getting a complete member list for a community based on (LDAP) directory groups. Using the community membership API will return a list of members, some of which may be groups. If that's the case, good luck. Your best / only option is to use WIM (WebSphere Identity Manager) to resolve the group and then manually combine that result with the direct community members. Of course, if you are using IBM Domino as the LDAP server, start by figuring out how IBM mangled the group UID before resolving it in WIM. That's really a can of worms and maybe worth a blog post of its own. There is no support for a super-user that can access data on all users' behalf. There is no support for easily reusing the IBM Connections header bar if using IBM Connections on-premises.

I don't want to finish this post being too pessimistic, but there is really room for improvement from the API / integration standpoint. IBM needs to step up to the plate, update the platform and make it current from an API perspective. Oh, and while you are at it: document the APIs, create up to date examples and actually make the documentation discoverable using Google...
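The manual merge I'm describing looks roughly like this. Both callables are hypothetical stand-ins - one for the community membership API call, one for the WIM group resolution - since neither is something the platform gives you out of the box:

```python
def complete_member_list(fetch_community_members, resolve_group_via_wim,
                         community_id):
    """Combine direct community members with the members of any nested
    (LDAP) groups - the merge Connections won't do for you.

    fetch_community_members(community_id) -> iterable of dicts with a
    "type" of "person" or "group"; resolve_group_via_wim(group_id) ->
    iterable of member emails. Both are hypothetical stand-ins.
    """
    members = set()
    for entry in fetch_community_members(community_id):
        if entry["type"] == "group":
            # Resolve the group in the directory and fold in its members;
            # a set deduplicates people who are also direct members
            members.update(resolve_group_via_wim(entry["id"]))
        else:
            members.add(entry["email"])
    return sorted(members)
```

And remember: with Domino as the LDAP server, the group ID handed to the resolver first has to be un-mangled, which this sketch conveniently ignores.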
Being a Maven convert and a guy who likes to dabble in programming, this topic is very interesting, albeit not one I've thought much about - and I guess that's true for most of us. Or let's put it another way: after you start using Maven, npm, pip or whatever other dependency management tool fits the job, you think of dependency management as a done deal. Not having to download a jar / package manually makes it easier and thus, for some reason, less worrisome to add a dependency. That was true until this morning, when I read a great post titled Developer Supply Chain Management by Ted Neward. If you're a programmer and you use Maven or npm or pip or any other automated dependency management tool, you really should read it.
And if you use it as part of your product development cycle you should read it. Twice... And then act - part of which is talking to the rest of the team about it.
Thinking about dependency management and how to save dependencies should come back front and center, and this should be a lesson to us all. If nothing else, you should implement a local - dare I say on-premises - caching dependency and/or artifact server so that all dependencies are cached, stored and backed up locally (in a datastore you control). At the very least, enforce that all automated build servers download through the artifact server so that all dependencies that go into a build are known, cached and kept.
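With Maven, for example, routing every download through such a server is a small change to settings.xml; the URL below is a placeholder for your own repository manager (Nexus, Artifactory or similar):

```xml
<!-- ~/.m2/settings.xml - route ALL dependency downloads through
     a locally controlled repository manager (placeholder URL) -->
<settings>
  <mirrors>
    <mirror>
      <id>local-artifact-cache</id>
      <!-- mirrorOf "*" intercepts every repository, including central -->
      <mirrorOf>*</mirrorOf>
      <url>https://artifacts.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```

Ship that file to the build servers and every artifact that ever goes into a build is known, cached and kept in a datastore you control.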
It's definitely something to think about.