Using SalesforceDX to automate getting Apex class test coverage percentages

So SalesforceDX is good for many things, but this particular blog post is about how it provides easy access to something that is otherwise hard or cumbersome to get at: Apex class test coverage. The data is available through other means such as the UI and the Tooling API, but those either take manual work (clicking) or require additional plumbing to set up and extract. With SalesforceDX it's surprisingly easy.

Contrary to popular belief, SalesforceDX may be used with any org and not just the scratch orgs that SalesforceDX affords for development. Connecting to any org is as simple as using the Force to do the OAuth dance:

$ sfdx force:auth:web:login

For additional points you can give the org connection an alias for easy reference (using --setalias) and specify the login URL if required (using --instanceurl), e.g. if you're adding a sandbox.

$ sfdx force:auth:web:login --setalias MyOrg --instanceurl https://test.salesforce.com

Once you have the org connection you can use force:apex:test:run to run tests and force:apex:test:report to – surprise – return the test report.

$ sfdx force:apex:test:run -u mheisterberg@example.com.appdev
Run "sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev" to retrieve test results.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev
=== Test Results
TEST NAME OUTCOME MESSAGE RUNTIME (MS)
────────────────────────────────────────────────────────────────── ─────── ─────── ────────────
ChangePasswordControllerTest.testChangePasswordController Pass 11
AccountTriggerHandlerTest.testSetAccountOwner Pass 3623
AccountTriggerHandlerTest.testSetContactId Pass 114
AccountTriggerHandlerTest.testSetLowecaseEmail Pass 215
AccountTriggerHandlerTest.testSetPCAK Pass 83
AccountTriggerHandlerTest.testValidateEmailUniquenessNegative Pass 42
AccountTriggerHandlerTest.testValidateEmailUniquenessPositive Pass 80
AddressesListRestTest.testGetAddressesList Pass 11097
AddressRestTest.testDeleteAddress Pass 1388
AddressRestTest.testGetAddress Pass 753
AddressRestTest.testPostAddress Pass 734
AddressRestTest.testPutAddress Pass 731
ConsentRestTest.testGetConsent Pass 959
ConsentRestTest.testPostConsent Pass 768
ConsentRestTest.testPutConsent Pass 975
ConsentsListRestTest.testGetConsensList Pass 3761
ConsumerRestTest.testGetConsumer Pass 1004
ConsumerRestTest.testPostConsumer Pass 988
MarketRelationTriggerHandlerTest.testBehavior Pass 8
ProfileRestTest.testDeleteProfile Pass 1071
ProfileRestTest.testGetProfile Pass 710
ProfileRestTest.testPostProfile Pass 739
ProfileRestTest.testPutProfile Pass 679
ProfilesListRestTest.testGetProfilesList Pass 921
ForgotPasswordControllerTest.testForgotPasswordController Pass 29
MyProfilePageControllerTest.testSave Pass 258
SiteLoginControllerTest.testSiteLoginController Pass 17
SiteRegisterControllerTest.testRegistration Pass 16
=== Test Summary
NAME VALUE
─────────────────── ─────────────────────────────
Outcome Passed
Tests Ran 28
Passing 28
Failing 0
Skipped 0
Pass Rate 100%
Fail Rate 0%
Test Start Time Apr 13, 2018 10:18 AM
Test Execution Time 31774 ms
Test Total Time 31774 ms
Command Time 50941 ms
Hostname https://cs85.salesforce.com
Org Id 00D6E0000008eojUAA
Username mheisterberg@example.com.appdev
Test Run Id 7076E00000Uo5sc
User Id 0051r0000087iv9AAA

It’s pretty nifty huh!?

Again for added points, add --json to the test report command to get the data back as JSON. And if you already have something that accepts test results in, say, JUnit format, you can just add "--resultformat junit" and boom! You'll get the test report in JUnit XML format. But everything started with me wanting to retrieve code coverage data, and that hasn't been part of the output so far. Again SalesforceDX to the rescue… Just add --codecoverage (or -c) and you'll receive code coverage percentages as part of the report as well.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev -c
== Apex Code Coverage
ID NAME % COVERED UNCOVERED LINES
────────────────── ───────────────────────────────── ────────────────── ────────────────────────────────────────────────────────────────────
01p6E000000aSHvQAM SiteLoginController 100%
01p6E000000aSHxQAM SiteRegisterController 81.48148148148148% 39,40,43,44,45
01p6E000000aSHzQAM ChangePasswordController 100%
01p6E000000aSI1QAM ForgotPasswordController 88.88888888888889% 15
01p6E000000aSI3QAM MyProfilePageController 87.5% 21,37,38
01p6E000000brQtQAI AccountTriggerHandler 95% 61,63,66,181
01p6E000000cBdhQAE MarketRelationTriggerHandler 78% 38,40,41,42
01p6E000000csObQAI Wrappers 98% 5
01p6E000000aIXsQAM ConsentRest 79% 35,36,54,55,67,69,70,88,89,104,105,106,119,121,125
01p6E000000ak0yQAA SegmentBuilder 100%
01q6E0000004xLIQAY AccountTrigger 100%
01q6E0000004zB0QAI MarketRelationTrigger 80% 13
01p6E000000aUJVQA2 UserBuilder 100%
01p6E000000aJjqQAE ConsentsListRest 94.11764705882352% 48
01p6E000000cnhQQAQ AddressRest 78% 35,36,60,68,70,92,93,110,111,112,126,128,132,149,150,162,164,165
01p6E000000caCxQAI ConsumerRest 84% 17,18,27,28,49,50,51,111,120,146,148,149,159,220,222,225,284,290,297
01p6E000000aIXxQAM ProfileRest 76% 27,28,49,57,59,78,79,91,93,94,111,112,127,128,129,141,143,147
01p6E000000aJjWQAU ProfilesListRest 94.11764705882352% 41
01p6E000000cr6iQAA AddressesListRest 83.33333333333334% 39,52,54
=== Test Results
<snipped>

Combine that with --json and you have the foundation for automating this. So sweet. You could even write a little script to output this any way you like.
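
To give an idea of what that could look like, here is a minimal node.js sketch that reads the JSON report from stdin and prints the coverage per class, worst covered first. Note that the exact shape of the JSON output (a result.coverage.coverage array with name and coveredPercent fields) is an assumption on my part; inspect the output from your own CLI version and adjust accordingly.

// coverage.js - pipe the JSON report into this script, e.g.:
//   sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev -c --json | node coverage.js
// NOTE: the result.coverage.coverage path and the name/coveredPercent fields are assumptions
let input = ''
process.stdin.on('data', (chunk) => input += chunk)
process.stdin.on('end', () => {
  const report = JSON.parse(input)
  const classes = (report.result && report.result.coverage && report.result.coverage.coverage) || []
  classes
    .sort((a, b) => a.coveredPercent - b.coveredPercent)
    .forEach((c) => console.log(`${c.name}: ${Math.round(c.coveredPercent)}%`))
})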

Happy scripting…

Loving streams in node.js

Node.js is a great platform for writing scripts. Besides being JavaScript and besides having access to npm, it lends itself very well to data processing as it's completely async unless you specifically tell it not to be. One of the best aspects of node.js as a data processing language, in my opinion, is the concept of streams and using streams to process data. Using streams can drastically lower memory consumption by processing data as it comes down the stream instead of keeping everything in memory at any one time. Think SAX instead of DOM parsing.

In node.js using streams is easy. Basically data flows from Readable streams to Writable streams. Think producers of data and consumers of data. Buffering is handled automatically (at least in the built-in streams), and if a downstream consumer stops processing, the upstream producer will stop producing. Elegant and easy. Readable streams can be stuff like files or network sockets, and Writable streams stuff like files or network sockets… or stdout, which in node.js also implements the Writable stream API. Working with streams is like being a plumber, so piping (using the pipe method) is how you connect streams.

An example always helps – the below example reads from alphabet.txt and pipes the data to stdout.

const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(process.stdout)
> a
> b
> c

A simple example, but it works with small and big files without much difference in memory consumption.
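
For contrast, the non-streaming way of doing the same thing reads the entire file into memory as one Buffer before writing anything out, which is fine for alphabet.txt but not for gigabyte-sized files:

const fs = require('fs')
const path = require('path')

// the whole file is buffered in memory before we write it to stdout
fs.readFile(path.join(__dirname, 'alphabet.txt'), (err, data) => {
  if (err) throw err
  process.stdout.write(data)
})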

Sometimes processing is required and for this we use Transform streams (these are basically streams that can read and write). Say that we want to uppercase all characters. It’s easy by piping through a Transform stream and then on to the Writable stream (stdout):

const {Transform} = require('stream')
const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(new Transform({
    transform(chunk, encoding, callback) {
      // chunk is a Buffer 
      let data = chunk.toString().toUpperCase() 
      callback(null, data) 
    }
  }))
  .pipe(process.stdout)
> A
> B
> C

It’s easy to see how streams are very composable and how adding processing steps is easy. The pipeline could even be determined at runtime. The above examples use strings, but streams can also work on objects if required.
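
As an illustration of object mode (my example, not part of the original examples above), here is a pipeline where a Readable produces plain objects and a Transform turns them into lines of text before they hit stdout:

const {Readable, Transform} = require('stream')

// a Readable in object mode that emits two plain objects and then ends
const source = new Readable({
  objectMode: true,
  read() {
    this.push({name: 'AccountTriggerHandler', coverage: 95})
    this.push({name: 'ConsentRest', coverage: 79})
    this.push(null) // no more data
  }
})

source
  .pipe(new Transform({
    writableObjectMode: true,   // accept objects coming in...
    readableObjectMode: false,  // ...but emit plain strings/Buffers
    transform(obj, encoding, callback) {
      callback(null, `${obj.name};${obj.coverage}\n`)
    }
  }))
  .pipe(process.stdout)
> AccountTriggerHandler;95
> ConsentRest;79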

Streams are beautiful but can take some time to master. I highly recommend reading up on streams and starting to get to know them. The “Node.js Streams: Everything you need to know” post is very nice and provides a good overview.

Happy coding!


Lightning Logger

In my Lightning components I always find logging, and remembering how to navigate to URLs using events, to be an issue, so I wrote this small utility base component that I then extend when creating new components. The base component shares its helper with the subcomponents, which allows for easy reuse of the utility functionality. The utility code provides both logging and various other utility functions such as navigating to other objects, presenting toasts etc.

The component definition of the base component is simple (note the {!v.body}, which is key for abstract components to make sure the markup of child components appears):

<aura:component abstract="true" extensible="true">
 <aura:handler name="init" value="{!this}" action="{!c.doinit}" />
 {!v.body}
</aura:component>

The controller is likewise simple with basically only a call to the helper to initialize a named Logger instance and store it in a variable called logger on the helper.

({
  doinit: function(component, event, helper) {
    const logger = helper.getLogger('SFLC_LightningHelper');
    logger.trace('Initializing LightningHelper');
    helper.logger = logger;
  }
})

To use it from another component you first extend the base component:

<aura:component implements="flexipage:availableForAllPageTypes,force:hasRecordId" extends="c:BaseComponent" controller="Foo_LCC">
  <aura:attribute name="recordId" type="Id" />
  <aura:handler name="init" value="{!this}" action="{!c.doinit}" />
  ...
  ...
</aura:component>

Then, in the component controller's initialization handler, create a named utility object (the name is actually used for the logger) and store it in the “util” variable on the helper, making it accessible as helper.util and the logger as helper.util.logger.

({
  doinit: function(component, event, helper) {
    // build utility
    const utility = helper.buildUtilityObject('MyComponent');
    helper.util = utility;
    
    // load data
    helper.loadData(component);
  }
})

From here on out you can simply use helper.util.logger.info, helper.util.logger.debug etc. to log from the component. All log messages are output using the named logger. The log level, which by default is NONE (meaning nothing is logged), is controllable using a URL parameter, and individual loggers may be turned on and off using a URL parameter as well. Please note that the URL format used here doesn’t account for the changes coming to Lightning URLs in Summer ’18 or Winter ’18 (cannot remember which release).
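
To give an idea of what the logging part could look like, below is a simplified sketch of a logger factory in the helper. It is not the actual implementation from the zip at the bottom of the post; the level names and the loglevel URL parameter are purely illustrative.

({
  // simplified sketch of a logger factory - not the actual implementation from the zip.
  // The level names and the 'loglevel' URL parameter are illustrative only.
  LOG_LEVELS: ['NONE', 'ERROR', 'WARN', 'INFO', 'DEBUG', 'TRACE'],

  getLogger: function(name) {
    const helper = this;
    // read the desired log level from the URL, defaulting to NONE (no output)
    const match = /[?&]loglevel=([^&]+)/.exec(window.location.search);
    const level = (match ? match[1] : 'NONE').toUpperCase();
    const threshold = helper.LOG_LEVELS.indexOf(level);

    const write = function(msgLevel, args) {
      if (helper.LOG_LEVELS.indexOf(msgLevel) > threshold) return;
      const prefix = '[' + name + '] [' + msgLevel + ']';
      console.log.apply(console, [prefix].concat(Array.prototype.slice.call(args)));
    };
    return {
      error: function() { write('ERROR', arguments); },
      warn:  function() { write('WARN', arguments); },
      info:  function() { write('INFO', arguments); },
      debug: function() { write('DEBUG', arguments); },
      trace: function() { write('TRACE', arguments); }
    };
  }
})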

The utility functionality can be used from both controller and helper, as shown in the loadData method below, which uses the utility both to invoke a remote action and to log:

({
  loadData: function(component) {
    // load data
    const helper = this;
    helper.util.invokeRemoteAction(component, 
      'getData', {'recordId': component.get("v.recordId")}, 
      function(err, data) {
        if (err) {
          helper.util.toast.error('Data Load Error', 'Unable to load data from server ('+ err +'). If the issue persists contact your System Administrator.', {sticky: true}); 
          return;
        }
        helper.util.logger.debug('Received data from endpoint', data); 
      }
    )
  }
})
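
The real invokeRemoteAction lives in the helper in the zip below, but conceptually it is just a node-style callback wrapper around the standard Aura server action plumbing. Something along these lines (a sketch, not the actual code):

({
  // conceptual sketch of the invokeRemoteAction wrapper - not the actual code from the zip
  invokeRemoteAction: function(component, name, params, callback) {
    const action = component.get('c.' + name);  // e.g. 'c.getData' on the Apex controller
    action.setParams(params);
    action.setCallback(this, function(response) {
      const state = response.getState();
      if (state === 'SUCCESS') {
        callback(null, response.getReturnValue());
      } else {
        // surface errors node-style as the first callback argument
        const errors = response.getError();
        callback(errors && errors.length ? errors[0].message : 'Unknown error');
      }
    });
    $A.enqueueAction(action);
  }
})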

Zip file with Controller and Helper: BaseComponent

Passed Salesforce Platform Developer 1 – thoughts and take aways

So I had it on my V2MOM all last financial year to complete the Salesforce Platform Developer 1 certification, which, although it is an optional certification for me, was something I wanted to try and tackle. I signed up for the exam in the beginning of January, but before I could take it the exam was cancelled. It turned out that the questions and answers to the exam had been leaked online (well duh!), so my exam was cancelled until a new and updated exam was ready. So I waited, and one day I was asked whether I wanted to try the beta exam of the new certification. It did involve me having to go to a testing center in person (I normally do them remotely proctored), but I agreed and took the exam.

So to call this a new exam is quite an overstatement.

Now I didn’t do the old exam, but I’m pretty sure they simply shuffled the questions and rewrote a few. The exam is (still) VERY VisualForce heavy and peppered with weird Apex questions that in my opinion don’t really fit the purview of an entry level certification. There is VERY little Lightning, which I found a bit odd with VisualForce being considered legacy in my book and component based development using Lightning being the status quo and the future. But alright, who am I to judge…

It being a beta exam, I didn’t get the result right away as you normally do, but I did pass it, which I consider more luck than anything else. What I really wanted to share here is that Platform Developer 1 is (still) a legacy exam and solid working knowledge of VisualForce is required. Forget about Lightning – VisualForce is the name of the game for this exam. I did share some constructive feedback about the exam internally, and I would really like to see the current Platform Developer 1 certification parked and marked as legacy and a truly new exam brought to light. That exam should bring Apex and Lightning to the forefront, maybe sprinkling in a few questions on VisualForce.

My real point here is that no developer new to the platform will ever – or should have to – learn VisualForce. Crossing my fingers for a new / additional exam.


GDPR

So I’m not usually a guy who enjoys legalese and toying with paragraphs, but I must admit that GDPR interests me. Both as a consumer and as a professional. As a consumer I find it a great initiative to protect my rights and privacy. I find the privacy regulations and the added responsibility put on service providers to be a welcome change. With the economic penalties outlined in the legislation, the GDPR has to be respected. And I think it will be – maybe once the initial battles have been fought.

As a professional I have a different approach and a different take on it. While also interesting, the burden put on companies is very big and the challenges that have to be solved can seem somewhat insurmountable. Thinking about data in CRM, ERP, file shares, web site logs, e-commerce and data from POS terminals, to name but a few, makes this potentially a very big thing. What does it mean to allow transparency and data portability? What does it mean to be forgotten? With an IP address being considered PII (personally identifiable information), even core systems like web site logs and tracking systems become subject to change. How do I even figure out where these pieces of information are stored? It’s indeed a great challenge. At least for B2C companies – it will most likely be much less burdensome for B2B.

To make matters worse, the GDPR legislation was adopted by the EU on 27 April 2016 and becomes enforceable from 25 May 2018 after a two-year transition period. And yet we are only really starting to take it seriously now. How can that be? I’m starting to see this as the next Year 2000 problem, but whereas Y2K was taken seriously a long way out, this seems to have been mostly ignored. At least from where I sit. It will be very interesting to follow.

The project I’m on now is actually about transitioning a series of black-box consumer signup systems into a transparent Salesforce Service Cloud installation for a customer, while ensuring double opt-in and keeping records of consent. We are on a pretty tight schedule to be ready for 25 May, but it’s looking okay as the scope is pretty well defined. If this had been for the entirety of the customer data it would have been much worse. Now the project is much bigger than this, but it’s interesting how it took the GDPR to get them going – maybe it was a good thing as it probably helped their business case internally.

There once was a product called Hannover…

<rant>

Once upon a time – think 2006’ish – a complete revamp of a product was unveiled. The revamp went under the codename Hannover after the name of the city it was unveiled in. The revamp was to blow the competition away and make the supplier of the product rule the World with a new product platform, new technologies and all the amazing stuff the client would be able to do. And it was amazing. It was like magic and provided access to new and amazing features and exposed great APIs that allowed developers to build sweet apps to bridge the gap between the proprietary world of yesteryear and the new internet era. It was built on a proven open source platform and built using a proven industry standard programming language that many developers knew. It could be said that the language was the lingua franca of its time. To make it even better, the client would be backwards compatible and run all the apps of its predecessors – like all the way back to the very first version of the product from the good ol’ DOS days. In many ways it was almost too good to be true.

It did, however, not quite turn out the way the supplier had hoped. There was a problem with all this goodness. Not in the product. Not in the ambitions. Not in the chosen platform. In many respects it was a good idea and a good launch, and the product delivered in most – if not all – of the areas it had promised new and amazing solutions for.

The problem was in the application developer support. They failed the product. Or maybe more to the point – the supplier failed the developers.

For the previous 5-10 years nothing much had happened on the platform. Sure, the platform had adopted JavaScript and Java, and sure, it had brought incremental improvements to the appdev experience. A new feature here. A new simple action there. But nothing massive. And now the supplier threw this completely new way of developing apps on the market. The change was massive, and all the supplier would talk about was all these new capabilities – why wouldn’t they? The problem was that they had lulled developers to sleep with stories of declarative programming and how visual programming and laying out elements on screen was enough. One client to rule them all. Simple actions and formulas to solve complex issues. But then, all of a sudden, developers were expected – from one day to the next – to grasp component development (called JSR 168 portlets at the time), data coming from different backend sources, UI threads, async programming, regexes and low level widget development.

I’ll go out on a limb and state that the product was a failure. Sure, customers migrated to the new client, but many did so hesitantly. And it took a long time. Many never reaped any of the benefits of the new platform and ended up jumping ship.

Some developers did make the jump, however, as they were real developers. But many did not. It sounded too complex to them, and it was. It didn’t align with the World they knew. They were business people trained to be developers – not developers by trade, which was really a requirement. They chose to ignore the revamped product and all the features it brought. So all the good effort, energy, time and money put into the product fell by the wayside. Sure, it was used by some (including yours truly), but by many it was never adopted. So sad…

But why is this important and why am I writing it now?

Well, to be completely honest, it’s just a brain dump of thoughts. I find it interesting, as time passes, how often I see the same pattern reappear. Different products, different ways they try to reinvent themselves and different results. In my opinion, completely reinventing the way things are done in a product without making absolutely sure you either address the new message to the right audience or make sure the story is complete when told is doomed to fail. Of course some products are better than others and some suppliers are better at listening than others. But it’s interesting to watch.

But boy that Hannover product could have ruled the World if the supplier had been better at preparing developers beforehand and making sure they got on board.

</rant>

That moment when grown men start crying…

OK I admit it. I cried. A little… There are many emotional moments in a life, but seeing the successful launch of the SpaceX Falcon Heavy is one of them. I cannot help but be impressed by the work it must have taken to a) design the thing, b) build the thing, c) launch the thing and d) bring the freaking boosters back to Earth. Oh, and the secret e) the social media coverage and small touches, from the Tesla Roadster onboard, Starman, the “Don’t Panic” text on the screen and the plaque of names to, last but not least, Life on Mars by David Bowie playing alongside. Wow! Just wow!

I highly recommend watching the video of the launch with the intro or, if you’re not quite up for it, the actual launch from T-10 seconds below it.