Bash one-liner for Apex test coverage percentage using SalesforceDX

Update 3 May 2018: There are issues with the percentages reported by SalesforceDX, plus it doesn’t report coverage on classes with 0% coverage, which will skew the results. The approach outlined below can be used as an indication but cannot, as of today, be used as a measure of code coverage when it comes to production deployments. As an example I’ve had the snippet below report a coverage of 88% whereas a production deploy reported 63% coverage. We – Salesforce – are aware of the issue and are working to resolve it. Stay tuned!

Note to self – quick note on how to run all tests in a connected org (as identified by the -u argument) and use jq and awk to grab the overall test coverage percentage.

$ sfdx force:apex:test:run -u mheisterberg@example.com.appdev -c -w 2 -r json | jq -r ".result.coverage.coverage[].coveredPercent" | awk '{s+=$1;c++} END {print s/c}'
> 88.1108

YMMV!

 

Using SalesforceDX to perform Bulk API operations

As noted the other day (Using SalesforceDX to automate getting Apex class test coverage percentages) SalesforceDX is great for many things, and one of them is automating operations that are time consuming or just take a lot of manual work each time. One of those things is Bulk API operations, which in and of themselves are not hard, but there is no UI for them besides the DataLoader and no console API for the DataLoader when you’re not on Windows.

The customer I’m working for currently has a monster data load to perform and one of the things I’ve done is write a script to split the data into data sets – one set per country, 91 sets all in all. Each set consists of 3 files to support the data load: one file for Accounts and two additional files for custom objects that need to be loaded as well. All in all that’s a lot of clicking in the DataLoader and it doesn’t really scale for testing.

But I’m lucky, as SalesforceDX receives new functionality all the time, and at some point some Bulk API data features had snuck by me, so I was pleasantly surprised to discover force:data:bulk:upsert and force:data:bulk:delete today. They were just what I needed. SalesforceDX to the rescue yet again…
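In their simplest form both commands just take an sObject type and a CSV file (plus an external ID field for the upsert). Something along these lines should do it – note that the file names and the ExternalId__c field below are just placeholders for this post:

$ sfdx force:data:bulk:upsert -s Account -f ./accounts_it.csv -i ExternalId__c -u mheisterberg@example.com.appdev
$ sfdx force:data:bulk:delete -s Account -f ./account_ids.csv -u mheisterberg@example.com.appdev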

So today I grabbed my IDE by the horns (#vscode in my case) and wrote some wrappers around the Bulk API capabilities of SalesforceDX. The fact that all SalesforceDX commands take an optional --json argument makes them easy to script and parse the responses of. Combined with select-shell from npm I now have a nice CLI interface for doing Bulk data loads. The script looks at the available data sets, asks me what country to load data for and then what export timestamp to process (the data sets may exist in multiple versions). Then it goes and does its thing, UPSERTing all 3 files in turn and reporting status. So nice. The Bulk API is asynchronous so the script also handles polling for job status and only proceeds once the job has completed successfully.

$ ./upsert mheisterberg@example.com.appdev
SFDX - Org for mheisterberg@example.com.appdev is connected...

Select country code:
 ae
 au
 ca
 cn
 es
 fr
 hk
 hu
 co
 ▸ it
 jp
 kr
 my
 pt
 sg
 th
 tr
 tw
 us

Select timestamp:
 2018-04-16T07:25:37Z
 2018-04-16T08:31:28Z
 ▸ 2018-04-16T08:34:14Z

Will process following data
Country : it
Timestamp: 2018-04-16T08:34:14Z
UPSERT for Account data...
Issued UPSERT bulk request to object (Account) - id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG - state: Queued
SFDX - asking for bulk status for id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG
SFDX - received bulk status for id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG - state: Completed
Issued UPSERT bulk request to object (MarketRelation__c) - id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW - state: Queued
SFDX - asking for bulk status for id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW
SFDX - received bulk status for id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW - state: Completed
Issued UPSERT bulk request to object (Consent__c) - id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW - state: Queued
SFDX - asking for bulk status for id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW
SFDX - received bulk status for id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW - state: Completed
Finished upsert of data
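Under the hood each UPSERT is little more than shelling out to sfdx with --json and then polling force:data:bulk:status until the batch is done. Below is a minimal sketch of that idea – not the actual script; the object names and file paths are examples, and the exact shape of the JSON results may vary a bit between CLI versions:

const { execFile } = require('child_process')
const { promisify } = require('util')
const execFileAsync = promisify(execFile)

// run an sfdx command with --json appended and return the parsed "result" payload
const sfdx = async (args) => {
  const { stdout } = await execFileAsync('sfdx', [...args, '--json'], { maxBuffer: 10 * 1024 * 1024 })
  return JSON.parse(stdout).result
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

const upsertAndWait = async (username, objectName, csvFile, externalIdField) => {
  // kick off the bulk upsert - assuming the result is an array of batch infos with id, jobId and state
  const [batch] = await sfdx(['force:data:bulk:upsert', '-u', username, '-s', objectName, '-f', csvFile, '-i', externalIdField])
  console.log(`Issued UPSERT bulk request to object (${objectName}) - id ${batch.id}, jobId ${batch.jobId} - state: ${batch.state}`)

  // poll the bulk status until the batch leaves the Queued/InProgress states
  let state = batch.state
  while (state === 'Queued' || state === 'InProgress') {
    await sleep(5000)
    const [status] = await sfdx(['force:data:bulk:status', '-u', username, '-i', batch.jobId, '-b', batch.id])
    state = status.state
    console.log(`SFDX - received bulk status for id ${batch.id}, jobId ${batch.jobId} - state: ${state}`)
  }
  return state
}

// example: upsert the Account file for a country data set
upsertAndWait('mheisterberg@example.com.appdev', 'Account', './it/accounts.csv', 'ExternalId__c')
  .then((state) => console.log(`Finished upsert of data (${state})`))
  .catch((err) => console.error(err))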

Once I’m done testing a particular data set I can use the --delete-accounts flag to my script to delete data using the Bulk API as well. Here I actually combined force:data:soql:query and force:data:bulk:delete to first retrieve the IDs of the records I need to delete and then kick off the required Bulk API delete requests. Again easy peasy. And repeatable…

$ ./upsert mheisterberg@example.com.appdev --delete-accounts
SFDX - Org for mheisterberg@example.com.appdev is connected...

Are you sure?
 ▸ No
 Yes

Received 32463 records
Issued DELETE bulk request to object (Account) - id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: Queued
SFDX - asking for bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG
SFDX - received bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: InProgress
SFDX - asking for bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG
SFDX - received bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: Completed
Performed delete...

Only issue I had here really was that node.js’ child_process.exec has a default maximum buffer size of around 200kb for child process output, so I could not simply read the response from the SOQL query off stdout as it may be pretty big. Instead I pipe to a tmp-file, read that back in and parse it as JSON. Not ideal but it gets the job done.
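The workaround looks roughly like this – again just a sketch and not the actual script, and it assumes the usual result.totalSize / result.records shape of the force:data:soql:query --json output:

const { spawn } = require('child_process')
const fs = require('fs')
const os = require('os')
const path = require('path')

// run the SOQL query and pipe the (potentially very large) JSON response to a tmp-file
const queryToFile = (username, soql) => new Promise((resolve, reject) => {
  const tmpfile = path.join(os.tmpdir(), `soql-result-${Date.now()}.json`)
  const child = spawn('sfdx', ['force:data:soql:query', '-u', username, '-q', soql, '--json'])
  const out = fs.createWriteStream(tmpfile)
  child.stdout.pipe(out)
  child.on('error', reject)
  out.on('error', reject)
  out.on('finish', () => resolve(tmpfile)) // resolve once the file is fully written
})

queryToFile('mheisterberg@example.com.appdev', 'SELECT Id FROM Account').then((tmpfile) => {
  // read the tmp-file back in and parse it as JSON
  const result = JSON.parse(fs.readFileSync(tmpfile, 'utf8')).result
  console.log(`Received ${result.totalSize} records`)

  // write the record Ids to a CSV file that force:data:bulk:delete can consume
  const csvfile = path.join(os.tmpdir(), 'account-ids.csv')
  fs.writeFileSync(csvfile, 'Id\n' + result.records.map((r) => r.Id).join('\n'))
  console.log(`Wrote ${csvfile} - ready for force:data:bulk:delete`)
})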

The code of the script itself is for the customer’s eyes only but the source for the helpers is available as sfdx-bulk-helper on GitHub and sfdx-bulk-helper on npm.

YMMV!

 

Using SalesforceDX to automate getting Apex class test coverage percentages

So SalesforceDX is good for many things but this particular blog post is going to be about how it provides easy access to something which is otherwise hard or cumbersome to get at – like Apex class test coverage. It’s available through other means such as the UI and the Tooling API, but those take manual work (clicking) or require additional plumbing to set up and extract. With SalesforceDX it’s surprisingly easy.

Contrary to popular belief, SalesforceDX may be used with any org and not just the scratch orgs that SalesforceDX affords for development. Connecting to any org is as simple as using the Force to do the OAuth dance:

$ sfdx force:auth:web:login

For additional points you can give the org connection an alias for easy reference (using --setalias) and specify the login URL if required (using --instanceurl), i.e. if you’re adding a sandbox.

$ sfdx force:auth:web:login --setalias MyOrg --instanceurl https://test.salesforce.com

Once you have the org connection you can use force:apex:test:run to run tests and force:apex:test:report to – surprise – return the test report.

$ sfdx force:apex:test:run -u mheisterberg@example.com.appdev
Run "sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev" to retrieve test results.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev
=== Test Results
TEST NAME OUTCOME MESSAGE RUNTIME (MS)
────────────────────────────────────────────────────────────────── ─────── ─────── ────────────
ChangePasswordControllerTest.testChangePasswordController Pass 11
AccountTriggerHandlerTest.testSetAccountOwner Pass 3623
AccountTriggerHandlerTest.testSetContactId Pass 114
AccountTriggerHandlerTest.testSetLowecaseEmail Pass 215
AccountTriggerHandlerTest.testSetPCAK Pass 83
AccountTriggerHandlerTest.testValidateEmailUniquenessNegative Pass 42
AccountTriggerHandlerTest.testValidateEmailUniquenessPositive Pass 80
AddressesListRestTest.testGetAddressesList Pass 11097
AddressRestTest.testDeleteAddress Pass 1388
AddressRestTest.testGetAddress Pass 753
AddressRestTest.testPostAddress Pass 734
AddressRestTest.testPutAddress Pass 731
ConsentRestTest.testGetConsent Pass 959
ConsentRestTest.testPostConsent Pass 768
ConsentRestTest.testPutConsent Pass 975
ConsentsListRestTest.testGetConsensList Pass 3761
ConsumerRestTest.testGetConsumer Pass 1004
ConsumerRestTest.testPostConsumer Pass 988
MarketRelationTriggerHandlerTest.testBehavior Pass 8
ProfileRestTest.testDeleteProfile Pass 1071
ProfileRestTest.testGetProfile Pass 710
ProfileRestTest.testPostProfile Pass 739
ProfileRestTest.testPutProfile Pass 679
ProfilesListRestTest.testGetProfilesList Pass 921
ForgotPasswordControllerTest.testForgotPasswordController Pass 29
MyProfilePageControllerTest.testSave Pass 258
SiteLoginControllerTest.testSiteLoginController Pass 17
SiteRegisterControllerTest.testRegistration Pass 16
=== Test Summary
NAME VALUE
─────────────────── ─────────────────────────────
Outcome Passed
Tests Ran 28
Passing 28
Failing 0
Skipped 0
Pass Rate 100%
Fail Rate 0%
Test Start Time Apr 13, 2018 10:18 AM
Test Execution Time 31774 ms
Test Total Time 31774 ms
Command Time 50941 ms
Hostname https://cs85.salesforce.com
Org Id 00D6E0000008eojUAA
Username mheisterberg@example.com.appdev
Test Run Id 7076E00000Uo5sc
User Id 0051r0000087iv9AAA

It’s pretty nifty huh!?

Again, for added points, add --json to the test report command to get the data back in JSON. And if you already have something that accepts test coverage data from, say, JUnit you can just add "--resultformat junit" and boom! You’ll get the test report in JUnit XML format. But everything started with me wanting to retrieve code coverage data and that hasn’t been part of the output so far. But again SalesforceDX to the rescue… Just add --codecoverage and you’ll receive code coverage percentages as part of the report as well.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev -c
== Apex Code Coverage
ID NAME % COVERED UNCOVERED LINES
────────────────── ───────────────────────────────── ────────────────── ────────────────────────────────────────────────────────────────────
01p6E000000aSHvQAM SiteLoginController 100%
01p6E000000aSHxQAM SiteRegisterController 81.48148148148148% 39,40,43,44,45
01p6E000000aSHzQAM ChangePasswordController 100%
01p6E000000aSI1QAM ForgotPasswordController 88.88888888888889% 15
01p6E000000aSI3QAM MyProfilePageController 87.5% 21,37,38
01p6E000000brQtQAI AccountTriggerHandler 95% 61,63,66,181
01p6E000000cBdhQAE MarketRelationTriggerHandler 78% 38,40,41,42
01p6E000000csObQAI Wrappers 98% 5
01p6E000000aIXsQAM ConsentRest 79% 35,36,54,55,67,69,70,88,89,104,105,106,119,121,125
01p6E000000ak0yQAA SegmentBuilder 100%
01q6E0000004xLIQAY AccountTrigger 100%
01q6E0000004zB0QAI MarketRelationTrigger 80% 13
01p6E000000aUJVQA2 UserBuilder 100%
01p6E000000aJjqQAE ConsentsListRest 94.11764705882352% 48
01p6E000000cnhQQAQ AddressRest 78% 35,36,60,68,70,92,93,110,111,112,126,128,132,149,150,162,164,165
01p6E000000caCxQAI ConsumerRest 84% 17,18,27,28,49,50,51,111,120,146,148,149,159,220,222,225,284,290,297
01p6E000000aIXxQAM ProfileRest 76% 27,28,49,57,59,78,79,91,93,94,111,112,127,128,129,141,143,147
01p6E000000aJjWQAU ProfilesListRest 94.11764705882352% 41
01p6E000000cr6iQAA AddressesListRest 83.33333333333334% 39,52,54
=== Test Results
<snipped>

Combine that with --json and you have the foundation for automating this. So sweet. You could even write a little script to output this any way you like.
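As an example, a few lines of node.js give you the same overall number as the jq/awk one-liner from the other post – assuming the report JSON has the same result.coverage.coverage[].coveredPercent structure as the test run output:

const { execFile } = require('child_process')

// run the test report with --codecoverage and --json and average the per-class percentages
execFile('sfdx', ['force:apex:test:report', '-i', '7076E00000Uo5sc', '-u', 'mheisterberg@example.com.appdev', '-c', '--json'],
  { maxBuffer: 10 * 1024 * 1024 }, (err, stdout) => {
    if (err) throw err
    const coverage = JSON.parse(stdout).result.coverage.coverage
    const avg = coverage.reduce((sum, c) => sum + c.coveredPercent, 0) / coverage.length
    console.log(`Overall coverage: ${avg.toFixed(2)}%`)
  })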

Happy scripting…

Loving streams in node.js

Node.js is a great platform for writing scripts. Besides being JavaScript and besides having access to npm, it lends itself very well to data processing as it’s completely async unless you specifically tell it not to be. One of the best aspects of node.js as a data processing language is, in my opinion, the concept of streams and using streams to process data. Using streams can drastically lower memory consumption by processing data as it comes down the stream instead of keeping everything in memory at any one time. Think SAX instead of DOM parsing.

In node.js using streams is easy. Basically data flows from Readable streams to Writable streams. Think producers of data and consumers of data. Buffering is handled automatically (at least in the built-in streams) and if a downstream consumer stops processing, the upstream producer will stop producing. Elegant and easy. Readable streams can be stuff like files or network sockets, and Writable streams stuff like files or network sockets… or stdout, which in node.js also implements the Writable stream API. Working with streams is like being a plumber, so piping (using the pipe method) is how you connect streams.

An example always helps – the below example reads from alphabet.txt and pipes the data to stdout.

const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(process.stdout)
> a
> b
> c

Simple example, but it works with small and big files without much difference in memory consumption.

Sometimes processing is required and for this we use Transform streams (these are basically streams that can read and write). Say that we want to uppercase all characters. It’s easy by piping through a Transform stream and then on to the Writable stream (stdout):

const {Transform} = require('stream')
const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(new Transform({
    transform(chunk, encoding, callback) {
      // chunk is a Buffer 
      let data = chunk.toString().toUpperCase() 
      callback(null, data) 
    }
  }))
  .pipe(process.stdout)
> A
> B
> C

It’s easy to see how streams are very composable and adding processing steps is easy – the pipeline could even be determined at runtime. The above examples use strings but streams can also work on objects if required.
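As a small sketch of object mode, the Transform below turns each line of alphabet.txt into an object before a second Transform stringifies it again for stdout (the naive line splitting is fine for a small demo file but not for production use):

const {Transform} = require('stream')
const fs = require('fs')
const path = require('path')

// readable side in object mode: push one object per line of input
const toObjects = new Transform({
  readableObjectMode: true,
  transform(chunk, encoding, callback) {
    chunk.toString().split('\n')
      .filter((line) => line.length)
      .forEach((line) => this.push({letter: line, length: line.length}))
    callback()
  }
})

// writable side in object mode: stringify each object back to text for stdout
const toJson = new Transform({
  writableObjectMode: true,
  transform(obj, encoding, callback) {
    callback(null, JSON.stringify(obj) + '\n')
  }
})

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(toObjects)
  .pipe(toJson)
  .pipe(process.stdout)
> {"letter":"a","length":1}
> {"letter":"b","length":1}
> {"letter":"c","length":1}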

Streams are beautiful but can take some time to master. I highly recommend reading up on streams and start getting to know them. The “Node.js Streams: Everything you need to know” post is very nice and provides a good overview.

Happy coding!

 

Lightning Logger

In my Lightning components I always find logging and remembering how to navigate to URLs using events to be an issue, so I wrote this small utility base component that I then extend when creating new components. The base component shares its helper with the subcomponents, which allows for easy reuse of the utility functionality. The utility code provides both logging and various other utility functions such as navigating to other objects, presenting toasts etc.

Component definition of the base component is simple (note the {!v.body} which is key for abstract components to make sure the markup of child components appears):

<aura:component abstract="true" extensible="true">
 <aura:handler name="init" value="{!this}" action="{!c.doinit}" />
 {!v.body}
</aura:component>

The controller is likewise simple with basically only a callout to the helper to initialize a named Logger instance and store it in a variable called logger on the helper.

({
  doinit: function(component, event, helper) {
    const logger = helper.getLogger('SFLC_LightningHelper');
    logger.trace('Initializing LightningHelper');
    helper.logger = logger;
  }
})

To use it from another component you first extend the base component:

<aura:component implements="flexipage:availableForAllPageTypes,force:hasRecordId" extends="c:BaseComponent" controller="Foo_LCC">
  <aura:attribute name="recordId" type="Id" />
  <aura:handler name="init" value="{!this}" action="{!c.doinit}" />
  ...
  ...
</aura:component>

Then, in the component controller’s initialization handler, create a named utility object (the name is actually used for the logger) and store it in the “util” variable on the helper, making it accessible as helper.util and the logger as helper.util.logger.

({
  doinit: function(component, event, helper) {
    // build utility
    const utility = helper.buildUtilityObject('MyComponent');
    helper.util = utility;
    
    // load data
    helper.loadData(component);
  }
})

From here on out you can simply use helper.util.logger.info, helper.util.logger.debug etc. to log for the component. All log messages are output using the named logger. The log level, which by default is NONE (meaning nothing is logged), is controllable using a URL parameter, and loggers may be turned on and off using a URL parameter as well. Please note that the URL format used here doesn’t live up to the changes coming to Lightning URLs in Summer ’18 or Winter ’18 (cannot remember which release).
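I won’t paste the full helper here, but the idea behind the named loggers is roughly the following – a simplified sketch, not the actual implementation, and the loglevel/loggers URL parameter names are just made up for the example:

({
  // sketch: return a named logger whose level and on/off state come from URL parameters
  getLogger: function(name) {
    const LEVELS = ['NONE', 'ERROR', 'WARN', 'INFO', 'DEBUG', 'TRACE'];
    const getParam = function(key) {
      const m = window.location.search.match(new RegExp('[?&]' + key + '=([^&]*)'));
      return m ? decodeURIComponent(m[1]) : '';
    };
    // default level is NONE meaning nothing is logged
    const level = LEVELS.indexOf((getParam('loglevel') || 'NONE').toUpperCase());
    const enabled = getParam('loggers').split(',').indexOf(name) >= 0;
    const log = function(msgLevel, msg, obj) {
      if (!enabled || LEVELS.indexOf(msgLevel) > level) return;
      console.log('[' + name + '] [' + msgLevel + '] ' + msg, obj || '');
    };
    return {
      error: function(msg, obj) { log('ERROR', msg, obj); },
      warn: function(msg, obj) { log('WARN', msg, obj); },
      info: function(msg, obj) { log('INFO', msg, obj); },
      debug: function(msg, obj) { log('DEBUG', msg, obj); },
      trace: function(msg, obj) { log('TRACE', msg, obj); }
    };
  }
})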

Using the utility functionality can be done from both the controller and the helper, as shown in the loadData method below which uses the utility both to invoke a remote action and to log:

({
  loadData: function(component) {
    // load data
    const helper = this;
    helper.util.invokeRemoteAction(component, 
      'getData', {'recordId': component.get("v.recordId")}, 
      function(err, data) {
        if (err) {
          helper.util.toast.error('Data Load Error', 'Unable to load data from server ('+ err +'). If the issue persists contact your System Administrator.', {sticky: true}); 
          return;
        }
        helper.util.log.debug('Received data from endpoint', data); 
      }
    )
  }
})
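The invokeRemoteAction utility is essentially a node.js-style (error first) callback wrapper around the standard Aura server-side action plumbing – something along these lines (again a sketch rather than the actual code):

({
  // call an @AuraEnabled Apex method and hand the result to an error-first callback
  invokeRemoteAction: function(component, name, params, callback) {
    const action = component.get('c.' + name);
    action.setParams(params);
    action.setCallback(this, function(response) {
      const state = response.getState();
      if (state === 'SUCCESS') {
        callback(null, response.getReturnValue());
      } else {
        const errors = response.getError();
        const msg = (errors && errors[0] && errors[0].message) ? errors[0].message : 'Unknown error';
        callback(new Error(msg));
      }
    });
    $A.enqueueAction(action);
  }
})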

Zip file with Controller and Helper: BaseComponent

Passed Salesforce Platform Developer 1 – thoughts and takeaways

So I had it on my V2MOM all last financial year to complete the Salesforce Platform Developer 1 certification which, although an optional certification for me, was something I wanted to try and tackle. So I signed up for the exam in the beginning of January but before I could take it the exam was cancelled. It appeared that the questions and answers to the exam had been leaked online (well duh!) so my exam was cancelled until a new and updated exam was ready. So I waited, and one day I was asked whether I wanted to try the beta exam of the new certification. It did involve me having to go to a testing center in person (I normally do them remotely proctored) but I agreed and took the exam.

So to call this a new exam is quite an overstatement.

Now I didn’t do the old exam but I’m pretty sure they simply shuffled the questions and rewrote a few. The exam is (still) VERY Visualforce heavy and peppered with weird Apex questions that in my opinion don’t really fit the purview of this being an entry level certification. There is VERY little Lightning, which I found a bit odd with Visualforce being considered legacy in my book and component based development using Lightning being the status quo and the future. But alright, who am I to judge…

So, it being a beta exam, I didn’t get the result right away as you normally do, but I did pass it, which I consider more luck than anything else. What I really wanted to share here is that Platform Developer 1 is (still) a legacy exam and solid working knowledge of Visualforce is required. Forget about Lightning – Visualforce is the name of the game for this exam. I did share some constructive feedback about the exam internally and I would really like to see the current Platform Developer 1 certification parked and marked as legacy, and a truly new exam brought to light. This exam should bring Apex and Lightning to the forefront, maybe sprinkling in a few questions on Visualforce.

My real point here is that no developer new to the platform will ever – or should have to – learn Visualforce. Crossing my fingers for a new / additional exam.

 

 

GDPR

So I’m not usually a guy who enjoys legalese and toying with paragraphs but I must admit that GDPR interests me, both as a consumer and as a professional. As a consumer I find it a great initiative to protect my rights and privacy, and I find the privacy regulations and the added responsibility put on service providers to be a welcome change. With the economic penalties outlined in the legislation, the GDPR has to be respected. And I think it will be – maybe once the initial battles have been fought.

As a professional I have a different approach and a different take on it. While also interesting, the burden put on companies is very big and the challenges that have to be solved can seem somewhat insurmountable. Thinking about data in CRM, ERP, file shares, web site logs, e-commerce and data from POS terminals, to name but a few, makes this potentially a very big thing. What does it mean to allow transparency and data portability? What does it mean to be forgotten? With an IP address being considered PII (personally identifiable information), even core systems like web site logs and tracking systems are subject to change. How do I even figure out where these pieces of information are stored? It’s indeed a great challenge. At least for B2C companies – it will most likely be much less burdensome for B2B.

To make matters worse the GDPR legislation was adopted by the EU on 27 April 2016 and it becomes enforceable from 25 May 2018 after a two-year transition period. Yet we are only really starting to take it seriously now. How can that be? I’m starting to see this as the next Year 2000 problem, but whereas Y2K was taken seriously a long way out, this seems to have been mostly ignored. At least from where I sit. It will be very interesting to follow.

The project I’m on now is actually about transitioning a series of black-box consumer signup systems into a transparent Salesforce Service Cloud installation for a customer while ensuring double opt-in and keeping records of consent. We are on a pretty tight schedule to be ready for 25 May but it’s looking okay – the scope is also pretty well defined. Had this been for the entirety of the customer’s data it would have been much worse. Now, the project is much bigger than this, but it’s interesting how it took the GDPR to get them going – maybe it was a good thing as it probably helped their business case internally.