How to add missing SalesforceDX org alias

If you ever added a non-scratch org in SalesforceDX but forgot to set an alias using --alias / -a, or you simply want to change an alias, there is a far easier way than re-adding all your orgs. Simply edit the alias.json file as described below.

  1. Locate your SalesforceDX settings directory (on Mac that is in ~/.sfdx and on Windows it seems to be %USERPROFILE%\.sfdx)
  2. Edit the alias.json file (please note that it uses Unix-style line endings so use an appropriate editor)
  3. Add the missing alias mapping or correct any incorrect mappings. The file simply maps an alias to the org username shown by “sfdx force:org:list”. Below is an example file.
{
  "orgs": {
    "example.qa": "mheisterberg@example.com.qa",
    "example.uat": "mheisterberg@example.com.uat"
  }
}
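
If you edit aliases often you could even script the change. Below is a minimal node.js sketch (just an illustration, assuming the default ~/.sfdx location and the file format shown above) that adds or updates a mapping:

const fs = require('fs')
const os = require('os')
const path = require('path')

// usage: node set-alias.js <alias> <username>
const [alias, username] = process.argv.slice(2)
const aliasFile = path.join(os.homedir(), '.sfdx', 'alias.json')

// read the existing file, falling back to an empty structure if it's missing
const data = fs.existsSync(aliasFile)
  ? JSON.parse(fs.readFileSync(aliasFile, 'utf8'))
  : {orgs: {}}

// add or update the mapping and write the file back
data.orgs = data.orgs || {}
data.orgs[alias] = username
fs.writeFileSync(aliasFile, JSON.stringify(data, null, 2))
console.log(`Mapped ${alias} to ${username}`)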

Issue with importing keystore into Salesforce

When attempting to import a certificate from a Java keystore (JKS) into Salesforce using “Security | Certificate and Key Management” under Setup, the following error is encountered:

Data Not Available
The data you were trying to access could not be found. It may be due to another user deleting the data or a system error. If you know the data is not deleted but cannot access it, please look at our support page.

Resolving the issue is easy once you know how:

  1. Go to Setup | Identity Provider
  2. Press the “Enable Identity Provider” button. (Enabling this option will not affect any existing functionality in the Org. A full Identity Provider setup requires several additional steps anyway, so simply enabling it under Setup | Identity Provider makes no difference on its own and is safe to do.)
  3. Once Identity Provider is enabled, Salesforce creates a self-signed certificate in your Org under Setup | Certificate and Key Management
  4. Import the certificate from your JKS through the “Import from keystore” option and it should now succeed.
  5. (Optional) You may disable the Identity Provider again and delete the self-signed certificate that was created automatically in step 3 under “Certificate and Key Management”. This returns your Org setup to its original state.


Life at Salesforce Through a Techie’s Eyes

I’ve been profiled on Salesforce.com in a post titled Life at Salesforce Through a Techie’s Eyes. The post outlines how I see Salesforce as an employer and how I got to work for Salesforce. Sweet!

The only thing I saw that was stripped was the last sentence from the “Quick facts about me” section. It was “Even though I work under a blue cloud I continue to bleed yellow”. Oh well… 🙂

Bash one-liner for Apex test coverage percentage using SalesforceDX

Update 3 May 2018: There are issues with the percentages reported by SalesforceDX, plus it doesn’t report coverage on classes with 0% coverage, which will skew the results. The approach outlined below can be used as an indication but cannot, as of today, be used as a measure of code coverage when it comes to production deployments. As an example I’ve had the below snippet report a coverage of 88% whereas a production deploy reported 63% coverage. We – Salesforce – are aware of the issue and are working to resolve it. Stay tuned!

Note to self – quick note on how to run all tests in a connected org (as identified by the -u argument) and use jq and awk to grab the overall test coverage percentage.

$ sfdx force:apex:test:run -u mheisterberg@example.com.appdev -c -w 2 -r json | jq -r ".result.coverage.coverage[].coveredPercent" | awk '{s+=$1;c++} END {print s/c}'
> 88.1108

YMMV!


Using SalesforceDX to perform Bulk API operations

As noted the other day (Using SalesforceDX to automate getting Apex class test coverage percentages), SalesforceDX is great for many things, and one of them is automating operations that are time consuming or just take a lot of manual work each time. One of these things is Bulk API operations, which in and of themselves are not hard, but there is no UI for them besides the DataLoader, and the DataLoader has no command-line interface when you're not on Windows.

The customer I’m currently working for has a monster data load to perform, and one of the things I’ve done is write a script to split the data into data sets – one set per country, 91 sets all in all. Each set consists of 3 files to support the data load: one file for Accounts and two additional files for custom objects that need to be loaded as well. All in all that’s a lot of clicking in the DataLoader, and it doesn’t really scale for testing.
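
Just to illustrate the splitting part, a naive node.js sketch could look like the one below (the file name and the Country__c column are made up for the example, and it assumes no quoted commas in the data):

const fs = require('fs')
const readline = require('readline')

const COUNTRY_COLUMN = 'Country__c'
const rl = readline.createInterface({input: fs.createReadStream('accounts.csv')})

let header
let countryIndex
const outputs = {}
rl.on('line', line => {
  if (!header) {
    // first line is the header - locate the country column
    header = line
    countryIndex = header.split(',').indexOf(COUNTRY_COLUMN)
    return
  }
  const country = line.split(',')[countryIndex]
  if (!outputs[country]) {
    // first record for this country - open a file and repeat the header
    outputs[country] = fs.createWriteStream(`accounts_${country}.csv`)
    outputs[country].write(header + '\n')
  }
  outputs[country].write(line + '\n')
})
rl.on('close', () => Object.keys(outputs).forEach(c => outputs[c].end()))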

But I’m lucky, as SalesforceDX receives new functionality all the time, and at some point some Bulk API data features had snuck by me, so I was pleasantly surprised to discover force:data:bulk:upsert and force:data:bulk:delete today. They were just what I needed. SalesforceDX to the rescue yet again…

So today I grabbed my IDE by the horns (#vscode in my case) and wrote some wrappers around the Bulk API capabilities of SalesforceDX. The fact that all SalesforceDX commands take an optional --json argument makes them easy to script and the responses easy to parse. Combining this with select-shell from npm I now have a nice CLI interface for doing Bulk data loads. The script looks at the available data sets, asks me what country to load data for and then what export timestamp to process (the data sets may exist in multiple versions). Then it goes and does its thing, UPSERTing all 3 files in turn and reporting status. So nice. The Bulk API is asynchronous, so the script also handles polling for job status and only proceeds once a job has completed successfully (a sketch of the approach follows after the transcript below).

$ ./upsert mheisterberg@example.com.appdev
SFDX - Org for mheisterberg@example.com.appdev is connected...

Select country code:
 ae
 au
 ca
 cn
 es
 fr
 hk
 hu
 co
 ▸ it
 jp
 kr
 my
 pt
 sg
 th
 tr
 tw
 us

Select timestamp:
 2018-04-16T07:25:37Z
 2018-04-16T08:31:28Z
 ▸ 2018-04-16T08:34:14Z

Will process following data
Country : it
Timestamp: 2018-04-16T08:34:14Z
UPSERT for Account data...
Issued UPSERT bulk request to object (Account) - id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG - state: Queued
SFDX - asking for bulk status for id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG
SFDX - received bulk status for id 7516E000002DckQQAS, jobId 7506E000002QQ3zQAG - state: Completed
Issued UPSERT bulk request to object (MarketRelation__c) - id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW - state: Queued
SFDX - asking for bulk status for id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW
SFDX - received bulk status for id 7516E000002DckkQAC, jobId 7506E000002QQ4JQAW - state: Completed
Issued UPSERT bulk request to object (Consent__c) - id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW - state: Queued
SFDX - asking for bulk status for id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW
SFDX - received bulk status for id 7516E000002DckuQAC, jobId 7506E000002QQ4OQAW - state: Completed
Finished upsert of data
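
The wrappers themselves are essentially just child_process invocations of sfdx with --json plus a polling loop. Below is a simplified sketch of the idea (not the actual customer script; the object, file and external ID field names are made up, and the JSON response is assumed to be a list of batch infos):

const {execFile} = require('child_process')

// invoke sfdx with --json and resolve with the parsed result
const sfdx = args => new Promise((resolve, reject) => {
  execFile('sfdx', [...args, '--json'], (err, stdout) => {
    if (err) return reject(err)
    resolve(JSON.parse(stdout).result)
  })
})
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

async function upsert(username, objectName, csvFile, externalId) {
  // kick off the bulk upsert
  const [batch] = await sfdx(['force:data:bulk:upsert', '-u', username,
    '--sobjecttype', objectName, '--csvfile', csvFile, '--externalid', externalId])
  console.log(`Issued UPSERT bulk request to object (${objectName}) - id ${batch.id}, jobId ${batch.jobId} - state: ${batch.state}`)

  // the Bulk API is asynchronous so poll until the batch completes (or fails)
  let state = batch.state
  while (state !== 'Completed' && state !== 'Failed') {
    await sleep(5000)
    const [status] = await sfdx(['force:data:bulk:status', '-u', username,
      '--jobid', batch.jobId, '--batchid', batch.id])
    state = status.state
  }
  console.log(`Received bulk status for id ${batch.id}, jobId ${batch.jobId} - state: ${state}`)
}

upsert('mheisterberg@example.com.appdev', 'Account', 'accounts_it.csv', 'ExternalId__c')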

Once I’m done testing a particular data set I can use the --delete-accounts flag with my script to delete data using the Bulk API as well. Here I actually combined force:data:soql:query and force:data:bulk:delete to first retrieve the IDs of the records I need to delete and then kick off the required Bulk API delete requests. Again easy peasy. And repeatable…

$ ./upsert mheisterberg@example.com.appdev --delete-accounts
SFDX - Org for mheisterberg@example.com.appdev is connected...

Are you sure?
 ▸ No
 Yes

Received 32463 records
Issued DELETE bulk request to object (Account) - id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: Queued
SFDX - asking for bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG
SFDX - received bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: InProgress
SFDX - asking for bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG
SFDX - received bulk status for id 7516E000002DckLQAS, jobId 7506E000002QQ3uQAG - state: Completed
Performed delete...

The only issue I had here really was that node.js' child_process.exec has a default maximum stdout buffer of 200 kb, so I could not simply read the response from the SOQL query off stdout as it may be pretty big. Instead I pipe it to a tmp-file, read that back in and parse it as JSON. Not ideal, but it gets the job done; see the sketch below.
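
For reference, that part boils down to something like the sketch below (the query and file name are invented for the example); from there the IDs go into a CSV file with an Id column that force:data:bulk:delete accepts:

const {execSync} = require('child_process')
const fs = require('fs')
const os = require('os')
const path = require('path')

const tmpfile = path.join(os.tmpdir(), `soql_${Date.now()}.json`)

// let the shell redirect the potentially large JSON response to a tmp-file
// instead of buffering it through the child process' stdout
execSync(`sfdx force:data:soql:query -u mheisterberg@example.com.appdev ` +
  `-q "SELECT Id FROM Account" --json > ${tmpfile}`)

// read the tmp-file back in, parse it and grab the record IDs
const records = JSON.parse(fs.readFileSync(tmpfile, 'utf8')).result.records
const ids = records.map(r => r.Id)
console.log(`Received ${ids.length} records`)
fs.unlinkSync(tmpfile)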

The code of the script itself is for the customer's eyes only but the source for the helpers is available as sfdx-bulk-helper on GitHub and sfdx-bulk-helper on npm.

YMMV!


Using SalesforceDX to automate getting Apex class test coverage percentages

So SalesforceDX is good for many things, but this particular blog post is about how it provides easy access to something that is otherwise hard or cumbersome to get at: Apex class test coverage. It’s available through other means such as the UI and the Tooling API, but there it takes manual work (clicking) or requires additional plumbing to set up and extract. With SalesforceDX it’s surprisingly easy.

Contrary to popular belief, SalesforceDX may be used with any org and not just the scratch orgs that SalesforceDX affords for development. Connecting to any org is as simple as using the Force to do the OAuth dance:

$ sfdx force:auth:web:login

For additional points you can give the org connection an alias for easy reference (using --setalias) and specify the login URL if required (using --instanceurl), e.g. if you’re adding a sandbox.

$ sfdx force:auth:web:login --setalias MyOrg --instanceurl https://test.salesforce.com

Once you have the org connection you can use force:apex:test:run to run tests and force:apex:test:report to – surprise – return the test report.

$ sfdx force:apex:test:run -u mheisterberg@example.com.appdev
Run "sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev" to retrieve test results.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev
=== Test Results
TEST NAME OUTCOME MESSAGE RUNTIME (MS)
────────────────────────────────────────────────────────────────── ─────── ─────── ────────────
ChangePasswordControllerTest.testChangePasswordController Pass 11
AccountTriggerHandlerTest.testSetAccountOwner Pass 3623
AccountTriggerHandlerTest.testSetContactId Pass 114
AccountTriggerHandlerTest.testSetLowecaseEmail Pass 215
AccountTriggerHandlerTest.testSetPCAK Pass 83
AccountTriggerHandlerTest.testValidateEmailUniquenessNegative Pass 42
AccountTriggerHandlerTest.testValidateEmailUniquenessPositive Pass 80
AddressesListRestTest.testGetAddressesList Pass 11097
AddressRestTest.testDeleteAddress Pass 1388
AddressRestTest.testGetAddress Pass 753
AddressRestTest.testPostAddress Pass 734
AddressRestTest.testPutAddress Pass 731
ConsentRestTest.testGetConsent Pass 959
ConsentRestTest.testPostConsent Pass 768
ConsentRestTest.testPutConsent Pass 975
ConsentsListRestTest.testGetConsensList Pass 3761
ConsumerRestTest.testGetConsumer Pass 1004
ConsumerRestTest.testPostConsumer Pass 988
MarketRelationTriggerHandlerTest.testBehavior Pass 8
ProfileRestTest.testDeleteProfile Pass 1071
ProfileRestTest.testGetProfile Pass 710
ProfileRestTest.testPostProfile Pass 739
ProfileRestTest.testPutProfile Pass 679
ProfilesListRestTest.testGetProfilesList Pass 921
ForgotPasswordControllerTest.testForgotPasswordController Pass 29
MyProfilePageControllerTest.testSave Pass 258
SiteLoginControllerTest.testSiteLoginController Pass 17
SiteRegisterControllerTest.testRegistration Pass 16
=== Test Summary
NAME VALUE
─────────────────── ─────────────────────────────
Outcome Passed
Tests Ran 28
Passing 28
Failing 0
Skipped 0
Pass Rate 100%
Fail Rate 0%
Test Start Time Apr 13, 2018 10:18 AM
Test Execution Time 31774 ms
Test Total Time 31774 ms
Command Time 50941 ms
Hostname https://cs85.salesforce.com
Org Id 00D6E0000008eojUAA
Username mheisterberg@example.com.appdev
Test Run Id 7076E00000Uo5sc
User Id 0051r0000087iv9AAA

It’s pretty nifty huh!?

Again, for added points, add --json to the test report command to get the data back as JSON. And if you already have something that accepts test coverage data from, say, JUnit, you can just add “--resultformat junit” and boom! You’ll get the test report in JUnit XML format. But everything started with me wanting to retrieve code coverage data, and that hasn’t been part of the output so far. But again SalesforceDX to the rescue… Just add --codecoverage and you’ll receive code coverage percentages as part of the report as well.

$ sfdx force:apex:test:report -i 7076E00000Uo5sc -u mheisterberg@example.com.appdev -c
== Apex Code Coverage
ID NAME % COVERED UNCOVERED LINES
────────────────── ───────────────────────────────── ────────────────── ────────────────────────────────────────────────────────────────────
01p6E000000aSHvQAM SiteLoginController 100%
01p6E000000aSHxQAM SiteRegisterController 81.48148148148148% 39,40,43,44,45
01p6E000000aSHzQAM ChangePasswordController 100%
01p6E000000aSI1QAM ForgotPasswordController 88.88888888888889% 15
01p6E000000aSI3QAM MyProfilePageController 87.5% 21,37,38
01p6E000000brQtQAI AccountTriggerHandler 95% 61,63,66,181
01p6E000000cBdhQAE MarketRelationTriggerHandler 78% 38,40,41,42
01p6E000000csObQAI Wrappers 98% 5
01p6E000000aIXsQAM ConsentRest 79% 35,36,54,55,67,69,70,88,89,104,105,106,119,121,125
01p6E000000ak0yQAA SegmentBuilder 100%
01q6E0000004xLIQAY AccountTrigger 100%
01q6E0000004zB0QAI MarketRelationTrigger 80% 13
01p6E000000aUJVQA2 UserBuilder 100%
01p6E000000aJjqQAE ConsentsListRest 94.11764705882352% 48
01p6E000000cnhQQAQ AddressRest 78% 35,36,60,68,70,92,93,110,111,112,126,128,132,149,150,162,164,165
01p6E000000caCxQAI ConsumerRest 84% 17,18,27,28,49,50,51,111,120,146,148,149,159,220,222,225,284,290,297
01p6E000000aIXxQAM ProfileRest 76% 27,28,49,57,59,78,79,91,93,94,111,112,127,128,129,141,143,147
01p6E000000aJjWQAU ProfilesListRest 94.11764705882352% 41
01p6E000000cr6iQAA AddressesListRest 83.33333333333334% 39,52,54
=== Test Results
<snipped>

Combine that with --json and you have the foundation for automating this. So sweet. You could even write a little script to output this any way you like, like the sketch below.
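
As an illustration, a small node.js script along those lines could flag classes below a coverage threshold (the result.coverage.coverage path and the name/coveredPercent fields are assumptions based on the report's JSON output):

const {execFile} = require('child_process')

const THRESHOLD = 85
const args = ['force:apex:test:report', '-i', '7076E00000Uo5sc',
  '-u', 'mheisterberg@example.com.appdev', '-c', '--json']

// the JSON report can be large so raise the stdout buffer limit
execFile('sfdx', args, {maxBuffer: 10 * 1024 * 1024}, (err, stdout) => {
  if (err) throw err
  const coverage = JSON.parse(stdout).result.coverage.coverage
  coverage
    .filter(c => c.coveredPercent < THRESHOLD)
    .forEach(c => console.log(`${c.name}: ${c.coveredPercent}%`))
})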

Happy scripting…

Loving streams in node.js

Node.js is a great platform for writing scripts. Besides being Javascript and besides having access to npm, it lends itself very well to data processing as it’s completely async unless you specifically tell it not to be. One of the best aspects of node.js as a data processing language is, in my opinion, the concept of streams and using streams to process data. Using streams can drastically lower memory consumption by processing data as it comes down the stream instead of keeping everything in memory at any one time. Think SAX instead of DOM parsing.

In node.js using streams is easy. Basically data flows from Readable streams to Writable streams. Think producers of data and consumers of data. Buffering is handled automatically (at least in the built-in streams), and if a downstream consumer stops processing, the upstream producer will stop producing. Elegant and easy. Readable streams can be stuff like files or network sockets, and Writable streams stuff like files or network sockets… Or stdout, which in node.js also implements the Writable stream API. Working with streams is like being a plumber, so piping (using the pipe method) is how you connect streams.

An example always helps – the below example reads from alphabet.txt and pipes the data to stdout.

const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(process.stdout)
> a
> b
> c

A simple example, but it works with small and big files without much difference in memory consumption.

Sometimes processing is required, and for this we use Transform streams (these are basically streams that can both read and write). Say that we want to uppercase all characters. That’s easy by piping through a Transform stream and then on to the Writable stream (stdout):

const {Transform} = require('stream')
const fs = require('fs')
const path = require('path')

fs.createReadStream(path.join(__dirname, 'alphabet.txt'))
  .pipe(new Transform({
    transform(chunk, encoding, callback) {
      // chunk is a Buffer 
      let data = chunk.toString().toUpperCase() 
      callback(null, data) 
    }
  }))
  .pipe(process.stdout)
> A
> B
> C

It’s easy to see how streams are very composable and how easy it is to add processing steps; the pipeline could even be determined at runtime. The above examples use strings, but streams can also work on objects if required.
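
As a minimal sketch of object mode: construct the streams with objectMode set, and the chunks become arbitrary JavaScript objects instead of Buffers:

const {Readable, Transform} = require('stream')

// a Readable in object mode producing plain objects
const source = new Readable({
  objectMode: true,
  read() {
    this.push({firstname: 'Jane', lastname: 'Doe'})
    this.push({firstname: 'John', lastname: 'Smith'})
    this.push(null) // signal end-of-stream
  }
})

// a Transform accepting objects on its writable side and emitting strings
const toLine = new Transform({
  writableObjectMode: true,
  transform(person, encoding, callback) {
    callback(null, `${person.firstname} ${person.lastname}\n`)
  }
})

source.pipe(toLine).pipe(process.stdout)
> Jane Doe
> John Smith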

Streams are beautiful but can take some time to master. I highly recommend reading up on streams and start getting to know them. The “Node.js Streams: Everything you need to know” post is very nice and provides a good overview.

Happy coding!