Salesforce and DevOps Part 3 - Development Lifecycle

Date published • 9th August 2018

Now that you've (hopefully) read my rants about my views on, and the tools involved in, DevOps for Salesforce, I'm going to show how a common development lifecycle works in Salesforce today.

I'm going to keep this slightly agnostic to the tools, focusing more on the steps for development and releases that, in my view, are needed for a common Salesforce implementation. That includes DX: I'll mention it here and there, but I won't go into detail in this post, as it will get a dedicated one next in the series.

This post covers just a single-org lifecycle, as you can extrapolate to as many orgs as you need by applying a flow for each. This assumes that the orgs are completely independent and that there's a valid use case for that. For some good info on determining whether your company or client needs one or multiple orgs, check this article; it may be a bit outdated on some details, but it's a very good starting point.

Let's get to it, then.

First of all, I have to say that the number of environments, repositories, branches and types of testing needed will depend on the size of the implementation, so I'll frame this around the main, most common steps. I'll point out some of the differences and how to adapt to different situations as we go along.

One of the most important aspects of having flexibility and adopting the DevOps culture is that each team chooses the scale that is appropriate to it; nothing should force teams to work one way or the other too strictly (as long as the culture is adopted, of course!). Below, I'll explain how I usually like to structure the most common implementations.

Project and environments
I'm going to base the flow on a project where there is a single stream of work, with a medium size team, in a brand new org, for simplicity.

Note: if the org isn't brand new, the existing source must be pulled down and put in source control in the appropriate way. In my next post about DX (or not DX), I'll mention this in more detail, and touch on how to break the source in packages (or not yet, maybe?).


The core environments we should be using for this are listed below. 

Let's start with the less stable ones, used for in-sprint work; I'll call this set of environments the "dev train":

  • 1 dev environment (developer sandbox or scratch org) for each functional or technical member of the team. As long as they're coding / configuring, they get an isolated env
  • CI environments for validating builds. Each can be a developer sandbox or an ephemeral scratch org that is spun up just for this purpose on each automated build and destroyed straight after
  • 1 QA sandbox. This environment is owned by the testers, for functional testing during an ongoing sprint. Depending on the amount of test data needed, it can be a developer or a developer pro sandbox. Some could argue it could even be a scratch org, if it fits. I believe that, right now (until DX evolves drastically), it's preferable to test in a way that is closer to how deployments will eventually be made into production, so I'm going with a sandbox for the moment
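The ephemeral scratch-org CI idea above can be sketched roughly as below. The org alias `ci-org`, the scratch-def path and the `DRY_RUN` guard are all assumptions for illustration; the guard records the sfdx commands instead of executing them, so the sketch can run without a Dev Hub.

```shell
# Sketch: spin up a throwaway scratch org per CI build, validate, then destroy it.
# DRY_RUN=1 (the default) only records the commands, so no Dev Hub is needed.
cd "${TMPDIR:-/tmp}" && rm -f ci-plan.txt
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*" | tee -a ci-plan.txt; else "$@"; fi
}
run sfdx force:org:create -f config/project-scratch-def.json -a ci-org -s -d 1  # spin up, expire in 1 day
run sfdx force:source:push -u ci-org                                            # push the changes under validation
run sfdx force:apex:test:run -u ci-org -r human -w 10                           # run the unit tests
run sfdx force:org:delete -u ci-org -p                                          # destroy straight after
```

Set `DRY_RUN=0` to actually execute the commands against a real Dev Hub.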

Now, let's move on to the more stable ones, the ones where we'll only put release candidates, on the path to production. I've heard this called the "release train", so I'll steal that term.

  • 1 SIT or End to End testing sandbox. This could be a partial copy or full copy sandbox, as required and available
  • 1 UAT sandbox. This is usually a full copy sandbox
  • 1 Staging or Pre-Prod sandbox. This is a tricky one; I usually choose a dev or dev pro sandbox, as I think we need the capability of refreshing it quickly rather than waiting 5 or 29 days. This is mostly used for emergency fixes and deployment dry-runs. If it's used for ongoing emergency support, you may need some (or most) of the production data in there, so plan accordingly
  • Production org: obviously

Emergency fixes will need the following environments:

  • Hotfix dev: developer sandbox or scratch org
  • Hotfix QA: developer or developer pro sandbox, as needed and available

Additional environments are often required for the following situations, among others:

  • Release training
  • Ad hoc training
  • Specific demos
  • Specific pilots

For big enterprise-level implementations, to adapt this structure to multiple work streams, you'll need to replicate the "dev train" environments for each stream, as needed. You'll also need a consolidated CI (CCI) environment, used to run automated builds that validate the correctness of the integrated codebase for a release candidate spanning more than one work stream. This new environment could be a developer sandbox or an ephemeral scratch org, similar to the CI ones.

It's important to note that implementations with many work streams need A LOT of collaboration, communication and synchronising (think deployment windows with tight deadlines, where streams that aren't ready miss the window). Usually, in this case, a team is needed for each work stream, plus an additional team to manage the "release train" and govern the lifecycle.

Branching mechanism


As we discussed in previous posts, we'll use Git for source control, so I'll explain the branches that are needed for the common scenario we're dealing with. I like to use the Gitflow workflow (or close enough), which you can find explained here. Descriptions, top to bottom:

  • Master: this is the holy grail, the source of truth of what's in production, not to be touched, don't even look at it!
  • Release: one long-lived branch, or several short-lived ones (one per release), where candidates live; also the source for creating the release candidate artifacts
  • Develop: main branch for development, constantly changed during sprints
  • Feature: feature-specific branches, used for particular user stories during initial development and unit testing. These can also be used for bugs (call them bugfix branches, if you prefer to be more strict)
  • Hotfix: branches that are only used for emergency fixes, in case the world is ending in the production org and it can't wait for the next release!
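As a quick illustration, the branch set above can be created like this in a throwaway local repo (branch and story names are just placeholders):

```shell
# Create the Gitflow-style branches described above in a throwaway repo.
cd "${TMPDIR:-/tmp}" && rm -rf gitflow-demo && git init -q gitflow-demo && cd gitflow-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Initial production state"
git branch -M master                        # source of truth for production
git branch develop                          # main branch for in-sprint development
git branch release/1.0                      # release candidate branch
git checkout -q -b feature/US-123 develop   # one branch per user story
git branch
```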

Step by step summary

I'll try to keep it light here, as it can get cumbersome; hopefully I'll cover the highlights.

Dev Train


For each user story (or bug), the following happens:

  • A feature branch is created by a developer, out from the develop branch in source control 
  • The developer pushes the latest state into the assigned dev environment
  • The developer works on the user story, committing often locally
  • The developer runs unit tests (config or code, as appropriate) until satisfied
  • The developer pushes the local changes to the git server
  • An automatic build is triggered at this stage into the CI environment, to validate nothing got broken with the new changes, check code quality and run unit tests. Repeat until successful
  • The developer creates a pull request from the feature branch into develop
  • Peer and supervisor reviews happen (collaborating as needed, using the tools!) before the pull request gets approved and merged
  • Note: some people prefer to run automatic builds only on creation of pull requests, before merging, instead of with every push to a feature branch. Dealer's choice, I guess, but as long as you have the resources and the tools (especially Bamboo's awesome branching feature), I say go for it and run builds on each push 
  • When merged, an automatic build runs to validate the merged changes
  • Optionally, you can start testing deployments with draft artifacts at this stage. You can create one here from the successful builds and keep updating it with each new story/bug. I'd argue it's not that useful and a bit of overhead, but it's possible, if you like that kind of thing
  • On demand (manually), or on an agreed schedule with the testers, a deployment happens to the QA environment, to update it with the finished features and to allow functional tests to start in there

This goes on, until the sprint's features are finished, bugs are ironed out, everything is demoed, tested and validated. Life is good (so far).
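Condensed to the git level, one story's trip through the dev train looks roughly like the sketch below (local repo only; the push to the git server, the CI build and the pull request review are assumed to happen where indicated, and all names are placeholders):

```shell
# One user story through the dev train, at the git level.
cd "${TMPDIR:-/tmp}" && rm -rf devtrain-demo && git init -q devtrain-demo && cd devtrain-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Baseline" && git branch -M master
git checkout -q -b develop
git checkout -q -b feature/US-123 develop      # branch out from develop
echo "public class Foo {}" > Foo.cls           # work on the story, unit test in the dev env
git add Foo.cls && git commit -q -m "US-123: add Foo"
# ...push to the git server here; CI build validates the feature branch...
git checkout -q develop                        # after the pull request is approved:
git merge -q --no-ff feature/US-123 -m "Merge US-123 into develop"
```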

Considerations and takeaways for the dev train:

  • As mentioned in a previous post, automate as much as you can, and add automated tests to your CI builds
  • The point here is to fail fast, that's why there are already a few checkpoints and testing points for all features
  • Rollbacks in Salesforce are hard, so be very strict in what gets approved and merged into your develop branch
  • Be aware of release schedules, to use the appropriate versions and refreshes, based on the time they'll go into production

Release train


Sprint's tasks done, the following things happen:

  • Develop branch gets merged into a release branch and an automatic build runs to create the artifact for the release candidate. At this point it's likely and recommended, if the team is big enough to handle it, to carry on in parallel with development of subsequent sprints/releases in the "dev train", as usual
  • Optionally, you can add an intermediate environment here, to only validate the merge to release, or in case of consolidating multiple work streams (CCI). You can also run a validate only build into the SIT environment, before doing a full deploy, if you want to be extra careful
  • With the artifact created and a release marked and tagged accordingly, an automated (or manual, depending on the schedule) build will deploy the candidate into SIT
  • End to end testing, regression testing (automated, please!) and system integration testing happens here, until satisfied enough to let the users start UAT
  • The same artifact that was deployed to SIT gets deployed to UAT
  • When UAT is finished and signed-off by the business, we'll proceed to tag the candidate accordingly and validate that deployment to production will work without issues
  • We'll refresh the Staging environment (careful with Salesforce release windows) and do a full dry-run of the go-live in there, including smoke testing of the new features (and regression testing again, if possible)
  • When satisfied, do a "validate only" deployment into production (optional)
  • Deploy fully into prod, confident that all is well (one hopes!)
  • Merge your release branch into master, as that's already in production, and carry on. Some people like to merge into master before deploying to production and create a final artifact there, I prefer to do it afterwards
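The tail end of those steps, tagging the candidate and running a validate-only deployment against production, might look like the sketch below. The org alias PROD, the src directory and the DRY_RUN guard are assumptions; the guard records the sfdx command instead of running it, since a real validation needs a production org.

```shell
# Tag the release candidate, then validate (check-only deploy) against production.
DRY_RUN=${DRY_RUN:-1}
cd "${TMPDIR:-/tmp}" && rm -rf release-demo && git init -q release-demo && cd release-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "Release candidate"
git tag v1.0-rc1                               # mark and tag the candidate
VALIDATE="sfdx force:mdapi:deploy --checkonly --deploydir src --targetusername PROD --testlevel RunLocalTests --wait 30"
if [ "$DRY_RUN" = "1" ]; then
  echo "would run: $VALIDATE" | tee deploy-plan.txt
else
  $VALIDATE
fi
```

Dropping `--checkonly` turns the same command into the full production deployment.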

Considerations and takeaways for the release train:

  • I won't delve too deep into hotfixes, as they're just for emergencies (we don't have those, right? right?). To sum them up, only if REALLY needed, branch out from master, refresh/create a dev environment, automate builds with new changes to run all necessary tests and then straight into Staging, carrying on in a similar way to the rest of the usual release train. It's important to then propagate changes down to the develop branch, to avoid recreating issues or overwriting fixes
  • Issues and bugs found must be assessed, and fixed only when they're critical and are actually bugs, not unnecessary change requests
  • If a proper bug is found (or a must have, critical change), ideally, the bug goes down to the dev train and is taken out of the current release. If that's not possible, a mini hotfix can get created into the release branch, with appropriate quality/merge/peer reviews and artifact updates
  • Always remember to propagate changes down to the dev train, if applicable
  • Remember to check Salesforce release windows and sandbox previews, to avoid nasty surprises
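At the git level, the hotfix flow and the back-merge it requires can be sketched as below (throwaway repo, placeholder names; the testing and Staging dry-run happen where indicated):

```shell
# Hotfix: branch from master, fix, release, then propagate down to develop.
cd "${TMPDIR:-/tmp}" && rm -rf hotfix-demo && git init -q hotfix-demo && cd hotfix-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "v1.0 in production" && git branch -M master
git branch develop
git checkout -q -b hotfix/sev1 master            # branch out from master, not develop
echo "the fix" > Patch.cls && git add Patch.cls && git commit -q -m "Emergency fix"
# ...run tests in the hotfix envs, dry-run in Staging, deploy to production...
git checkout -q master && git merge -q --no-ff hotfix/sev1 -m "Hotfix released"
git checkout -q develop && git merge -q master   # propagate, so the fix isn't overwritten later
```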

Fully automated pipeline from dev to prod: dream and considerations

In some technology stacks, it's possible (and usual) to automate everything and go into production frequently and without much hassle, rolling back easily if something goes wrong. Think of mobile apps updating daily and things like that. That's the dream, right?

If you're able to do this with your Salesforce implementation, you're a really lucky individual (or team), please let me know how you did it, I'd like to try it.

I haven't done it yet for the following reasons / considerations:

  • Manual steps: Salesforce still needs a lot of manual steps before and after deployments in several cases, mostly due to metadata that isn't supported. This is very common, and it breaks fully automated processes
  • Working with enterprises: needing to get different sign offs, formal UAT periods, integration with many different systems that run at very different paces, etc.
  • Inability to automate all tests, due to lack of resources, and historical lack of confidence by clients in the automation of tests. Can't say I blame them too much, as expressed in my views on technical support and capabilities from Salesforce
  • Inability to easily and automatically roll back changes: rollbacks are messy, usually almost a release on their own, no comments... I'll add some things on this topic on my wish list (future post in the series).

And that's it for this entry, next one will be about DX and whether I use it or not, how and when. Looking forward to that one!

Please feel free to comment with suggestions, considerations, things I missed. Anything!

Words by:

Ivan, Head of Architecture

If you would like to know more about what we do, we’re happy to answer any questions you may have for us.
