Speed And Consistency – Leveraging Relationship And Team Values In Development And Delivery Practices

How We Develop

In the last instalment of our series, we introduced some of the changes we’ve applied to our relationship and working practices with a federal government customer, for whom we develop a middleware product for the healthcare industry. These changes have made a substantial difference to both the product and the outcomes we’re able to deliver. It’s now time to continue our journey, starting with product development.

Our development and delivery practices have also benefited from this focus on our team values, our relationship, and how we work.

Leading into development and delivery, we’re working towards smaller, more frequent target releases compared with our previous slower, monolithic releases. This means we can deliver value to customers more often, reducing their hesitancy to take on new releases driven by the sheer scale of change, and giving us quicker real-world feedback on which features are being used (and hence worth investing in) versus which aren’t. This is a great outcome of moving away from the project-based delivery model, giving us the flexibility to constantly refine the scope and timeframe for each release. We can do better though: we’re still working on one major release per year and minor releases somewhere between quarterly and six-monthly.

When considering our backlog of work items that are candidates for any given release, we’re increasingly placing traditionally less valued items, such as eradicating major defects, continuous improvement, addressing technical debt and maintaining technical and architectural currency, on an equal footing with feature work.

We’re actively aligning a now-legacy codebase with contemporary frameworks and deployment practices. For us, this means working out how to progressively migrate a product built on the Microsoft .NET Framework, with SOAP-based web services implemented using Windows Communication Foundation (WCF), a user interface built with ASP.NET MVC, and an assumed deployment to Internet Information Services (IIS) on Windows-based servers. The destination is contemporary frameworks and practices: .NET Core, RESTful services, single page applications (SPAs) built with JavaScript frameworks, and support for deployment to servers or containers.

This is where continuous improvement and iteration come to the fore: by continually focusing on improving what we can when we can, we’re progressively chipping away at a larger problem. A couple of examples where we’ve seen this approach shine:

  • Migrating our background processes from WCF and IIS hosting to a framework that supports the same component being run as a console application or installed as a Windows service (see the sketch following this list). This gives us a number of advantages, easing debugging for developers and providing the flexibility to deploy “traditionally” to Windows servers now, or as single-purpose containers in the future.

  • Taking the opportunity to introduce a RESTful API as part of migrating the majority of our configuration settings into the application database. This allows us to take a step towards writing RESTful services in .NET Core, and to focus on supporting common cross-cutting concerns such as authentication and authorisation, dependency injection, validation, exception handling, versioning, logging, resource access and mapping from the outset. This is in stark contrast to our legacy codebase, where we’re still suffering as a result of many of these concerns not having been addressed sooner. We also benefit from learning by doing, getting feedback we can direct into our next iteration.
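As a flavour of the first of these, below is a minimal sketch of the console-or-service hosting pattern. This article doesn’t name our actual framework or components, so the example assumes .NET’s Generic Host (the Microsoft.Extensions.Hosting and Microsoft.Extensions.Hosting.WindowsServices packages) and a hypothetical worker:

```csharp
// Minimal sketch only: assumes the Microsoft.Extensions.Hosting and
// Microsoft.Extensions.Hosting.WindowsServices packages; our real framework
// and component names differ.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class QueueProcessingWorker : BackgroundService // hypothetical background process
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // ... the background work previously hosted under WCF/IIS ...
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService() // no-op when run as a console app, e.g. under the debugger
            .ConfigureServices(services => services.AddHostedService<QueueProcessingWorker>())
            .Build()
            .Run();
}
```

Installed as a Windows service, the same binary runs under the service host; launched from a console or the debugger, it simply runs interactively, which is what makes debugging so much easier.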

We’ve also focused on making our developers more productive, getting them up and running sooner and improving consistency. An aspect of development that had traditionally been problematic was getting developers set up to develop against a particular target release and ensuring their development environment continued to reflect that target release. Here we’ve taken advantage of some of the automation work I’ll describe later to create (if not present) and upgrade development dependencies such as the application database every time the developer starts debugging, meaning no time is wasted creating databases, executing scripts and pre-loading data just to debug the issue or feature you’re working on.
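To give a sense of the shape of this, here’s a rough sketch of the debug-time hook; the class, method and database names are hypothetical, and the real upgrade step reuses the database deployment automation described later in this article:

```csharp
// Hypothetical sketch only: names and connection details are illustrative.
// Assumes Microsoft.Data.SqlClient; the upgrade itself reuses the same
// database deployment component we ship with a release.
using Microsoft.Data.SqlClient;

public static class DevelopmentEnvironment
{
    [System.Diagnostics.Conditional("DEBUG")] // compiled out of release builds
    public static void EnsureDatabase(string serverConnectionString, string databaseName)
    {
        using (var connection = new SqlConnection(serverConnectionString))
        {
            connection.Open();

            // Create the application database on the developer's machine if it's missing...
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    $"IF DB_ID(N'{databaseName}') IS NULL CREATE DATABASE [{databaseName}];";
                command.ExecuteNonQuery();
            }
        }

        // ...then bring it up to date with the target release's upgrade scripts
        // (the same DbUp-based component described in the deployment section below).
    }
}
```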

Another area we’ve focused on is our practices around source code branching, merging and pull requests. This is still a bit of a work in progress, and again we know we can do better. While we do apply branch policies to prevent merging directly into a target release branch, in some cases the granularity of our work items leads to fairly long-lived feature branches (occasionally up to several weeks), which in turn leads to larger pull requests. There are a few downsides to this:

  • It’s hard for reviewers to find sufficient time to work through changes to hundreds of files.

  • It’s cognitively challenging to maintain focus across that number of files for any extended period, which can lead to reviewers missing things they’d ordinarily pick up, simply through mental fatigue.

  • The sheer size of the review can also mean it takes days or even weeks to complete, which leaves the feature branch even longer lived and further reduces flow.

  • This reduced flow then increases the likelihood of inconsistencies and merge conflicts upon integration.

We’re actively working on approaches for improving this, ranging from creating smaller work items, to better organised commits and smaller pull requests, to trialling pair programming to make the design/develop/refactor process more interactive and reduce the overhead of PR review.

How We Build And Deploy

Perhaps the greatest example of how we’ve lived by our value to eliminate waste is in the area of build and deployment automation. Our previous practices here could themselves have been a case study in inefficiency and waste. Our builds relied on a semi-documented, manual set of steps that needed to be executed on a golden machine configured just right. Our deployment process was 100+ pages of documented steps (at least they were documented!). Hopefully I don’t need to go into detail on why these are less than optimal. They did, however, mean that our builds were inconsistent and our deployments were extremely time-consuming and prone to human error. And a major reason for things being in this state: the previous project-based procurement model and its tendency to focus on delivering purely the scope of the current project, nothing more, nothing less. There was very little appetite left for investing in seemingly peripheral concerns like build and deployment processes.

Except that by remaining inefficient and inconsistent, these processes were a major source of waste.

Enter an alternative procurement model that values trust and is more supportive of investing in improvement and eliminating waste, and the results have been amazing.

Through automation, our build processes can be executed anywhere: on any developer’s machine or automated in any build engine (in our case we’re using Azure DevOps Pipelines). We’ve placed the majority of our build logic in Cake build scripts outside of the build engine, which enables them to be source controlled (we also source control our Azure DevOps YAML pipeline definitions) but, more importantly, provides portability between build engines if we need it. The Cake scripts do everything: calculating and injecting a semantic version number, cleaning and building our code (both .NET and Java), executing unit tests for both our .NET-based product and our PowerShell deployment code, building documentation (more on that later) and creating our deployment packages. Because we deliver both binary and source code release packages, our build scripts support both, removing source files we don’t want to deliver in the source code package along the way, and calculating and recording a checksum for each package we deliver.
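As an illustration of the shape this takes, here’s a heavily simplified sketch of a Cake script along these lines; the solution, test and artefact paths are placeholders rather than our real ones, and the versioning, Java build, PowerShell tests and checksum steps are omitted:

```csharp
// build.cake (simplified sketch; paths and names are placeholders)
var target        = Argument("target", "Package");
var configuration = Argument("configuration", "Release");

Task("Clean")
    .Does(() =>
    {
        CleanDirectories("./artifacts");
        CleanDirectories("./src/**/bin");
        CleanDirectories("./src/**/obj");
    });

Task("Build")
    .IsDependentOn("Clean")
    .Does(() => MSBuild("./src/Product.sln",
        settings => settings.SetConfiguration(configuration)));

Task("Test")
    .IsDependentOn("Build")
    .Does(() => VSTest($"./src/**/bin/{configuration}/*.Tests.dll"));

Task("Package")
    .IsDependentOn("Test")
    .Does(() =>
    {
        // Zip the build output into the binary release package
        // (the source package and checksums are produced in the same way).
        Zip($"./src/Product/bin/{configuration}", "./artifacts/Product-binary.zip");
    });

RunTarget(target);
```

Because the script is just a file in the repository, the pipeline definition reduces to a thin wrapper that invokes it, and any developer can run the same targets locally.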

We’ve also invested in automating the build of the documentation artefacts we continue to deliver in our release package: WSDL files for the SOAP-based web services, and API documentation. The WSDL files are generated fairly simply by the build script using Microsoft’s svcutil tool, but even that was previously a manually executed command. The human-readable API documentation was a more interesting exercise. We previously maintained a several-hundred-page document for our service catalogue by hand, and really wanted to generate this documentation from the WSDL files. However, we found the available tools all had limited automation interfaces and for the most part assumed a human was configuring and driving them. Instead we compromised on generating documentation from the .NET API used by WCF; although this doesn’t reflect the WSDL exactly, we’ve been able to extend the documentation with notes on interpretation. We use Microsoft’s docfx tool to generate a static HTML documentation website that can be used from the filesystem or deployed to a web server, and supply that in our release package. In the future, as we migrate to more modern frameworks and RESTful services, our automation options on this front will expand significantly.
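To make the automation concrete, here’s a sketch of how these documentation steps might be wired into the same Cake script; the assembly name, output paths and docfx project location are assumptions for illustration:

```csharp
// Documentation tasks (sketch; assembly names and paths are assumptions)
Task("GenerateWsdl")
    .IsDependentOn("Build")
    .Does(() =>
    {
        // svcutil pointed at a compiled service assembly exports its metadata
        // (WSDL and XSD files) into the target directory.
        StartProcess("svcutil.exe", new ProcessSettings
        {
            Arguments = "/directory:./artifacts/wsdl ./src/Services/bin/Release/Product.Services.dll"
        });
    });

Task("GenerateApiDocs")
    .IsDependentOn("Build")
    .Does(() =>
    {
        // docfx builds a static HTML documentation site from a docfx.json project,
        // which we then include in the release package.
        StartProcess("docfx", new ProcessSettings { Arguments = "./docs/docfx.json" });
    });
```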

Another important component of our build automation has been moving away from relative, filesystem-based references and drop locations for dependencies to NuGet packaging instead. We build two main products: a core system that implements the majority of our middleware logic and services, and a user interface that depends on it. Previously, the build for our core product would copy its build outputs to a common location, from where they’d be referenced by the UI product. However, this often led to inconsistencies between developer machines and conflicts when moving between target release branches. As part of our build automation we moved this dependency to NuGet instead, having the build process for the core product build and publish a NuGet package that is then referenced by the UI product. There were also a few challenges to solve here along the way:

  • We needed a way to allow developers to work locally and in isolation on a feature branch without needing to push and publish a NuGet package. We resolved this by building and publishing “alpha.0” pre-release NuGet packages to a filesystem-based NuGet source when working locally, which developers can optionally reference in preference to our Azure DevOps Artifacts NuGet source.

  • Our build process was executed on submission of a pull request to validate that the feature branch built successfully, but it was also publishing a NuGet package along the way. This meant release branches could end up incorrectly referencing a NuGet package built in a PR build. We didn’t find a completely satisfying solution to this, but resolved it as best we could by ensuring that PR builds don’t push packages (see the sketch following this list).
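As a rough sketch of how the two cases above fit together in the build script, something like the following; the environment variables are the ones Azure DevOps sets, while the feed URL, package path and local feed folder are illustrative only:

```csharp
// Publish step sketch (feed URL, paths and package name are illustrative only)
var isLocalBuild = string.IsNullOrEmpty(EnvironmentVariable("TF_BUILD")); // not running in Azure DevOps
var isPrBuild    = EnvironmentVariable("BUILD_REASON") == "PullRequest";

Task("PublishPackage")
    .IsDependentOn("Package")
    .WithCriteria(() => !isPrBuild) // PR builds validate only and never push packages
    .Does(() =>
    {
        // Local builds push an "alpha.0" pre-release package to a filesystem feed that
        // developers can reference in preference to the Azure DevOps Artifacts feed.
        var source = isLocalBuild
            ? @"C:\LocalNuGetFeed"
            : "https://pkgs.dev.azure.com/our-org/_packaging/our-feed/nuget/v3/index.json";

        NuGetPush("./artifacts/Product.Core.1.2.3-alpha.0.nupkg", new NuGetPushSettings
        {
            Source = source
        });
    });
```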

We now have our builds running in Azure DevOps Pipelines as part of a continuous integration (CI) pipeline, removing our reliance on a golden machine, validating that every PR and every merge to a target release branch builds and passes all of our unit tests, and making the build artefacts available for deployment.

Deployment automation has probably had an even greater impact.

Our first step here was to understand, consolidate and centralise as much shared configuration as we could. Each product has a number of components, and we found that most of them relied on the same set of configuration settings, usually duplicated (and often inconsistent!) across distinct component-specific configuration files. We centralised as much of this configuration as we could into a shared configuration folder, referenced by each component by way of symbolic links. Components remained unaware that the configuration files they were referencing actually lived somewhere else, while we could maintain each setting once and keep it consistent across components.

Our next step (still a work in progress) has been to migrate as many configuration settings as possible into our application database and abstract the mechanism for obtaining and maintaining them behind a programmatic interface and configuration API. This is another step in centralising configuration settings: from component-specific files, to shared files (but potentially duplicated across servers), to a central store in our application database. It will also mean that, hopefully in the not-too-distant future, we can supply a user interface for administrators to manage these settings instead of needing to dive into XML-based configuration files.
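The programmatic interface is deliberately simple; a minimal sketch of the kind of abstraction we mean (the names here are hypothetical) looks like this:

```csharp
// Hypothetical shape of the configuration abstraction; names are illustrative.
public interface IConfigurationStore
{
    // Reads a setting, falling back to a default when it hasn't been set yet.
    T Get<T>(string key, T defaultValue = default);

    // Persists a setting to the central store (the application database).
    void Set<T>(string key, T value);
}

// Components depend only on the interface, so settings can move from
// component-specific files, to shared files, to the database without
// the consuming code changing.
public class RetryPolicyFactory
{
    private readonly IConfigurationStore _configuration;

    public RetryPolicyFactory(IConfigurationStore configuration) => _configuration = configuration;

    public int MaxRetries => _configuration.Get("Messaging.MaxRetries", 3);
}
```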

Having made improvements to configuration, we extended this work to replace our 100+ pages of deployment instructions with PowerShell-based deployment scripts. Our deployment scripts now automate the majority of deployment activities, from ensuring required prerequisites are installed, to installing and configuring common components such as the remaining shared configuration and certificates, to installing and configuring IIS application pools and websites and Windows services for our runtime components. The scripts completely tear down and rebuild each component every time they’re executed, ensuring we’re in a known state each time we deploy. They also run a suite of verification tests following deployment, so we can detect whether our products are in a correctly functioning operational state during deployment, rather than only discovering a problem on first request or use.

We’ve also automated the deployment of our application databases, moving away from manually executed T-SQL scripts to using the DbUp framework. This allows us to upgrade a database from empty or a previous release version using the same component. By structuring our scripts into stateful scripts for database tables and data that are executed only once and codeful scripts for functions, views and stored procedures that are executed on each upgrade, we’ve also improved the consistency of our database deployment and our PR and merge processes for database code.
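For readers unfamiliar with DbUp, here’s a simplified sketch of how that split works: journaled scripts run exactly once, while scripts routed through a NullJournal are reapplied on every upgrade. The script folder names are placeholders:

```csharp
// Simplified DbUp sketch; script folder names are placeholders.
using System.Reflection;
using DbUp;
using DbUp.Helpers;

public static class DatabaseDeployer
{
    public static void Upgrade(string connectionString)
    {
        var assembly = Assembly.GetExecutingAssembly();

        // "Stateful" scripts (tables, reference data): journaled, so each runs exactly once.
        var schemaUpgrade = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(assembly, name => name.Contains(".Schema."))
            .LogToConsole()
            .Build();
        schemaUpgrade.PerformUpgrade();

        // "Codeful" scripts (functions, views, stored procedures): the NullJournal means
        // they're reapplied on every upgrade, keeping them consistent with the release.
        var codeUpgrade = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(assembly, name => name.Contains(".Programmability."))
            .JournalTo(new NullJournal())
            .LogToConsole()
            .Build();
        codeUpgrade.PerformUpgrade();
    }
}
```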

We now use these deployment artefacts to continuously deploy to our cloud-hosted test environments, validating that our product remains in a deployable state across all of the Windows Server operating systems we support. The scripts also support side-by-side deployment of the same or different release versions to a single server, providing flexibility around configuration and logical test environments.

Our deployment automation has significantly increased consistency of our deployments and reduced time to deploy from hours or days to seconds or minutes. Better still, we deliver the same deployment artefacts to our customers, meaning they gain the same benefits and no longer need to wade through our 100+ page deployment document!

Next Time

That wraps up the second article in our series. Next time we’ll conclude the series with how we test, implement & support – see you next time!