10 Practical Tips for Azure and Visual Studio Team Services

Saturday, September 02, 2017

Having recently completed a contract with a large consulting firm on a sizable internal project, I thought I'd write down a number of lessons learned from that experience to help anyone looking at the DevOps practice as a whole.

The application we managed was an internal one consisting of two websites, twenty microservice APIs, and DocumentDb (since rebranded CosmosDb) and SQL Server databases, among other artifacts. The web apps and microservices were to be deployed into an App Service Environment (ASE) with networking configured to point into the organization's internal network, requiring employees to access the site from that network or over VPN.

At the risk of turning this into a long-winded post, I'm going to break it down into a collection of short sections to expand on in further posts/discussion.

Bring your DevOps team in early

The DevOps team wasn't added to the project until the development team was preparing to deploy its solution into the staging and production platforms. It was assumed that, because the development team had invested in ARM templates and scripts for deploying into their 'lower' environments, the DevOps team could simply inherit these templates and use them to migrate into the 'upper' environments. That would have been a reasonable assumption if all of the environments were consistent. However, this application required the upper environments to run on ASEs, which hadn't been used or tested at any point in the development cycle.

The DevOps team needs just as much time as the development team when it comes to deployment preparation. While it was helpful to have the templates and scripts built by the development team, your team needs time to prepare, to plan, and to deploy several times over before your solution goes live. Over time, as your DevOps team matures, your teams will find ways to make these processes efficient and optimized for your organization's needs, but in the early stages of team deployment, the time spent orienting and setting up is critical.

Strategy is everything

In the early phases of your project, several layers of planning will help your DevOps team prepare well. Strategies around artifact naming, monitoring/logging, security, scaling, backups/reliability, and even change management should be discussed and planned to the furthest extent possible among the dev, DevOps, and other related teams.

Keep separate repositories for Dev and DevOps

In our project the development team had deployment scripts and ARM templates set up to help them automate their deployments. As the DevOps team took ownership of the deployments, the deployment artifacts remained in the development repositories. This was a mistake.

VSTS supports git submodules, which allow projects to reference other repositories as part of their overall solution structure. From a development standpoint this comes in very handy, as you can consolidate source code and make your efforts easier. But it is also a boon to DevOps teams: keeping deployment artifacts in their own repository lets the DevOps team maintain ownership (and, more importantly, control) over those resources, while still sharing them with the development team, which will need access and may even need to make edits to these artifacts.
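As a minimal local sketch of that wiring (all repository and file names here are placeholders I've invented, not from our project), a dev repo can reference a DevOps-owned artifacts repo as a submodule:

```shell
# Hypothetical sketch: a development repo referencing a separate DevOps-owned
# repo of deployment artifacts as a git submodule. Everything runs locally.
set -e
work=$(mktemp -d)
cd "$work"

# Simulate the DevOps team's repository holding ARM templates and scripts
git init -q deploy-artifacts
cd deploy-artifacts
git config user.email devops@example.com
git config user.name devops
echo '{}' > azuredeploy.parameters.json
git add . && git commit -qm "initial deployment artifacts"
cd ..

# The development repo pulls it in as a submodule under ./deploy
git init -q app-source
cd app-source
git config user.email dev@example.com
git config user.name dev
# newer git versions block file-path submodules unless explicitly allowed
git -c protocol.file.allow=always submodule add ../deploy-artifacts deploy
git commit -qm "reference the DevOps deployment repo"

git submodule status    # shows the exact commit of deploy-artifacts in use
```

The submodule pins a specific commit of the DevOps repo, so the development team always builds against a known version of the deployment artifacts, and the DevOps team controls when that pin moves.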

Establish a deployment cadence

One of the most helpful things a DevOps team should establish as quickly as possible is a deployment cadence; that is, the ability to quickly and easily deploy everything they need to any environment at any given time. Not only does this give the DevOps team the opportunity to practice its processes, it lets them encounter, address, and resolve issues early, reducing deployment risk when the time comes to move into production. For our project, we had ASEs in the upper environments that we didn't get to until the last minute, which was a tactical failure given the complexity of the deployment and its networking requirements. There is nothing worse than pushing out go-live dates because of unanticipated problems that could have been addressed with a few rounds of testing.
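One way to make that cadence cheap is a single entry point that can stand up any environment on demand. This is only a sketch under assumed names (the resource-group convention and per-environment parameter files are my own placeholders), and it defaults to a dry run that prints the Azure CLI calls instead of executing them:

```shell
# Hypothetical per-environment deploy wrapper; all names are placeholders.
set -e
ENVIRONMENT="${1:-dev}"
RESOURCE_GROUP="contoso-app-${ENVIRONMENT}-rg"        # assumed naming scheme
PARAMS="azuredeploy.parameters.${ENVIRONMENT}.json"   # one parameter file per environment
DRY_RUN="${DRY_RUN:-1}"                               # default to printing, not executing

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run az group create --name "$RESOURCE_GROUP" --location eastus
run az group deployment create \
    --resource-group "$RESOURCE_GROUP" \
    --template-file azuredeploy.json \
    --parameters "@$PARAMS"
```

When rehearsing a deployment is one command per environment, running it "several times over" before go-live stops being a burden.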

Build an 'undeployment' process

For every deployment to every environment, there should also be an 'undeployment' that tears everything back down. This is necessary to keep costs in check, as you're charged for everything left running in the cloud.
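If each environment lives in its own resource group, the undeployment can be a one-liner. The group naming below is an assumption for illustration, and the script defaults to a dry run that prints the command rather than executing it:

```shell
# Hypothetical teardown script: deletes everything in an environment's
# resource group. The naming scheme is an invented placeholder.
set -e
ENVIRONMENT="${1:-qa}"
RESOURCE_GROUP="contoso-app-${ENVIRONMENT}-rg"
DRY_RUN="${DRY_RUN:-1}"   # default to printing instead of deleting

CMD="az group delete --name $RESOURCE_GROUP --yes --no-wait"
if [ "$DRY_RUN" = "1" ]; then
  echo "would run: $CMD"
else
  $CMD
fi
```

Grouping an environment's resources this way is itself a design choice worth making early: it turns "get rid of everything there" into a single, safe, scriptable operation.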

Staff your team with people you need

We encountered several networking issues during our deployments. Our DevOps team needed to coordinate closely with the internal networking team, which was responsible for setting up security platforms for the network. One issue that arose revolved around confusion over roles and responsibilities for the network configuration as a whole: the DevOps team was responsible for getting the ASE deployed and configured, but the network team owned the network itself. It would have been helpful to have someone from that team as part of ours.

Keep developers out of VSTS

There is nothing worse than having a build or deployment go bad, and finding the source of your problems coming from a developer's 'tweak' that wasn't communicated to the DevOps team. (This goes doubly for deployment script changes, but we just put everything into git submodules, so we've already got that covered).

In short, don't let them do it, and keep them locked out of the VSTS site. If they need access to their deployments, take advantage of the VSTS security groups and give access to a single point of contact who will be responsible for kicking those off. But at the very least, keep your exposure minimized so that you know who is doing what within the platform.

Favor parameter files over parameter settings

Our ARM templates had a somewhat complicated naming convention for all deployed Azure artifacts, following a format of {company-name}{project-name}{artifact-prefix}{name}{environment} or something similar. For some artifacts this approach made sense, especially those with a public-facing unique endpoint. But for many others, it was more confusing than helpful. Worse, the names for different artifacts were assembled from several different variables, and these variables were configured in different places all over VSTS. Now, VSTS is a very nice platform for building and deploying solutions, but it requires a lot of interaction over the Internet, through a web browser, to get things done. Navigating through all of the configuration options requires a lot of network activity, and any kind of latency means it's going to take some time.

I strongly encourage anyone working with VSTS to favor ARM template parameter files over setting anything up in the VSTS user interface, wherever possible. It's not that the UI is bad; indeed, it has improved significantly over the past six months alone. But a more efficient approach will always involve the command line: it's much faster to tweak a JSON file and run a PowerShell script than to navigate through VSTS, and the more comfortable you become with Azure and PowerShell, the easier it gets.
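As a small illustration (the parameter names and values here are invented, not our project's actual templates), a per-environment parameter file keeps all of this out of the VSTS UI and under source control:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment":    { "value": "qa" },
    "appServicePlan": { "value": "contoso-app-qa-plan" },
    "sqlServerName":  { "value": "contoso-app-qa-sql" }
  }
}
```

Deploying then takes one command, e.g. `New-AzureRmResourceGroupDeployment -ResourceGroupName contoso-app-qa-rg -TemplateFile azuredeploy.json -TemplateParameterFile azuredeploy.parameters.qa.json`, and changing a value becomes a one-line, reviewable diff rather than a hunt through the VSTS web interface.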

Learn and Leverage the VSTS API

The VSTS API lets you kick off build/deployment processes and gives you all kinds of information about the platform as a whole. Even better, you can access it with curl and other simple tools. Spend some time getting familiar with this API and how it works, and you'll find ways to unlock further productivity for your team.
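For instance, queuing a build is a single POST. The account name, project, definition id, and token below are all placeholders; to keep this sketch self-contained, it assembles and prints the curl invocation rather than calling a live account:

```shell
# Hypothetical sketch of queuing a VSTS build via the REST API.
# All identifiers are placeholders; nothing here touches a real account.
VSTS_ACCOUNT="fabrikam"          # i.e. {account}.visualstudio.com
VSTS_PROJECT="internal-portal"
DEFINITION_ID=42                 # the build definition to queue
PAT="not-a-real-token"           # a VSTS personal access token

URL="https://${VSTS_ACCOUNT}.visualstudio.com/DefaultCollection/${VSTS_PROJECT}/_apis/build/builds?api-version=2.0"
BODY="{\"definition\": {\"id\": ${DEFINITION_ID}}}"
# PATs are sent as basic auth with an empty user name
AUTH=$(printf ':%s' "$PAT" | base64)

echo curl -s -X POST \
  -H "Authorization: Basic $AUTH" \
  -H "Content-Type: application/json" \
  -d "$BODY" \
  "$URL"
```

Because it's just HTTP, the same pattern drops straight into cron jobs, chat-bot hooks, or any other glue your team already has.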

The only constant is change

In the six months spent working on the client's project, Microsoft's DocumentDb product became CosmosDB, several steps used in our VSTS platform became obsolete and/or required upgrading, and a much-improved App Service Environments version 2 was released - and those are just the changes I can recall that had an impact on our project.

To deploy to the cloud is to deploy to a constantly changing and improving platform. Generally speaking, these changes are for the better, but you'll have to be prepared for them up and down the chain, whether it's a new feature to take advantage of or a pricing change requiring a change to your solution's consumption habits. Developers, management, and executives alike need to remain alert and watchful of all of these things within their cloud application portfolio and be ready to make whatever changes are needed. Having a strong DevOps team with strategies and management practices in place will go a long way toward making organizations ready to handle these changes.