Dev*Ops and You

Something hit me recently about the DevOps ecosystem. What finally tipped me over the proverbial cliff was a client meeting where an internal client was saying that what we were providing – DevOps as a Service (DaaS) – was not going to help them in their DevOps Transformation Program. Internally, I was reeling.

They mentioned DevSecOps – because they had searched and found it on the web – as a possible direction for their needs. To be frank, I was flabbergasted. I wanted to ask: “So normal DevOps is different from DevSecOps or another Dev<insert IT tech acronym here>Ops?” Really?!

I continued to listen quietly for the rest of the meeting while my mind raced about the consequences of isolating all the other departments that have a stake within the umbrella of DevOps. Because of a single acronym that changed only a few letters, things could get ugly and isolation could set in.

Egad! I realized that the overuse of a term (DevOps) has had unintended consequences: imposed isolation, and increased skepticism about how DevOps can transform a current state of ineffectual development and deployment strategies into smooth development/deployment pipelines.

As a tech lead and “multi-hat” contributor, I have to be aware of all the interactions and integration points my software has. As an architect, I have to understand the inner workings of the many different systems in my ecosystem so that each performs its function meaningfully. In this way, I can confidently deploy my application quickly and operate it without roadblocks or impediments. Without that vision and foresight – by being so focused on just getting a work item off of my backlog – I can easily fail to ask:

  1. What will it take to maintain the application, short term and long term?
  2. How performant is it?
  3. How secure is it?
  4. Can it scale?
  5. etc.

So now, after all of that, I would like to help define a new term: Dev*Ops.

Dev*Ops (DevStarOps)

  1. An inclusive, multi-discipline team that performs the actions of developing, securing, maintaining, and operating an application for its entire lifecycle through the use of process and products for the continuous enablement of added value to the customer.
  2. The concept of involving various disciplines within an organization to add value from start to finish, ensuring a more robust interaction between people, process, and product.

Let me explain. In my interactions with current clients, would-be/should-be clients, and colleagues, I find that the term DevOps is a mangled, misused, and truly misunderstood acronym. People can’t seem to think beyond the concatenated, trimmed words of Development and Operations. To them – don’t get me wrong – there are only two teams that exist in the creation, care, and feeding of an application. What they fail to realize is that other teams and entities should belong to the same ecosystem and have the same stakeholder access. Each narrowly focused team (e.g., InfoSec, Security, Infrastructure) in most organizations is deeply siloed – a complete fiefdom with its own rules, processes, and governance. Each of these little kingdoms plays a role in the overall care and feeding of an application during its lifecycle – sometimes to the chagrin of the project lead, who is most likely trying to meet an impossible deadline handed down from leadership. Involving these other important stakeholders is always in the application’s best interest and leads to success. More often than not, though, their input is pushed to the side as a “we’ll get to it if we have time”.

What first comes to mind, with this lack of involvement, is Security. Given the recent fallout from Equifax’s security breach, Security’s leap from obscurity to OMG-hair-on-fire just goes to show how little involvement it gets when an application is architected and developed for consumption. The typical response is “we will fix it if it becomes a problem”. That response leads to low morale, long hours, and missed time off when the problem finally exhibits itself.

So how do we combat this kind of manufactured isolationism, whether it is self-imposed or company culture (“this is how we have always done it – why change now?”)? How do we involve those teams that are crucial to our application’s success without alienating them, because involving them is an inconvenience and adds to the timeline of things to check before deploying to production?

From what I have seen in the past, developers don’t really like security – or anything else (happy path, anyone?) – because it hampers their “creativity”, their timelines, their tasks, and the “just get it done, POC-type” attitude. Project/development leads and administrators see only the budget and the timeline. Project managers see only tasks, burn-down charts, over-capacity, and deadlines. Not one of them is asking the questions about capacity, security, longevity, and a host of other things required to operate, maintain, and secure the application.

The very short answer is that for each project that gets started, the PM/Architect/Lead needs to involve the wide range of other teams within the IT kingdom. In some cases, for “brownfield” projects, this technique can act as a disruption that leads to automation, innovation, and morale boosts.

Then, after you identify your team – collected from all disciplines in your org – you get them all in the same room! And you keep them there until everyone agrees on what the architecture, security, performance, and deployment approach is going to be for that particular project/application. You develop a timeline for when each discipline will become more solidly involved with the core team, and you execute that plan. Otherwise, you will end up with a lot of finger pointing and a development/deployment mess that equates to lost revenue and increased overtime from bad deployments, late infrastructure creation, and bad pipeline management.

An additional point to the above: everyone needs to stop being snowflakes. You are a team; you all have to work together, no matter what, for the sake of the great idea called an application. Business politics aside, involve everyone who is going to either have a point of interest to pursue or a negative aspect to raise, as quickly as possible. You can’t, in the last hour of the last day, realize that your application is going to epically fail due to closed-mindedness. Inclusion of other disciplines in your design, development, and execution is absolutely critical!

As an engineer, I understand the need – nay, the intense desire – to produce a solution from the beginning, from my own isolated viewpoint. That mentality is hindered by a lack of perspective, and you know what you just have to do? Stop, think, and involve.

Otherwise, you end up in the minutiae of trying to solve a problem by yourself, in your isolated bubble. When that happens, timelines slip, budgets overrun, and morale sinks, and then there is a rush to just get something out, no matter what. As a lead, you MUST stop, think, and involve! As a worker bee, you MUST stop, think, and involve! Otherwise, you push aside the very people who could save your timelines and your sanity.

As a tech lead, dev lead, sec lead… whether you are a lead or a worker bee, become involved, and think about how your team and your project are going to exist in your environment/ecosystem! Don’t think about the solution; think about the people and processes that can enable your solution. Don’t think about the timeline as an absolute! And try not to just “pitch your pull request over the fence” and forget about it! I have seen a lot of teams do this and expect the “DevOps” engineer to do the rest of the work for them, primarily because they just want to develop and not take ownership of the project and the process that make them successful.

To this end, I would like all teams (present and future) to think about:

  1. Who the application affects. Who are the stakeholders? Customers? Internal teams? – Involve them throughout your process.
  2. What the app provides, as a service or otherwise. Who consumes it? Is it secure? – Involve those who are SMEs in the field.
  3. See the pipeline – it doesn’t have to be pretty in the beginning, but it has to work. The pipeline starts with the idea and ends with deployment into production.
  4. Communicate, communicate, communicate, and involve! – This is the key point! It’s not about you! It’s about everyone and what they can provide to your success!

I will end with this: stop, think, and involve!


Managing DevOps as a Service (Part 2)

TL;DR

In any organization that is willing and able to go the route of DevOps, the first step to getting your teams on board is to establish a bridging team that facilitates the transition from “just developers” and “just operations” to a merged Dev and Ops.

Development Operations as a Service in Practice

In my first post, I went into some detail about DevOps as a Service (DaaS). Now for the interesting part. Remember what I alluded to earlier? How do you deal with a large number of teams and still keep things under control and running smoothly in a 24/7 connected world? The other aspect here is the ability to help transition existing teams’ (brownfield) siloed approach to development and operations into a much smoother series of ownership from idea to reality.

There are three things that are imperative here.

  1. Isolation
  2. Deployment Practice
  3. Environments

My definition of Isolation is:

  • Being able to deploy code to a server that does not host another application or other conflicting configurable item
  • Being able to retrieve and display atomic, transactional deployment data
  • Having end-to-end granularity of “who did what and when”

My definition of Deployment Practice is:

  • You have a single source of truth (Source Control)
  • Your rollback strategy is more about “rolling forward” with newer fixes 
  • You do not keep backup copies of configuration files or other files within your codebase (source control is your backup)
  • You have tests and other gated check-in checks that ensure code solidity and robustness

My definition of an Environment is:

  • A contiguous collection of servers that represents where you will deploy your codebase. This includes DBs, frontends, and middleware servers.
  • An environment can be a single server or multiple servers that represent your application’s ecosystem (on-prem, Azure, AWS).

Now that we have my definitions out of the way, let’s build a robust DaaS architecture. This is a general practice, but it can be applied at a lot of different scales in your enterprise.

Putting it into practice


The toughest part of setting up DaaS is determining the boundaries and scope of work related to each team. From a central perspective, it is better to be a conduit for each team’s code pipeline. Being the stop-gap between the developer and the deployment when implementing DaaS defeats the purpose: enabling your teams to take full ownership of their codebase.

One thing that I am going to stress here is that:

“Developers know their code (or at least they should). They understand the configurations, the nuances, the quirks. Writing those concepts into a document for someone else to implement is very difficult and leads to miscommunication and poor implementations and deployments.”

As an organization, small or large, how do you mitigate the risks without putting more checks and balances in place? Infrastructure is hard-pressed to ensure stability and security. Database operations is hard-pressed to ensure security and data integrity. Leadership is hard-pressed to keep the lights on and the money rolling in. When you have an entire organization that is siloed, with distrust from previous missteps, it is hard to have a cohesive team.

DaaS helps bridge that gap by ensuring that, prior to deployment, there are checks and balances within the build and package process for an application. It also ensures that, given the fluid nature of deployments, there is consistency throughout the entire process.

Getting the Band Back Together

Implementing DaaS means being a facilitator, not an implementer. It is a bridge, staffed on a rotating basis with personnel from Operations, Infrastructure, and Development, that forms a cohesive group, garners trust, and is able to teach best practices to new team members – whether they are just out of school or have been in the industry for some time doing things differently.

Think of your favorite band that started out strong, then broke up to do solo gigs. It seems to work for the individuals but not for the group. The lead singer is a diva, while the drummer and guitarist provide the support that makes the sound right.

By putting your Operations people, Developers, and Security into the same room for the duration of a project, the long-term effect can be robust applications with long-term benefits. For one, the developers will now understand how their app works in the server technology and how security binds it all together. The Operations team will understand what the developers are thinking and building, and can guide them through the nuances of the server technology. The Security folks will see how to work with the seeming complexities of different development technologies and provide guidance on securing an application that was once developed in a vacuum on bare metal.

In the end, as with all band reunions, there is a natural synergy between all involved, less finger pointing, and more robust application development and deployment.

Fixing the Cost of Poor Quality Deployments

I had a hallway conversation with a colleague of mine the other day about the benefits of Continuous Deployment and how they could translate into discussion points with clients about the role DevOps can play within an organization. During the course of our conversation, I spun up a side thread and started thinking about how one could approach the topic from where most businesses live and breathe – their bottom line. Approaching a potential or even current client with a DevOps solution for their organization should really be about their cost savings. Time (speed to release) is also a factor, but I am going to approach this from the tack of cost. Depending on the customer or the client, the cost of deploying is more of a factor than speed; if you address cost first and tie that into speed, it is a double win.

Calculating the costs of deployments

Most organizations can claim that they have an automated deployment process, but it usually includes an individual either running a script on the destination server or copying folders, files, and even configurations to the destination.

How do you calculate those costs? Think of it in man-hours: each employee of a company has a cost associated with them, so each man-hour has a cost associated with it. Now we can create the formulae:

Cost per man-hour (CPMH) = average hourly rate for one person

Cost of Deployment (COD) = # of hours × combined CPMH for the personnel involved

For example:

We have 4 personnel, each with a $100.00 hourly rate, so one hour of the whole group’s time costs $400.

Combined CPMH = $100.00 × 4 = $400/hour

COD = 4 hours × $400/hour (combined CPMH)

So, simplistically, any one deployment costs 4 × $400, or $1,600.00. Those 16 man-hours are never recoverable, and the personnel who could be doing other, more valuable things are burning money babysitting a deployment.
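To make the arithmetic concrete, here is a minimal sketch in PowerShell (the function name and output shape are mine, purely for illustration) that computes the combined CPMH and the COD from a list of hourly rates:

```powershell
function Get-DeploymentCost {
    param(
        [double[]] $HourlyRates, # one entry per person involved
        [double]   $Hours        # wall-clock hours everyone is tied up
    )

    # Combined CPMH is simply the sum of everyone's hourly rate.
    $combinedCpmh = ($HourlyRates | Measure-Object -Sum).Sum

    [pscustomobject]@{
        CombinedCPMH = $combinedCpmh
        ManHours     = $HourlyRates.Count * $Hours
        Cost         = $combinedCpmh * $Hours
    }
}

# 4 personnel at $100/hour, tied up for 4 hours:
Get-DeploymentCost -HourlyRates (@(100) * 4) -Hours 4
# -> CombinedCPMH: 400, ManHours: 16, Cost: 1600
```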

Donovan Brown is quoted as saying, “Never send a human to do a computer’s work”. He is absolutely right, and here’s why. Humans are by nature fallible; mistakes will be made even with rigid checklists and stringent policies. Steps in a repeatable process that a human follows can and will be overlooked, and overlooking a step, or even typing a wrong character, can lead to errors in a deployment. So if you look at how employing a DevOps solution can benefit an organization, you have to calculate your cost savings in the man-hours recovered by using automated tools to perform the repeatable, basic function of a deployment.

Cost of Poor Quality Deployments (COPQ-D)

Above I spoke about the basic costs of performing a deployment, but what about the cost of poor quality deployments? Those are the deployments that fail, where any number of developers and other personnel are immediately brought onto a bridge call to troubleshoot or provide other support during the course of the failure.

For example: we have a failed deployment that takes 8 personnel a total of 6 hours to troubleshoot, diagnose, and determine a fix.

Person 1 = $100.00/hour; Person 2 = $125.00/hour; Persons 3–8 = $75.00/hour; Combined CPMH = 100 + 125 + (75 × 6) = $675/hour

Personnel Cost (PC) = # of hours × the Combined CPMH

PC = 6 × $675 = $4,050

In the end, the company paid its employees $4,050 because of a failed deployment; coupled with this, you also have to calculate the business lost to an unavailable site. Another item that adds to the overall cost of a bad deployment is the cost of rolling your application back to a known state.
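Using the same hypothetical Get-DeploymentCost sketch from above, the failed-deployment scenario works out like this:

```powershell
# 8 personnel on the bridge call for 6 hours, at mixed rates:
$rates = @(100, 125) + (@(75) * 6)   # persons 1, 2, and 3-8
Get-DeploymentCost -HourlyRates $rates -Hours 6
# -> CombinedCPMH: 675, ManHours: 48, Cost: 4050
```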

These are just small examples, but look back at some of your previously failed deployments: tens of people on a call, most of them idle, while one or two individuals performed screen shares and others fought for verbal control of the situation.

Fixing it

The fix is really about being defensive in your ability to build, deploy, and test your compiled code before it ever reaches your production environment. To perform this fix, you need to ensure your pipeline carries a single version of the code that has progressed through your lower test environments, with increased testing at each stage.


Next is building confidence in your build (branching and versioning). Ensuring that your branches are short-lived and that your main branch is the single source of truth is critical to your team’s success. Consistent versioning is also important here: every build that is performed should have its own version, even for very minor changes to your codebase.
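As one illustration of that versioning discipline – a minimal sketch, assuming your build number format happens to be a valid four-part assembly version (e.g., 1.0.0.42) – you could stamp the agent-supplied build number into your AssemblyInfo files during the build:

```powershell
# Stamp the build number into every AssemblyInfo.cs so each build,
# however minor the change, carries its own version.
$version = $env:BUILD_BUILDNUMBER   # set by the TFS/VSTS build agent

Get-ChildItem -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) `
        -replace 'AssemblyVersion\("[^"]+"\)', "AssemblyVersion(""$version"")" |
        Set-Content $_.FullName
}
```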

Accurately describing your environments (Dev, Stage, Prod) and the server(s) that reside in each, along with the roles each server performs, is another critical step for success. Knowing what each server does, and carving up your packages to focus on that role, is one more important factor in maintaining consistency and accuracy in your deployments. Configuration values for each environment are key and should be kept out of your package codebase. Extracting those values to a central location, keyed by machine role and environment, adds to the consistency of your pipeline and allows for quick changes if the values change at any point for any reason.
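As a sketch of what that extraction can look like – the file name, keys, and values here are invented for illustration – a deployment script could pull its settings by environment and machine role at deploy time:

```powershell
# Hypothetical central configuration lookup, keyed by environment and role.
$environment = "Stage"      # Dev, Stage, or Prod
$role        = "Frontend"   # the role this server performs

$allSettings = Get-Content .\central-config.json -Raw | ConvertFrom-Json
$settings    = $allSettings.$environment.$role

Write-Host "Connection string: $($settings.ConnectionString)"
```

Because the values live outside the package, a changed connection string becomes a one-file edit rather than a rebuild and redeploy.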

Finally, now that you have an accurate build from a single source of truth, and your environments and roles are established, it comes down to creating a deployment process that can be used across all of your environments with no variation. Here is where the cost savings come in.

  1. You have a consistent build and deployment process
  2. You have a consistent auditable trail from changeset/gitcommit to deployment of your code
  3. You have near immediate feedback from your team and are able to ensure faster delivery times for fixes or changes

If you have a full CI/CD pipeline for your codebase, the cost of deployments becomes trivial, because you are no longer paying for the humans who invariably make mistakes. If a normal deployment before automation took 4 hours with 4 people at an average of $100/man-hour, that would be $1,600 for an ideal manual deployment. Now, if a developer just changes some code and checks it in, the automation takes over, and the developer is free to work on other items while the deployment occurs.

For the sake of argument, if a normal pre-automation deployment took 16 man-hours, and we now let the servers work in a consistent, automated fashion that takes only 20 compute-minutes, the comparison works out to:

16 × 60 = 960 man-minutes

960 man-minutes / 20 compute-minutes = 48 – a 48× speed-up (the manual process consumed 4800% of the automated time)

Now with that type of speed up and reduction of cost you have the ability to add more features, kill more bugs, and generally put the latest information in front of your testers or consumers.

Managing DevOps as a Service (Part 1)

DevOps Challenges

One of the bigger challenges I see in the DevOps space is when you attempt to initiate DevOps as a Service (DaaS).

My Definition: DevOps as a Service (DaaS)

  1. Performing the actions that allow one or more teams to deploy codebase(s) to multiple environments (one or more servers each) and maintaining those servers within specifications.
  2. Maintaining multiple environments across globally distributed teams with a follow-the-sun approach.
  3. Allowing code and deployments to simply pass through, while maintaining the infrastructure that enables the teams.

Typically, what you see with small and medium-sized teams is that one or two members of the development team are involved in the operations space as well. This works for smaller teams who have intimate knowledge of their environments, codebase, and configurations. But what if you had to control hundreds of teams? Different codebases? Different time zones? Different environments? Hundreds, if not thousands, of servers spanning on-premise, AWS, and Azure? What do you do, and what can you do?

A friend of mine, Damian Brady, wrote about DevOps as a culture. It truly is a culture: you have to have ownership of the work that you are doing. All too often, I see teams develop and test locally and assume that their codebase is going to work in the various environments they deploy to – a “throw it over the fence” style of development and deployment. This is problematic when the developers assume that the operations team knows and understands the nuances of their codebase.

Categories of Development and Operations


Let’s define, in simplistic terms, what I mean by the categories of Development and Operations. Development is really the design and coding of an application or API to be consumed by internal or external parties. Operations is the continuous maintenance of the application or API once the Development portion of the job is completed and the team that originally developed the code has rolled off onto other projects or clients. In a nutshell, this “old school” approach has left many a maintainer performing Development work that really isn’t their forte. Developers are so focused on their timeline and their code that they sometimes don’t understand, or don’t want to understand, the underlying infrastructure that serves up their application. This leads to the dreaded WOMM effect and its consequences.

My definition of WOMM is “Works on My Machine”. It is the most basic build, on the local machine, that “just works”: press F5 and go. It is also, from my point of view, a bacterial or viral disease that most, if not all, development teams contract at some point in their effort to quickly get code developed for consumption.

Many consulting teams in my experience fall into the Development Only category for a few reasons:

  • Contract is Fixed Price or has limited scope
  • Operations, like Documentation, is the first to be cut from a contract to make the cost of work attractive
  • Business developers sometimes fail to grasp the long-term effects of a short-term project and scope the deal too narrowly, in the hope that the contract, once signed, leads to further work. Sometimes this bet pays off; most other times it does not.

Operations teams, on the other hand, sometimes have unfounded aversions toward developers, for a few reasons of their own:

  • Developers inherently create bad code that breaks functioning applications and infrastructure – emphasis on the infrastructure. 
  • Developers don’t really understand infrastructure or how their codebase can work on one set of servers and yet not work at all in Production

Merging of Development and Operations

Now that we have the basics, we can build up and look at the merging of developers and operations. In my experience, I have seen a few larger organizations where there is such a large divide between developers and operations that members of the operations team end up becoming the de facto hidden developer, bug fixer, and tester for development teams. I have fallen into that category myself in the past, when I started branching out from development into the operations space.

In my opinion, it is very good for developers and operations teams to have a rudimentary understanding of each other’s space. Yes, there are purists out there who would contend otherwise, but in order to be an effective team and produce a strong application for consumption, this is a critical piece of DevOps. Not only should developers understand the infrastructure that will host their application, they should be developing on it as well. There are edge cases to this argument, but in general it is good practice. Likewise, it is good for the operations team to understand the developer space and be given a crash course for when the code becomes unconfigurable and broken.

DevOps is about continuous ownership, from planning the application, to IDE development, to source control, to build, and finally to deployment. It ensures developers understand that putting their code into production is not the responsibility of a select few Wizards of Oz; the developers and the operations team go on the journey together to ensure a smooth and proper deployment.

How to Update Octopus Deploy Tentacle to Restart Automatically

I have seen this too many times before: users (developers) unaware of a server reboot that caused their Tentacle to stop. Here is a defensive way to make sure your Tentacles start consistently.

Here is how to update the Octopus Tentacle service so that, when the server is scheduled for downtime or rebooted in general, the Tentacle does not end up stopped or considered offline in the Octopus Deploy server UI. Typically, installing a Tentacle on a destination server does not configure it to recover gracefully if something goes wrong, which can be troublesome during patching or long server reboots. The manual steps are below; a scripted version follows at the end.

  1. Open PowerShell as Administrator and run services.msc.
  2. Scroll to the OctopusDeploy Tentacle service, right-click, and select Properties.
  3. Change Startup type to Automatic (Delayed Start).
  4. Switch to the Recovery tab and change the First, Second, and Subsequent failures to Restart the Service.
  5. Click Apply or OK.

You will now have a Tentacle service that is more stable after your machine boots from either a scheduled or maintenance reboot.
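If you would rather script those clicks than repeat them on every server, here is a minimal sketch using sc.exe – assuming the default service name of “OctopusDeploy Tentacle” (check services.msc if yours differs):

```powershell
# Run from an elevated PowerShell prompt.
$svc = "OctopusDeploy Tentacle"

# Startup type: Automatic (Delayed Start)
sc.exe config $svc start= delayed-auto

# Recovery: restart on the 1st, 2nd, and subsequent failures,
# restarting after 60 seconds and resetting the failure count daily.
sc.exe failure $svc reset= 86400 actions= restart/60000/restart/60000/restart/60000
```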

Benefits of using Octopus Deploy Integration Tasks in vNext Builds

If you are like me and you use Octopus Deploy for deploying your projects, it can be a challenge to keep your OctoPack version updated. Restoring the OctoPack NuGet package each time you build with VSTS or TFS can be a challenge: a TFS XAML build will fail because it cannot find the OctoPack targets and associated DLLs. A workaround is to check in the packages folder that contains the OctoPack targets file and DLLs with your codebase, but that can be messy and leaves leftover artifacts when you wish to upgrade OctoPack to a newer version.

Another detractor to leveraging OctoPack in your solution: sometime around version 3.4, a number of breaking changes were introduced that caused NuGet push issues. The teams I work with on a daily basis are still on an older version of Octopus, and when they installed the latest version of the Tentacle, issues started to crop up, along with failed builds and pushes to the internal NuGet repository.

So what are the benefits? According to the marketplace documentation, you can still use your OctoPack MSBuild arguments, but that doesn’t really apply to your older XAML builds.

Benefits of Octopus Deploy Integration

Some of the larger benefits when using the Octopus Deploy integration steps are:

  • You are always up-to-date
  • You have a clean project (no more packages to put with your codebase)
  • You have more Octopus Steps to play with (OctoPack can do them, but again it means more MSBuild parameters)
  • Troubleshooting is easier (Build shows all of the output in the console)

Benefits of vNext Builds with Octopus Deploy Integration

There are others who have blogged about the benefits of moving to the next version of Build, so I won’t go into the particulars. Suffice it to say that replicating your XAML build in the new Build system is extremely beneficial, and coupled with the Octopus Deploy Integration extension it can be even more powerful (https://octopus.com/vsts).

  • Control – You own it, you build it
  • TaskGroups (combined step tasks for Build templates)
  • Build Templates (cloneability, reusability)
  • Cleaner Visual Studio Solution
  • Centralized build/package/deploy processes
  • Decoupling of dependencies to installed packages

Even if you still want to use OctoPack, you can; you just have to take your old MSBuild arguments and paste them into the Visual Studio Build step’s MSBuild Arguments parameter. Under the covers it still does a lot of the same work, but using the Octopus Deploy Package and Push steps allows for cleaner output logs during a build, package, and push. One other benefit not mentioned previously is that with the new Build system you don’t have to check in the packages folder that contains the OctoPack information (the NuGet restore step takes care of that).
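For illustration, here is a hedged sketch of what those pasted arguments typically look like – the solution name, version, server URL, and API key are placeholders, not real values – shown as a local msbuild invocation (assuming msbuild is on your path):

```powershell
# Everything after the solution name is what goes into the Visual Studio
# Build step's "MSBuild Arguments" box.
msbuild .\MySolution.sln /p:Configuration=Release `
    /p:RunOctoPack=true `
    /p:OctoPackPackageVersion=1.0.0.1 `
    /p:OctoPackPublishPackageToHttp=https://your-octopus-server/nuget/packages `
    /p:OctoPackPublishApiKey=API-PLACEHOLDER
```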

Gotchas

There are probably some, especially around references, but I can’t think of any that would hinder overall usage.

Finally

The better approach – keeping a clean project/solution and letting TFS/VSTS do all of the work – just makes sense. Cluttering your project with excess complexity can make sustainable, reusable codebases hard to achieve. Coupled with that, over-complicating your solution can cause other developers on your team to have trouble building the solution locally.

Starting From Scratch–Building Your Project Right Part 1

Prologue

Let’s say, for the sake of argument, that you just uploaded your codebase into TFS 2017/VSTS. What do you do? XAML builds are deprecated, and the new Build system seems daunting. Again, what do you do? You can watch videos and read tons of different Stack Overflow articles and blog posts on how to… yet there are still lingering questions about how to “just” start from scratch.

In my experience, teams from around the world, before they used a version control system, would happily code on their local machines, perform a local build (where it just worked), and then, using the power of Visual Studio, “Publish” their fixes directly to their remote environments. For one, that is poor ALM practice, and two, there is just no way to track any of the changes – breaking or fixing – typically described with screenshots in an email thread. Overall it was the Dark Ages: a chaotic time of teams trying to be Agile/Scrum/etc., yet having no anchor or starting point to leap from toward a proper “single source of truth” build, let alone a deployment.

With “DevOps” and “Shift Left” being the buzzwords of the day, it can be hard to get your team into the correct cultural mindset of ownership and control. In this article we will dig into the new MS Build system as if you were a newly minted Developer Lead with the appropriate administrative rights in your TFS/VSTS project.

Here is a basic scenario and then we will work through how you can build your project right with the new Build system in VSTS/TFS2017.

Assumptions

  • You are just now using source control
  • Your builds consisted of developers performing builds on their local machines
  • You may have had a build server in the past, but you have either upgraded or the build templates you previously used are incompatible
  • You have post-build scripting that moves files around to make your codebase viable for a manual or even scripted deployment

Scenario

This is bare-bones. Your particular scenario may differ, and I will discuss advanced builds and deployments in the future.

Your codebase has been freshly checked into TFS/VSTS source control. Whether you chose Git or TFVC doesn’t matter; the techniques below apply to both version control types. You need to perform a simple build whose output is packaged and ready to be consumed by either Release Management or Octopus Deploy. So where do we start?

How it’s done

Now, let’s walk through it.

First, log into your TFS or VSTS site or account and click on the Build & Release tab. It should be blank if you are first starting out.


You will see four tabs: Mine, All Definitions, Queued, and XAML. You will not need the XAML tab; it has been deprecated, you cannot edit your old XAML builds, and they are not compatible with the new Build system, so we will not be discussing that tab.

  • Mine – represents the build definitions that you, the logged-in user, have created


  • All Definitions – represents all the build definitions that have been created for your solutions, branches, etc


  • Queued – represents all builds that are currently queued, running, or completed


Now let’s create a new build definition for our solution. In our case we are going to do something basic and build up from there.


Then we get a popup offering a number of generic templates to choose from. For now we will choose the Visual Studio build, because it is what most developers are acquainted with, and it is in keeping with my idea of starting with the basics and building from there.


Click Next and you will see another page where you choose your repository and other settings for your build.


The great thing about this page in the Build Definition wizard is that you can make preliminary adjustments to your build before it is created. For instance, you can choose the type of repository (remote or local) you wish to point at, select the branch you wish to build from, and determine whether or not you wish to have Continuous Integration (build after every check-in). You can also choose a default agent for your build that has the capabilities you require to build your solution.


Make your selections and then click Create. You will now have an “unsaved” generic build definition that you will need to continue editing. But first, it is wise to save your creation, so that if you have to leave the page you don’t have to start all over again.


Now that we have saved the build definition, we can go into each of the build steps to, one, make adjustments, and two, add more build steps as necessary to perform our build.

On the left part of the page you should notice that there are six steps, with descriptions for each. Let’s talk about them in a little detail.

First you have the NuGet restore.

What this little step does is go into your solution, check for the packages.config file, and restore or install all of the packages each time you build. This ensures you have everything restored from NuGet.org (or another repo of your choice) without having to check in your packages folder as you would for your old XAML builds. With this step you can control a lot of what happens during the installation or restoration of your packages.
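Roughly speaking – the solution name here is a placeholder – the step performs the equivalent of running NuGet’s own restore on the agent:

```powershell
# What the NuGet restore build step does on your behalf, approximately.
nuget.exe restore .\MySolution.sln -PackagesDirectory .\packages
```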

Next is the Build Solution step. This is a powerful step, very similar to the older XAML process. One item of note: I have found some overlapping functionality between build steps; while it does not hinder a build, it does give you further options for streamlining your process.


This will look a little familiar to those with a XAML build background, but it is a lot cleaner and can be adjusted to suit your needs. MSBuild arguments still work, but in some cases you don’t have to add switches like “/m:1” when you can simply check the Advanced -> Build in Parallel checkbox.

The Test Assemblies step is standard with the Visual Studio build template. It uses Visual Studio Test to run your tests, ensuring that you have testing completed and code coverage for the widgets on your dashboard.

From a testing perspective, this is a powerful step that lets you perform advanced testing from the build without the extra tools you might assume you need for reporting and so on.

Also note that when you see an info icon in a step, it will show either a hover tip or open a new tab with more details about that step for you to leverage and understand.

Next is the Publish Symbols Path step. It is a way to use your .pdb and .obj files to help debug your application on a machine other than the one where your sources were built.


Next is the Copy Files to: step. This step takes the output from your sources’ bin directory and copies those files or folders to your artifacts directory. The artifacts directory is a cleaned directory, which ensures you have just the right objects that need to be packaged or deployed later. Again, the info icon provides options and parameters you can use to make your build more coherent and robust.

So after the previous five steps, what happens? The build agent publishes your outputs to a drop folder, typically within $(Build.ArtifactStagingDirectory). Where might that be? It is located under your build agent’s working directory, with a path similar to “E:\TFSBuildAgent\vsts-agent-onprem\_work\5”. Inside that folder are four folders: “a” (artifacts), “b” (binaries/build output), “s” (sources), and TestResults (obvious). See the Variables link under Resources below for more information.
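If you want to see those folders for yourself, a quick sketch: add a PowerShell script step to the build and echo the agent’s built-in variables, which surface to scripts as environment variables:

```powershell
# Each Build.* variable is exposed to script steps as an environment variable.
Write-Host "Sources:   $env:BUILD_SOURCESDIRECTORY"          # the "s" folder
Write-Host "Binaries:  $env:BUILD_BINARIESDIRECTORY"         # the "b" folder
Write-Host "Artifacts: $env:BUILD_ARTIFACTSTAGINGDIRECTORY"  # the "a" folder
```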


Once we have made our initial edits to the build definition, we can click Save and then queue a new build.


And our build succeeded.


From our log output we know that the artifacts were “published”, and a little digging later we can confirm that the artifacts are in the location where they were intended to be.


and finally…

Don’t be afraid of the build

As the heading suggests, when you are first starting out with TFS/VSTS Build, don’t be afraid if your build fails or doesn’t immediately perform the functions you are expecting. I cannot stress this enough. I know that when I am testing builds, I spend a lot of time troubleshooting failed builds before I finally get a successful (green) build.

By working through your issues with a methodical approach, you can be successful. You shouldn’t feel pressured to get it right the first time, and you shouldn’t feel like you are going to get fired for having a broken build. Work through it, understand the build process, then communicate that understanding to your leadership.

If you are developing a web application or Web API, you may have to consult this how-to. But if you are building a single application (exe, service, etc.), this approach can get you started in the right direction.

Resources

Here is another good resource about build settings and build tokens: https://www.visualstudio.com/en-us/docs/build/define/general

https://www.visualstudio.com/en-us/docs/build/steps/build/visual-studio-build – further details about the Visual Studio Build step. A word of caution: if you are using a TFSBuild.proj-type file, you will not be able to use the new Build system, because that file contains tasks and targets supported only by XAML builds.

https://www.visualstudio.com/en-us/docs/build/define/variables – this is for getting the different built-in and custom variables to work in your favor.