Auto-Deploy SSIS packages with VSTS and Octopus Deploy

For a bit now, I have been describing to teams how they can deploy their SSIS packages to their SQL instances automatically, but I never “really” had to do it myself. Until now. The client I am currently working with just went through a very large migration from on-premises TFS to VSTS, and with that came the need to change how their reporting structure was both generated and consumed. Before the migration, we were generating ALM reports through direct SQL queries against the on-premises TFS Warehouse. During the migration and onward, we had to build an SSIS package that would pull information directly from the VSTS REST API and populate a local data warehouse, which let us accomplish nearly the same thing. Since I wanted to keep my solution and project clean – no OctoPack (which isn’t supported, although you can hack your project to include it), and no nuspec file or other extraneous items in my solution like a .nuget folder – I opted to see if I could perform an auto-deploy using just VSTS and Octopus Deploy. Here is how I did it, while keeping my SSIS solution pristine and leveraging the available marketplace tasks and Octopus Deploy step templates.

Setup

For this blog post I am using VSTS and Octopus Deploy version 3.13.10. Caution – your mileage may vary depending on whether you use TFS 2017 or a lower version of Octopus Deploy, but the technique is sound.

Assumptions:

  • You are using Octopus Deploy’s internal NuGet repository. I can’t claim this process will work with external repos, but I leave that as a challenge to readers.
  • You have a rudimentary understanding of VSTS and Octopus Deploy and how they each work independently.

Here is what you will need:

  • VSTS account where your SSIS solution is located – this example uses Git, but TFVC will work just as well
  • SSIS Marketplace Task – installed on your VSTS or TFS instance
  • Octopus Deploy Integration Tasks – installed on your VSTS or TFS instance
  • Octopus Deploy Project with servers already added to your environments
  • An Octopus Deploy API key for pushing packages from VSTS to the Octopus Deploy built-in repository (How to generate api key)
  • The Octopus Tentacles running on your destination servers should run as a service account that has dbo access to your database and local admin rights on the server to execute Remote PowerShell (a quick sanity check for this is sketched just below).
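Before wiring up the pipeline, it can save time to verify that Remote PowerShell actually works against each destination server. A minimal sanity check, assuming a hypothetical server name of SSIS-DEV01, looks like this:

    # Hypothetical destination server - substitute your own.
    $server = "SSIS-DEV01"

    # Verify the WinRM listener is up (the Remote PowerShell prerequisite).
    Test-WSMan -ComputerName $server

    # Confirm the identity you land as; run this as the Tentacle's service
    # account to prove it has the access described above.
    Invoke-Command -ComputerName $server -ScriptBlock { whoami }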

VSTS Component

Building out the CI build is straightforward, except that with SSIS and SSRS projects the standard MSBuild and .NET Desktop build templates do not work as required. As Abel SquidHead points out, you can perform an SSIS build using command-line tasks within a build definition, but wouldn’t it be more elegant to have a build task that did all the work for you, save for a couple of parameters?

So to start, we need to add an Octopus Deploy Service Endpoint connection.


Click OK and the new endpoint will appear in your list of service endpoints.


Now click on the Build and Release tab and create a new Empty build definition.

Then add the SSIS Build, Copy Files, Publish Artifact, Octopus Deploy Pack, and Octopus Deploy Push Packages tasks.

Note: You add tasks by clicking on the “+” symbol in the Phase 1 section.


Edit the Build SSIS task to point to your solution and project if necessary.


Next, edit the Copy Files task.


Next, edit the Publish Artifact task. You want to make sure that you are pushing your artifacts to a drop location where the next step can pick them up, without all of the other bits and pieces that comprise your source tree. Example settings for both tasks are sketched below.

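As a rough illustration, here is how these two tasks might be configured for an SSIS build. The patterns and paths are examples only and assume the SSIS build drops .ispac files under each project’s bin folder:

    Copy Files
      Source Folder:   $(Build.SourcesDirectory)
      Contents:        **\bin\**\*.ispac
      Target Folder:   $(Build.ArtifactStagingDirectory)\drop

    Publish Build Artifacts
      Path to Publish: $(Build.ArtifactStagingDirectory)\drop
      Artifact Name:   drop
      Artifact Type:   Server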

Next, edit your Package step. There is a lot to do here, but I will try to explain it a little by the numbers. I won’t describe the Advanced areas; I will leave those for the reader to experiment with. What this step does is simulate the same effort that a nuspec file performs when added to your project.


  1. Represents the name of the package you want. This name should be a constant for any given package. If you need to include other information, revision numbers, etc., you can use the NuGet or Advanced options to provide the proper metadata for your package.
  2. Represents the package format. Until recently, Octopus Deploy’s internal NuGet repository only handled .nupkg files; it now lets you generate either .nupkg or .zip files for use in your Octopus deployment process.
  3. This is the version number that will be appended to the package ID described above. This should be an auto-incrementing number so that each build produces a new version. Important: this version number does not typically represent the internal versioning of your package’s content. It is used as a metadata field to ensure that you are deploying the latest output from your build.
  4. Represents the location where your build output resides; in this case it will be in the bin directory of the solution.
  5. Represents where you want to put your zipped or packed file contents; in this case the “artifactstagingdirectory\drop” folder.

Now we move on to the last task in our process: edit the Push Packages to Octopus task. This is pretty straightforward. Just select the Octopus Deploy connection that you created earlier in this blog and the location where you put your .nupkg or .zip file. (In this example I am using the .nupkg format.)
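For the curious, these two tasks are thin wrappers over the Octopus CLI. Roughly, they amount to the following calls; the package ID, paths, server URL, and API key are all placeholders:

    # Pack the drop folder into a NuGet package (what the Package task does).
    & .\octo.exe pack `
        --id "MySsisWarehouse" `
        --version "1.0.$env:BUILD_BUILDID" `
        --basePath "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\drop" `
        --outFolder "$env:BUILD_ARTIFACTSTAGINGDIRECTORY" `
        --format nupkg

    # Push the package to the Octopus built-in repository (the Push task).
    & .\octo.exe push `
        --package "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\MySsisWarehouse.1.0.$env:BUILD_BUILDID.nupkg" `
        --server "https://octopus.example.com" `
        --apiKey $env:OctopusApiKey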


Now click the Save and Queue drop-down and select Save. Give a reason if you wish, and then select the Triggers menu item. Select Continuous Integration, use the master branch, and then click Save again.


You are now done!

It is wise to test out your build first. One piece of advice: disable the Octopus Pack and Push steps at first, so you can verify that the build itself does what it is supposed to do, then enable each step after each successful build.

Octopus Deploy Component

There are a couple of things here that need to be accomplished for this section of the blog.

First, log in to your Octopus Deploy instance and navigate to your project.


You first want to select Process to define how your deployment is going to work. The options that follow depend on you creating this process flow.

In the beginning your project and process will be blank, and Octopus offers suggestions to help get your process started. There are a lot of options to choose from, but for this exercise we are going to add the Deploy Package step and the Deploy ISPAC SSIS project from a package step, found under the SQL Server grouping. Once you do that, both steps will appear in your process.


I will not dig too deep into the mechanics, but I will let you know what you need to change in order to make this deployment process clean and smooth.

First, let’s edit the Deploy Package step. You will need to scroll down to the bottom of the page. Select the Configure Features link and uncheck everything except for “Custom Installation Directory”. Click Apply.


Select the package name you chose when setting up VSTS; typing a few letters will autocomplete the package you are looking for. In the custom installation directory, choose the directory where you want the package to unpack itself. For more control, you can use variables to change this location per server or environment (an example follows). Click the Save button at the bottom of the page and we move on to the next step.
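For instance, binding the custom installation directory to one of Octopus Deploy’s system variables keeps the unpack path unique per environment. The root path here is purely illustrative:

    Custom Installation Directory: D:\Octopus\SSIS\#{Octopus.Environment.Name}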

Edit the Deploy ISPAC SSIS project step. This step incorporates a community step template that I have used before in other SSIS implementations. Since we are sticking with the basics here, you should see something similar to the following once you have filled out the appropriate fields.


Once we have filled in the appropriate values, we select the step that gives us the location where the .ispac file will reside. Now we will finish this up with one of the following two options.

Note: These fields can be edited depending on your particular requirements. Coordinate with your DBAs to better understand how the different fields work within your implementation.
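The community step template does the heavy lifting here, but conceptually a silent .ispac deployment boils down to a single call to ISDeploymentWizard.exe, which ships with SQL Server. This is only a sketch; the paths, server, catalog folder, and project names are placeholders:

    # Deploy an .ispac to the SSIS catalog silently. All names are placeholders.
    & "ISDeploymentWizard.exe" /Silent `
        /SourcePath:"D:\Octopus\SSIS\MySsisWarehouse\MySsisWarehouse.ispac" `
        /DestinationServer:"SQLDEV01" `
        /DestinationPath:"/SSISDB/DataWarehouse/MySsisWarehouse"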

Option 1 (Automatic Release Creation):

Option 1 revolves around Automatic Release Creation. Octopus Deploy has an observer mechanism that watches for new packages being pushed into its internal repository.

First, select Triggers. This is where we arrange for a successful VSTS build to result in a new package being observed and then automatically deployed to our initial environment.


This section was moved here from a previous version of Octopus Deploy, but the mechanism is the same. First check the Create a release checkbox and then select the package step that will be used when a new package is pushed to the internal repository. Click Save and you are almost ready to go.

Warning: Automatic Release Creation will not be available if you do not have a Deploy Package step in your deployment process. In our particular case this doesn’t matter, but it is worth noting.

Option 2 (Lifecycles):

Lifecycles are yet another way for you to:

  1. Define quality gates for your deployments
  2. Gain greater control over keeping your deployments from reaching Production prematurely

By default, when a project is created, the Default Lifecycle is pretty wide open. So what we are going to do is give ourselves a gated way to ensure that we automatically deploy to our lowest environment and then require manual or other types of approval for higher environments.

First select Library and then Lifecycles in the left-hand menu.


Select the Default Lifecycle so that we can edit it. The purpose here is to put gateways in place for our different environments. Before we edit it, the default will have no Phases defined. Phases allow deployers to pick the environments where a particular release starts and ends. We know we have just one project, and it has to start in dev and then move upwards from there after various checks.

Note: Once you have selected a Lifecycle for your project, each new release and deployment will use that Lifecycle. A release that has already been created and deployed keeps the Lifecycle it was created with; changing the project’s Lifecycle only affects future releases.

First we are going to add our first phase; this establishes where the starting point in the deployment process is going to be. Click Add Phase and you will see something like this:


We select our initial environment, select the automatically deploy radio button, click Add, give our phase a meaningful name, and then click Save. We could go on to add other phases to this lifecycle, but for the sake of brevity this is enough to get things done.

I won’t go into any in-depth discussion here, but there are a lot of options and other items you can use within these different phases to help promote your code from one environment to another.

Now that you have saved everything, you are ready to deploy your SSIS code from source control out to your servers, all the way to production. You may find that you have to do some tweaking with your servers or with the variables used in your deployment process. By practicing your deployments in a lower, throwaway environment, you have a greater chance of success when applying the same process to all of your downstream environments.

Note: If this doesn’t work right out of the gate, don’t get frustrated! The VSTS build logs and the Octopus Deploy task logs provide a wealth of troubleshooting information on the way to your ultimate success.

Finally

This exercise has shown that you can get your SSIS package built, packaged, pushed, and deployed to your lowest environment for immediate testing. The steps shown here are pretty straightforward and can be adjusted to suit your needs. To be honest, it took me longer to write and screen-capture this post than it took to build out the pipeline.

Happy SSIS Deployments!


Dev*Ops and You

Something hit me recently about the DevOps ecosystem. What finally tipped me over the proverbial cliff was a client meeting where an internal client was saying that what we were providing – DevOps as a Service (DaaS) – was not going to help them in their DevOps Transformation Program. Internally, I was reeling.

They mentioned DevSecOps, because they had searched and found it on the web, as a possible direction for their needs. To be frank, I was flabbergasted. I wanted to ask: “So normal DevOps is different from DevSecOps, or any other Dev<insert IT tech acronym here>Ops?” Really?!

I continued to listen quietly for the rest of the meeting while my mind raced over the consequences of isolating all the other departments that have a stake within the umbrella of DevOps. Because of a single acronym that changed only a few letters, things could get ugly and isolation could set in.

Egad! I realized that the overuse of the term DevOps has had unintended consequences: self-imposed isolation, and increased skepticism about how DevOps can transform your current state of ineffectual development and deployment strategies into smooth development and deployment pipelines.

As a tech lead and “multi-hat” contributor, I have to be aware of all the interactions and integration points my software has. As an architect, I have to understand the inner workings of the many different systems in my ecosystem so that each performs its function meaningfully. In this way, I can confidently deploy my application quickly and operate it without roadblocks or impediments. Without that vision and foresight, being so focused on just getting a work item off my backlog, I can easily fail to see:

  1. What will it take to maintain the application, short term and long term?
  2. How performant is it?
  3. How secure is it?
  4. Can it scale?
  5. etc.

So now, after all of that, I would like to help define a new term: Dev*Ops.

Dev*Ops (DevStarOps)

  1. An inclusive, multi-discipline team that performs the actions of developing, securing, maintaining, and operating an application for its entire lifecycle through the use of process and products for the continuous enablement of added value to the customer.
  2. The concept of involving various disciplines within an organization to add value from start to finish, ensuring a more robust interaction between people, process, and product.

Let me explain. I find, through my interactions with current clients, would-be/should-be clients, and colleagues, that the term DevOps is mangled, misused, and truly misunderstood. People can’t seem to think beyond the concatenated, trimmed words of Development and Operations. To them – don’t get me wrong – there are only two teams involved in the creation, care, and feeding of an application. What they fail to realize is that other teams and entities belong in the same ecosystem and deserve the same stakeholder access. The narrowly focused teams, e.g., InfoSec or Infrastructure, are in most organizations very siloed – complete fiefdoms with their own rules, processes, and governance. Each of these little kingdoms plays a role in the overall care and feeding of an application during its lifecycle – sometimes to the chagrin of the project lead, who is most likely trying to meet an impossible deadline handed down from leadership. Involving these other important stakeholders is always in the application’s best interest and leads to success. Yet more often than not, their input is pushed aside as a “we’ll get to it if we have time”.

What first comes to mind, with this lack of involvement, is Security. Given the recent fallout from Equifax’s security breach, the way security bubbled up from obscurity to OMG-hair-on-fire just goes to show how little involvement there is when architecting and developing an application for consumption. The typical response is “we will fix it if it becomes a problem”. That response leads to low morale, long hours, and missed time off when the problem finally exhibits itself.

So how do we combat this kind of manufactured isolationism, whether it is self-imposed or company culture (“this is how we have always done it – why change now?”)? How do we involve the teams that are crucial to our application’s success without alienating them, when involving them feels like an inconvenience that adds to the list of things to check before deploying to production?

From what I have seen in the past, developers don’t really like security – or much else outside the happy path – because it hampers their “creativity”, their timelines, their tasks, and the “just get it done, POC-type” attitude. Project and development leads and administrators see only the budget and the timeline. Project managers see only tasks, burndown charts, capacity, and deadlines. Not one of them is asking the questions about capacity, security, longevity, and the host of other things required to operate, maintain, and secure the application.

The very short answer is that for each project that gets started, the PM/Architect/Lead needs to involve the wide range of other teams within the IT kingdom. In some cases, for “brownfield” projects, this technique can act as a disruption that leads to automation, innovation, and morale boosts.

Then, after you identify your team – collected from all disciplines in your org – you get them all in the same room! And you keep them there until everyone agrees on what the architecture, security, performance, and deployment approach is going to be for that particular project/application. You develop the timeline for when each discipline will become more solidly involved with the core team, and you execute that plan. Otherwise, you will end up with a lot of finger-pointing and a development/deployment mess that equates to lost revenue and increased overtime from bad deployments, late infrastructure creation, and bad pipeline management.

An additional point: everyone needs to stop being snowflakes. You are a team; you all have to work together, no matter what, for the sake of the great idea called an application. Business politics aside, involve everyone who is going to either have a point of interest to pursue or a negative aspect to raise, as quickly as possible. You can’t, in the last hour of the last day, realize that your application is going to epically fail due to closed-mindedness. Inclusion of other disciplines in your design, development, and execution is absolutely critical!

As an engineer, I understand the need, nay, the intense desire, to produce a solution from the beginning, from my own isolated viewpoint. That mentality suffers from a lack of perspective, and you just have to resist it.

Otherwise you end up in the minutiae of trying to solve a problem by yourself, in your isolated bubble. When that happens, timelines slip, budgets overrun, and morale drops, and then there is a rush to just get something out, no matter what. As a lead, you MUST stop, think, and involve! As a worker bee, you MUST stop, think, and involve! Otherwise, you could push aside the very people who could save your timelines and your sanity.

As a tech lead, dev lead, sec lead… whether you are a lead or a worker bee, become involved, and think about how your team and your project are going to exist in your environment/ecosystem! Don’t think about the solution; think about the people and processes that can enable your solution. Don’t treat the timeline as an absolute! And try not to just “pitch your pull request over the fence” and forget about it! I have seen a lot of teams that do this and then expect you, as the “DevOps” engineer, to do the rest of the work for them, primarily because they just want to develop and not take ownership of the project and process that make them successful.

To this end, I would like all teams (present and future) to think about:

  1. Who the application affects. Who are the stakeholders? Customers? Internal teams? Involve them throughout your process.
  2. What the app provides as a service or otherwise. Who consumes it? Is it secure? Involve those who are SMEs in the field.
  3. See the pipeline. It doesn’t have to be pretty in the beginning, but it has to work. The pipeline starts with the idea and ends with deployment into production.
  4. Communicate, communicate, communicate, and involve! This is the key point: it’s not about you! It’s about everyone and what they can provide to your success!


Fixing the Cost of Poor Quality Deployments

I had a hallway conversation with a colleague the other day about the benefits of Continuous Deployment and how they could translate into discussion points with clients about the role DevOps can play within an organization. During the course of our conversation, I spun up a side thread and started thinking about how one could approach the topic from where most businesses live and breathe: their bottom line. Approaching a potential or even a current client with a DevOps solution should really center on their cost savings. Time (speed to release) is also a factor, but I am going to approach this from the tack of cost. Depending on the customer, the cost of deploying matters more than speed; if you lead with cost and then tie it to speed, it is a double win.

Calculating the costs of deployments

Most organizations claim that they have an automated deployment process, but it usually involves an individual running scripts on the destination server by hand, or copying folders, files, and even configurations to the destination.

How do you calculate those costs? Think in man-hours: each employee has an hourly cost, so each man-hour of a deployment has a cost. That gives us the formulas:

Cost per man-hour (CPMH) = one person’s average hourly rate

Combined CPMH = the sum of the hourly rates of everyone involved

Cost of Deployment (COD) = combined CPMH × the number of hours the deployment takes

For example, take 4 personnel who each have a $100.00 hourly rate, on a deployment that takes 4 hours:

Combined CPMH = $100.00 × 4 = $400/hour

COD = $400/hour × 4 hours = $1,600

So, simplistically, any one deployment costs $1,600 and consumes 16 man-hours. Those man-hours are never recoverable, and the personnel who could be doing things of more value are burning money babysitting a deployment.

Donovan Brown is often quoted as saying, “Never send a human to do a computer’s work”. He is absolutely right, and here’s why. Humans are by nature fallible; mistakes will be made even with rigid checklists and stringent policies. Steps in a repeatable process that a human follows can and will be overlooked, and overlooking a step, or even typing one wrong character, can lead to errors in a deployment. So if you look at how employing a DevOps solution can benefit an organization, you have to calculate your cost savings in the man-hours recovered by using automated tools to perform the repeatable, basic function of a deployment.

Cost of Poor Quality Deployments (COPQ-D)

Above I spoke about the basic cost of performing a deployment, but what about the cost of poor-quality deployments? Those are the deployments that fail, where any number of developers and other personnel are immediately pulled onto a bridge call to troubleshoot or provide other support for the failed deployment.

For example: we have a failed deployment that takes 8 personnel a total of 6 hours to troubleshoot, diagnose, and determine a fix.

Person 1 = $100.00/hour; Person 2 = $125.00/hour; Persons 3–8 = $75.00/hour each

Combined CPMH = 100 + 125 + (75 × 6) = $675/hour

Personnel Cost (PC) = combined CPMH × hours

PC = $675 × 6 = $4,050

In the end the failed deployment cost the company $4,050 in wages alone, and coupled with that you also have to count the business lost while the site was unavailable. Another factor that adds to the overall cost of a bad deployment is the cost of rolling your application back to a known state.
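To make the arithmetic above repeatable, here is a small hypothetical helper that computes both examples; the function name and shape are mine, not part of any tool:

    # Cost of a deployment: everyone's hourly rate, times how long it runs.
    function Get-DeploymentCost {
        param(
            [double[]] $HourlyRates,  # one entry per person on the call
            [double]   $Hours         # wall-clock duration of the deployment
        )
        $combinedCpmh = ($HourlyRates | Measure-Object -Sum).Sum
        [pscustomobject]@{
            CombinedCPMH = $combinedCpmh
            TotalCost    = $combinedCpmh * $Hours
        }
    }

    # Routine deployment: 4 people at $100/hour for 4 hours -> $1,600
    Get-DeploymentCost -HourlyRates 100,100,100,100 -Hours 4

    # Failed deployment: $100 + $125 + six at $75, for 6 hours -> $4,050
    Get-DeploymentCost -HourlyRates 100,125,75,75,75,75,75,75 -Hours 6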

These are just small examples, but think back to some of your own failed deployments, where tens of people sat on a call, most of them idle, while one or two individuals shared their screens and others fought for verbal control of the situation.

Fixing it

The fix is really about being defensive in your ability to build, deploy, and test your compiled code before it ever reaches your production environment. To get there, your pipeline should push a single version of the code through your lower test environments, with increased testing at each stage.


Next is building confidence in your build (branching and versioning). Ensuring that your branches are short-lived and that your main branch is the single source of truth is critical to your team’s success. Consistent versioning is also important here: every build that is performed should get its own version, even for very minor changes to your codebase.

Accurately describing your environments (Dev, Stage, Prod), the server(s) that reside in each, and the roles each server performs is another critical step for success. Knowing what each server does, and carving up your packages to focus on those roles, is one more important part of maintaining consistency and accuracy in your deployments. Configuration values for each environment are key and should be kept out of your package codebase. Extracting those values to a central location, keyed by machine role and environment, adds to the consistency of your pipeline and allows for quick changes if the values change at any point for any reason.

Finally, now that you have an accurate build from a single source of truth and you have your environments and roles established, it comes down to creating a deployment process that can be used across all of your environments with no variation. Here is where the cost savings come in.

  1. You have a consistent build and deployment process
  2. You have a consistent auditable trail from changeset/gitcommit to deployment of your code
  3. You have near immediate feedback from your team and are able to ensure faster delivery times for fixes or changes

If you have a full CI/CD pipeline for your codebase, the cost of deployments becomes trivial, because you are no longer paying humans, who invariably make mistakes, to run them. If a normal deployment before automation took 4 hours with 4 people at an average of $100/man-hour, that is $1,600 for an ideal manual deployment. Now, if a developer just changes some code and checks it in, the automation takes over, and the developer is free to work on other items while the deployment occurs.

For the sake of argument, if a normal pre-automation deployment took 16 man-hours and the servers can now do the same work, consistently and unattended, in 20 compute-minutes, the comparison is:

16 × 60 = 960 man-minutes

960 man-minutes / 20 compute-minutes = a 48× improvement, recovering roughly 98% of the human time

With that kind of speed-up and cost reduction, you have the ability to add more features, kill more bugs, and generally put the latest information in front of your testers or consumers.

Managing DevOps as a Service (Part 1)

DevOps Challenges

One of the bigger challenges I see in the DevOps space arises when you attempt to initiate DevOps as a Service (DaaS).

My Definition: DevOps as a Service (DaaS)

  1. Performing the actions that allow one or more teams to deploy their codebase(s) to multiple environments (one or more servers) and maintaining those servers within specifications.
  2. Maintaining multiple environments across globally distributed teams with a follow-the-sun approach.
  3. Allowing for pass-through of code and deployments while maintaining the infrastructure that enables the teams.

Typically, what you see with small and medium-sized teams is that one or two members of the development team are involved in the operations space as well. This works for smaller teams that have intimate knowledge of their environments, codebase, and configurations. But what if you had to support hundreds of teams? Different codebases? Different time zones? Different environments? Hundreds, if not thousands, of servers spanning on-premises, AWS, and Azure? What do you do, and what can you do?

A friend of mine, Damian Brady, wrote about DevOps as a culture. It truly is a culture: you have to take ownership of the work that you are doing. All too often, I see teams develop and test locally and assume that their codebase is going to work in the various environments they deploy to – a “throw it over the fence” style of development and deployment. This is problematic when the developers assume that the operations team knows and understands the nuances of their codebase.

Categories of Development and Operations


Let’s define, in simplistic terms, what I mean by the categories of Development and Operations. Development is the design and coding of an application or API to be consumed by internal or external parties. Operations is the continuous maintenance of that application or API once the development portion of the job is complete and the team that originally wrote the code has rolled off onto other projects or clients. In a nutshell, this “old school” approach has left many a maintainer performing development work that really isn’t their forte. Developers are so focused on their timeline and their code that they sometimes don’t understand, or don’t want to understand, the underlying infrastructure that serves up their application. This leads to the dreaded WOMM effect and its consequences.

My definition of WOMM is “Works on My Machine”: the most basic build, on the local machine, that “just works”. Press F5 and it runs. From my point of view, it is also a bacterial or viral disease that most, if not all, development teams contract at some point in their effort to quickly get code out the door for consumption.

Many consulting teams in my experience fall into the Development Only category because of a few reasons:

  • Contract is Fixed Price or has limited scope
  • Operations, like Documentation, is the first to be cut from a contract to make the cost of work attractive
  • Business developers sometimes fail to grasp the long-term effects of a short-term project and scope the deal too narrowly, in the hope that the contract, once signed, will lead to further work. Sometimes this bet pays off, but most of the time it does not.

Operations teams, on the other hand, sometimes have aversions of their own towards developers, for a few reasons:

  • Developers inherently create bad code that breaks functioning applications and infrastructure – emphasis on the infrastructure. 
  • Developers don’t really understand infrastructure or how their codebase can work on one set of servers and yet not work at all in Production

Merging of Development and Operations

Now that we have the basics, we can look at the merging of development and operations. In my experience, I have seen a few larger organizations where there is such a large divide between developers and operations that members of the operations team end up becoming the de facto hidden developers, bug fixers, and testers for the development teams. I have fallen into that category myself when I started branching out from development into the operations space.

In my opinion, it is very good for development and operations teams to have a rudimentary understanding of each other’s space. Yes, there are purists out there who would contend otherwise, but in order to be an effective team with a strong application for consumption, this is a critical piece of DevOps. Not only should developers understand the infrastructure that will host their application, they should be developing on it as well. There are edge cases to this argument, but in general it is good practice. Likewise, it is good for the operations team to understand the developer space, and to be given a crash course when the code becomes unconfigurable and broken.

DevOps is about continuous ownership, from planning the application, to IDE development, to source control, to build, and finally to deployment. It ensures that developers understand that putting their code into production is not the responsibility of a select few Wizards of Oz; the developers and the operations team go on the journey together to ensure a smooth and proper deployment.

Benefits of using Octopus Deploy Integration Tasks in vNext Builds

If you are like me and you use Octopus Deploy for deploying your projects, it can be a challenge to keep your OctoPack version updated. Restoring the OctoPack NuGet package each time you build with VSTS or TFS is also problematic: a TFS XAML build will fail because it cannot find the associated OctoPack targets and DLLs. A workaround is to check in the packages folder that contains the OctoPack targets file and associated DLLs alongside your codebase, but that is messy and leaves artifacts behind whenever you upgrade OctoPack to a newer version.

Another detractor to leveraging OctoPack in your solution is that, sometime around version 3.4, a number of breaking changes were introduced that caused NuGet push issues. The teams I work with on a daily basis are still on an older version of Octopus, and when they installed the latest version of the Tentacle, issues started to crop up, along with failed builds and failed pushes to the internal NuGet repository.

So what are the benefits? According to the marketplace documentation, you can still use your OctoPack MSBuild arguments, but that doesn’t really help your older XAML builds.

Benefits of Octopus Deploy Integration

Some of the larger benefits when using the Octopus Deploy integration steps are:

  • You are always up-to-date
  • You have a clean project (no more packages to put with your codebase)
  • You have more Octopus Steps to play with (OctoPack can do them, but again it means more MSBuild parameters)
  • Troubleshooting is easier (Build shows all of the output in the console)

Benefits of vNext Builds with Octopus Deploy Integration

There are others who have blogged about the benefits of moving to the next version of Build, so I won’t go into the particulars. Suffice it to say that replicating your XAML build in the new Build system is extremely beneficial, and coupled with the Octopus Deploy Integration extension it can be even more powerful (https://octopus.com/vsts).

  • Control – You own it, you build it
  • TaskGroups (combined step tasks for Build templates)
  • Build Templates (cloneability, reusability)
  • Cleaner Visual Studio Solution
  • Centralized build/package/deploy processes
  • Decoupling of dependencies to installed packages

Even if you still want to use OctoPack, you can; you just have to take your old MSBuild arguments and paste them into the MSBuild Arguments parameter of the Visual Studio Build step. Under the covers it still does much of the same work, but using the Octopus Deploy Package and Push steps instead yields cleaner output logs during build, package, and push. One other benefit not mentioned previously is that with the new Build system you don’t have to check in the packages folder that contains the OctoPack bits (the NuGet restore step takes care of that).
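If you do stick with OctoPack, the arguments you paste into that box look something like the following. The version format, feed URL, and API key are placeholders; the property names are OctoPack’s documented MSBuild properties:

    /p:RunOctoPack=true
    /p:OctoPackPackageVersion=1.0.$(Build.BuildId)
    /p:OctoPackPublishPackageToHttp=https://octopus.example.com/nuget/packages
    /p:OctoPackPublishApiKey=API-XXXXXXXXXXXXXXXX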

Gotchas

There are probably some, especially around references, but I can’t think of any that would hinder overall usage.

Finally

The approach of keeping a clean project/solution and letting TFS/VSTS do all of the work just makes sense. Cluttering your project with excess complexity makes sustainable, reusable codebases hard to achieve. On top of that, an overcomplicated solution can leave other developers on your team struggling to build it locally.

Starting From Scratch–Building Your Project Right Part 1

Prologue

Let’s say, for the sake of argument, that you just uploaded your codebase into TFS 2017/VSTS. What do you do? XAML builds are deprecated and the new Build system seems daunting. Again, what do you do? You can watch videos and read tons of different Stack Overflow articles and blog posts on how to do it, yet there are still lingering questions on how to “just” start from scratch.

In my experience, teams from around the world, before they used a version control system, would happily code on their local machines, perform a local build (where it just worked), and then use the power of Visual Studio to “Publish” their fixes directly to their remote environments. For one, that is poor ALM practice, and for two, there is just no way to track any of the changes that either broke or fixed things, typically described with screenshots in an email thread. Overall it was the Dark Ages: a chaotic time in which teams were trying to be Agile-y/Scrummy/etc., yet really had no anchor or starting point to leap from toward a proper “single source of truth” build, let alone a deployment.

With “DevOps” and “Shift Left” being the buzzwords of the day, it can be hard to get your team into the correct cultural mindset of ownership and control. In this article we will dig into the new Build system as if you were a newly minted development lead with the appropriate administrative rights in your TFS/VSTS project.

Here is a basic scenario and then we will work through how you can build your project right with the new Build system in VSTS/TFS2017.

Assumptions

  • You are just now using source control
  • Your builds consisted of developers performing builds on their local machines
  • You may have had a build server in the past, but you have either upgraded or the build templates that you previously used are incompatible
  • You have post-build scripting that moves files around to make your codebase viable for a manual or even scripted deployment

Scenario

This is bare-bones. Your particular scenario may differ; I will discuss advanced builds and deployments in the future.

Your codebase has been freshly checked into TFS/VSTS source control. Whether you chose Git or TFVC doesn’t matter; the techniques below apply to both version control types. You need to perform a simple build whose output is packaged and ready to be consumed by either Release Management or Octopus Deploy. So where do we start?

How it’s done

Now let’s walk through it.

First, log in to your TFS or VSTS account and click on the Build & Release tab. It should be blank if you are first starting out; the environment I am describing is from previous posts that build on more and more advanced concepts.


You will see that there are four items in the view, called Mine, All Definitions, Queued, and XAML. You will not need the XAML tab; it has been deprecated, you cannot edit your old XAML builds there, and they are not compatible with the new Build system, so we will not be discussing that tab.

  • Mine – represents the build definitions that you, the logged-in user, have created


  • All Definitions – represents all the build definitions that have been created for your solutions, branches, etc


  • Queued – represents the builds that are currently queued, running, or completed


Now let’s create a new build definition for our solution. In our case we are going to do something basic and build up from there.


Then we get a popup that offers a number of generic templates to choose from. For now we will just choose the Visual Studio build template. The reason is that it is what most developers are acquainted with, and it is in keeping with my idea of starting with the basics and building from there.


Click Next and you will see another page where you choose your repository and other settings for your build.


The great thing about this page in the build definition wizard is that you can make preliminary adjustments to your build before it is created. For instance, you can choose the type (remote or local) of repository you wish to point at, select the branch you wish to build from, and determine whether or not you wish to have Continuous Integration (build after every check-in). You can also choose a default agent for your build that has the capabilities required to build your solution.


Make your selections and then click Create. Now you will have an “unsaved” generic build definition that you will need to continue editing. But first, it is wise to save your creation, so that if you have to leave the page you don’t have to start all over again.


Now that we have saved the build definition, we can go into each of the build steps and, one, make adjustments and, two, add more build steps as necessary to perform our build.

On the left part of the page you should have noticed that there are six steps, with descriptions for each. Let’s talk about them in a little detail.

First you have the NuGet restore.

What this little step does is go into your solution, check for the packages.config file, and restore or install all of the packages each time you build. This ensures that you have everything restored from NuGet.org, or another repo of your choice, without having to check in your packages folder like you had to for your old XAML builds. With this step you can control a lot of what happens with the installation or restoration of your packages.
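Under the covers this is essentially the NuGet CLI at work; conceptually, the step runs something like the following (the solution name is a placeholder):

    nuget.exe restore MySolution.sln -NonInteractive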

Next is the Build Solution step. This is a powerful step, very similar to the older XAML process. One item of note: I have found some overlapping functionality between build steps; while that won’t hinder a build, it does give you further options for streamlining your process.


This will look a little familiar to those with a XAML build background, but it is a lot cleaner and can be adjusted to suit your needs. MSBuild arguments still work, but in some cases you don’t have to add switches like “/m:1” when you can simply check the Advanced -> Build in Parallel checkbox.
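Conceptually, this step boils down to an MSBuild invocation along these lines; the solution name and configuration are placeholders:

    msbuild.exe MySolution.sln /p:Configuration=Release /p:Platform="Any CPU" /m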

The Test Assemblies step is standard in the Visual Studio Build template. It uses Visual Studio Test to run your tests, ensuring that you have testing completed and code coverage numbers for the widgets on your dashboard.

From a testing perspective this is a powerful step that lets you perform advanced testing from the build, without a lot of the extra tools you would assume you need for reporting and the like.

Also note that each step has an information icon; it will show either a hover tip or open a new tab with more details about the step for you to leverage and understand.

Next is the Publish Symbols Path step. It is a way for you to use your .pdb and .obj files to help debug your application on a machine other than the one where your sources were built.


Next is the Copy Files to: step. This step takes the output from your sources’ bin directory and copies those files or folders to your artifacts directory. The artifacts directory is a cleaned directory, which ensures that your artifacts folder contains just the objects that need to be packaged or deployed later. Again, each step’s options and parameters give you plenty to work with to make your build more coherent and robust.

So after the previous five steps, what happens? The build agent publishes your outputs to a drop folder. Typically this drop folder is within $(Build.ArtifactStagingDirectory). Where might that be? It is located under your build agent’s working directory, in a file path similar to “E:\TFSBuildAgent\vsts-agent-onprem\_work\5”. Inside that folder are four folders: “a” (artifacts), “b” (build output), “s” (sources), and TestResults (obvious). Look at Resources -> Variables for more information.
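If you want to see these paths for yourself, drop a PowerShell script step into the definition and echo the predefined variables. Each build variable is exposed to scripts as an environment variable with the dots replaced by underscores:

    # Print the standard agent folder variables for this build.
    Write-Host "Sources:   $env:BUILD_SOURCESDIRECTORY"
    Write-Host "Binaries:  $env:BUILD_BINARIESDIRECTORY"
    Write-Host "Artifacts: $env:BUILD_ARTIFACTSTAGINGDIRECTORY"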


Once we have made our initial edits to our build definition, we can click Save and then queue a new build.


and our build succeeded.


From our log output we know that the artifacts were “published”, and a little digging later we can confirm that the artifacts are in the location where they were intended to be.


and finally…

Don’t be afraid of the build

As the heading suggests, when you are first starting out with TFS/VSTS Build, don’t be afraid if your build fails or doesn’t immediately perform the functions you are expecting. I cannot stress this enough. I know that when I am testing builds, I spend a lot of time troubleshooting failed builds before I finally get a successful (green) build.

By working through your issues and using a methodical approach you can be successful. You shouldn’t feel pressured to get it right the first time. You don’t have to feel like you are going to get fired for having a broken build. Work through it, understand the build process, then communicate the understanding to your leadership.

If you are developing a web application or web API you may have to consult this how-to. But if you are building a single application (exe, service, etc.), then this approach can get you started in the right direction.

Resources

Here is another good resource about build settings and build tokens: https://www.visualstudio.com/en-us/docs/build/define/general

https://www.visualstudio.com/en-us/docs/build/steps/build/visual-studio-build has further details about the Visual Studio Build step. A word of caution: if you are using a TFSBuild.proj type of file, you will not be able to use the new Build system with it, because it contains tasks and targets that are supported only for XAML builds.

https://www.visualstudio.com/en-us/docs/build/define/variables is for getting the different built-in and custom variables to work in your favor.

CI/CD with TFS/VSTS and Octopus Deploy

Building off my previous posts (here and here) about building multiple projects within a solution and troubleshooting the packaging of your projects, we will now delve deeper into the Continuous Integration and Continuous Deployment pipeline that can ease the tensions of deployment aversion. Using two very powerful tools in your arsenal, you can streamline your deployment process and ensure that you are getting the biggest bang for your buck. Up until now we have identified our pipeline components and we have successfully, albeit manually, deployed our sample codebase to our destination servers. The question now is: can I streamline my pipeline using good ALM practices?

Assumptions

  1. You have your source code checked in
  2. Your build definition is in place
  3. You either have a partial Octopus Deploy project or similar

Setting up for success

Let’s double-check our Octopus Deploy project. We will need to ensure that we have certain steps, configuration values, and settings in place.

Let’s start by cloning our already-working deployment project.


Save our newly cloned project. We will get back to this Octopus project in a moment, but first we need to do some things with our TFS/VSTS build definition. In this example, we will expand upon our previous build definition by cloning it. Why? First, we know that it works; second, it is easier to take away unneeded steps than to add and configure steps from memory. Cloning a build definition is very simple and straightforward.

Since our build definition is only queued manually, we need to make some changes to its Triggers to allow it to perform the CI portion at the beginning of our CI/CD pipeline. Edit the newly cloned build definition and choose the Triggers tab. For now we will just check Continuous Integration and leave the defaults.


Now, each time a change to the codebase is checked in, this build definition will start and finish with our packages being pushed to Octopus Deploy. One of the many benefits of triggers is that you can have multiple build definitions that focus on specific branches, and with CI/CD working in your favor you can get your code to testers and others for faster consumption and approval.

 

CI/CD Option 1

You are probably asking yourself why there is an Option 1 and an Option 2. The fact of the matter is that with the Octopus Deploy extension you can basically perform all of the steps of packaging, creating a new release, and finally queuing up a deployment from the build. Essentially, TFS/VSTS becomes a one-stop shop for building and orchestrating your build and deployment pipeline.

How to perform Option 1

Going back to our CI/CD build definition, we need to add a couple more steps. Click Add build step and add both Create Octopus Release and Deploy Octopus Release.


Now we see that within our single build definition we can build, test, package, publish, create, and finally deploy a release. You may have noticed there is also a Promote Octopus Release task; it can be used in other build/deployment scenarios. For now, though, we are just going to use create and deploy.


Editing our Create Octopus Release step, we see a lot of options that we need to fill out. We fill out the Octopus Deploy server connection endpoint and select the project name. For now we are going to use the Octopus-defined version number, but later we can change this to whatever SemVer-compatible version number we want to display. Finally, in the Deployment section, we select the initial environment we would like to deploy to. After that, because I like verbose logging for troubleshooting, select Show Deployment Progress. This ensures that we have a complete set of logs for our end-to-end test.


One thing to note here is that you can add a lot of data to your release notes. This is great for troubleshooting, or even for attempting to roll back to an earlier series of changesets. As an aside, these two tasks map onto Octopus CLI commands, as sketched below.
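Conceptually, the build is running something like the Octopus CLI call below; the project name, version, environment, server URL, and API key are placeholders. Note that --deployto is what makes the release deploy immediately, which is why a single call covers both tasks:

    & .\octo.exe create-release `
        --project "MyWebApp" `
        --version "1.0.$env:BUILD_BUILDID" `
        --deployto "Dev" `
        --server "https://octopus.example.com" `
        --apiKey $env:OctopusApiKey `
        --progress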

We are going to kick off a manual build/release first, just to make sure that we have everything in the correct order.


and the end result without doing anything special with Octopus Deploy.


and now our Web application is still functioning.


Let’s go and make a change now in our source code, check it in and then see what happens.

Made some changes to our index page.


checked it in


Synced with the master (because this is a Git repo)


Build kicked off


Build finished and deployed


Now to validate against the working web application.

It worked. I made the change and it was “automatically” reflected on the site. Keep in mind this is a simple way to perform this function; if you needed a rolling deployment, or some other type, it would make sense to adjust your Octopus Deploy process to accommodate those requirements.


CI/CD Option 2

Option 2 involves some manipulation of Octopus Deploy rather than TFS/VSTS. If we keep what we already have in TFS/VSTS from my previous post, we need to work with Octopus Deploy a little more. So, following the same example as above, we will clone both the TFS/VSTS build definition and the Octopus Deploy project, to ensure that we have a clean break from what worked to what could work.

How to perform Option 2

There are a couple of items that we need to accomplish here. Because we have already cloned both the basic build definition and the Octopus project, we will work with those cloned items in this mini exercise. For now we are not going to touch the cloned build definition, because it is generic: it just performs the build and then packages and publishes the package(s) to our Octopus Deploy internal NuGet repository.

First we need to change how our releases get created. In previous versions of Octopus Deploy this setting lived elsewhere; it is now under Project Triggers. Select Project Triggers and then choose to create a release when a package is pushed.


Then we need to change the behavior of our Lifecycle (more information is here). The important point is that if we wish to avoid modifying our build definition to deploy automatically, we can do it within our Lifecycle instead. One key note: the default lifecycle allows deployments to any of the environments you have created. While this is not ideal, there is a way for us to create and shape a new Lifecycle to fit our needs. For our cloned project we will need to “Choose a different lifecycle”.


A special note here: once applied, the new lifecycle will only be used by your deployment project going forward. Any previously created releases will use the previously assigned lifecycle. This can be confusing, but it makes sense, because Octopus Deploy takes a snapshot of all of the variables and the state of your project for each deployment.

Let’s go ahead and create a new lifecycle:

Click Library -> Lifecycles -> Add lifecycle


Then we need to give the new lifecycle a name and add a Phase. This is important because the phase describes what we wish to do at each stage of our deployment.


Adding a phase means describing what happens to your deployment at that stage. Click Add Phase.


Give it a name and then click Add environment.


Initially you will see an empty drop-down box and a couple of radio buttons. Select your dev (or other) environment that you wish to deploy to first, then select “Deploy automatically to this environment as soon as the release enters this phase”, and click Add.


Click Save and we are almost there.

You can add multiple phases, and multiple environments to each phase, depending on your needs. For this exercise we are laying out the CI/CD basics that you can build upon for your future development and deployment efforts.

Now that I have created that lifecycle, I need to do a couple more things to get everything wired up and in sync.

First, I need to go back to my project and change the lifecycle from Default to CI-CD Option2.


The next thing we have to do is change the Automatic Release Creation setting under Project Triggers. A warning here: if you are dealing with an external NuGet or other repository, this option will most likely not work for you. There may be an option 3 workaround, but I haven’t gotten to that state yet.

Click on Project Triggers, check “Create a release when a package is pushed to the built-in…”, make sure you select the deployment step that contains your package, and then click Save.

Let’s test this out.


Our build was successful.


And it just works now.


Outcomes

From Option 1 it is clear that you can just use the Create Octopus Release step (with a deploy-to environment set) and not depend on both the Create Release and Deploy Release tasks. Either option is viable, and they can be used interchangeably depending on your needs. You can have build definitions that work on specific branches of code for CI/CD, and then use the promote step to push packages to your higher environments. The technique above, while focused on a simple web application, can be used for more complex deployment scenarios and applications.