Auto-Deploy SSIS packages with VSTS and Octopus Deploy

For a while now, I have been describing to teams how they can deploy their SSIS packages to their SQL instances automatically, but I never “really” had to do it myself. Until now. The client I am currently working with just went through a very large migration from on-premises TFS to VSTS, and with it came the need to change how their reporting was both generated and consumed. Before the migration, we generated ALM reports through direct SQL queries against the on-premises TFS warehouse. During the migration and onward, we had to build an SSIS package that pulls information directly from the VSTS REST API and populates a local data warehouse, which lets us accomplish nearly the same thing. Since I wanted to keep my solution and project clean – no OctoPack (which isn’t supported, although you can hack your project to include it), and no nuspec file or extraneous items in my solution like a .nuget folder – I opted to see if I could perform an auto-deploy using just VSTS and Octopus Deploy. Here is how I did it, while keeping my SSIS solution pristine and leveraging the available marketplace tasks and Octopus Deploy step templates.
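As an aside, if you have never poked at the VSTS REST API, here is a minimal PowerShell sketch of the flavor of call such a package makes. The account name, endpoint, and PAT environment variable are illustrative assumptions, not my client’s actual setup.

```powershell
# Illustrative only: a basic VSTS REST API call authenticated with a
# personal access token (PAT). Account name and env variable are hypothetical.
$pat     = $env:VSTS_PAT
$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $token" }

# List the team projects for the account.
$uri      = "https://myaccount.visualstudio.com/DefaultCollection/_apis/projects?api-version=2.0"
$projects = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
$projects.value | Select-Object name, id
```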

Setup

For this blog post I am using VSTS and Octopus Deploy version 3.13.10. Caution – your mileage may vary if you are using TFS 2017 or an older version of Octopus Deploy, but the technique is sound.

Assumptions:

  • You are using the built-in NuGet repository of Octopus Deploy. I can’t claim this process will work with external repositories, but I leave that as a challenge to other readers.
  • You have a rudimentary understanding of VSTS and Octopus Deploy and how they each work independently.

Here is what you will need:

  • VSTS account where your SSIS solution is located – this example uses Git, but TFVC will work just as well
  • SSIS Marketplace Task – installed on your VSTS or TFS instance
  • Octopus Deploy Integration Tasks – installed on your VSTS or TFS instance
  • Octopus Deploy Project with servers already added to your environments
  • An Octopus Deploy API key for pushing packages from VSTS to the Octopus Deploy built-in repository (How to generate an API key)
  • The Octopus Tentacles on your destination servers should run as a service account that has dbo access to your database and local admin rights on the server to execute Remote PowerShell.

VSTS Component

Building out the CI build is straightforward, except that for SSIS and SSRS projects, the standard MSBuild and .NET Desktop build templates do not work as required. As Abel SquidHead points out, you can perform an SSIS build using command-line tasks within a build definition, but wouldn’t it be more elegant to have a build task that did all the work for you, save for a couple of parameters?
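For the curious, here is roughly what that command-line approach looks like; a sketch assuming a Visual Studio 2015 install path and a solution name of my own invention:

```powershell
# Hypothetical command-line SSIS build - the work the SSIS Build task wraps.
# devenv.com builds the SSIS project and emits the .ispac into its bin folder.
# Path assumes Visual Studio 2015; adjust for your installed version.
& "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.com" `
    "MySsisSolution.sln" /Rebuild "Development"
```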

So to start, we need to add an Octopus Deploy Service Endpoint connection.


Fill in your Octopus server URL and API key, click OK, and the new endpoint should appear in your list of service endpoints.


Now click on the Build and Release tab and create a new Empty Build definition.

Then add the SSIS Build task, Copy Files, Publish Artifact, Octopus Deploy Pack, and Octopus Deploy Push Packages tasks.

Note: You add tasks by clicking on the “+” symbol in the Phase 1 section.


Edit the Build SSIS task to point to your solution and project if necessary.


Next, edit the Copy Files task so that it copies your build output (the contents of the bin directory) over to the artifact staging directory.


Next, edit the Publish Artifact task. You want to make sure you are publishing your artifacts to a drop location where the next steps can pick them up, without all of the other bits and pieces that comprise your source tree.


Next, edit your Package step. There is a lot to do here, but I will try to explain it a little by the numbers; I won’t describe the Advanced areas, and will leave those for the reader to experiment with. What this step does is simulate the same work that a nuspec file performs when added to your project. A sketch of the equivalent command-line call follows the list.


  1. The name of the package you want. This name should be a constant for any given package. If you need to include other information – revision numbers, etc. – you can use the NuGet or Advanced options to provide the proper metadata for your package.
  2. The package format. Until recently, Octopus Deploy’s internal NuGet repository only handled nupkg files; it now allows you to generate either nupkg or zip files for use in your Octopus deployment process.
  3. The version number that will be appended to the package ID described above. This should be an auto-incrementing number so that each build produces a new version. Important: this version number does not typically represent the internal versioning of your package’s content. It is a metadata field that ensures you deploy the latest output from your build.
  4. The location where your build output resides – in this case, the bin directory of the solution.
  5. Where you want to put your zipped or nupkg’d file contents – in this case, the “artifactstagingdirectory\drop” folder.
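To make the mapping concrete, here is the sketch I promised: a rough equivalent of what the Package task assembles, expressed as an octo.exe pack call. The package ID, version scheme, and paths are my illustrative assumptions:

```powershell
# Roughly the octo.exe call the Package task assembles (illustrative values).
# Assumes octo.exe is on the PATH; VSTS exposes its pipeline variables to
# scripts as environment variables such as BUILD_BUILDID.
& octo pack --id="MySsisPackage" `
    --format=nupkg `
    --version="1.0.$env:BUILD_BUILDID" `
    --basePath="$env:BUILD_SOURCESDIRECTORY\bin" `
    --outFolder="$env:BUILD_ARTIFACTSTAGINGDIRECTORY\drop"
```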

Now we move on to the last task in our build process: edit the Push Packages to Octopus task. This is pretty straightforward – just select the Octopus Deploy connection that you created earlier in this blog and the location of your nupkg or zip file (in this example I am using the nupkg format).
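Behind the scenes, the push is equivalent to something like the following octo.exe call; the server URL, package file name, and API key variable are placeholders:

```powershell
# The push expressed as the equivalent octo.exe call (placeholder server,
# package file name, and API key environment variable).
& octo push --package="$env:BUILD_ARTIFACTSTAGINGDIRECTORY\drop\MySsisPackage.1.0.123.nupkg" `
    --server="https://octopus.example.com" `
    --apiKey="$env:OCTOPUS_API_KEY"
```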


Now click the Save and Queue drop-down and select Save. Give a comment if you wish, then select the Triggers menu item for our process. Enable continuous integration, use the master branch, and then click Save again.


You are now done!

It is wise to test out your build first. One piece of advice: disable the Octopus Pack and Push steps to start, so that you can verify the build itself does what it is supposed to do, and then enable each step after each successful build.

Octopus Deploy Component

There are a couple of things here that need to be accomplished for this section of the blog.

First, log in to your Octopus Deploy instance and navigate to your project.


You first want to select Process to define how your deployment is going to work; the options that follow depend on the process flow you create.

In the beginning, your project and process will be blank. There are a lot of step templates to choose from, but for this exercise we are going to choose Deploy Package and the Deploy ISPAC SSIS project from a package step, found under the SQL Server grouping. Once you do that, your process will contain those two steps.


I will not dig too deep into the mechanics, but I will let you know what you need to change in order to make this deployment process clean and smooth.

First, let’s edit the Deploy Package step. Scroll down to the bottom of the page, select the Configure Features link, and uncheck everything except “Custom Installation Directory”. Click Apply.


Select the package name you chose when setting up VSTS – typing a few letters will autocomplete the package you are looking for. In the Custom Installation Directory, select a directory where you want the package unpacked. For more control, you can use variables to change this location based on server or environment, as in the sketch below. Click the Save button at the bottom of the page, and we move on to the next step.
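As a sketch of that variable approach (the variable name and paths here are my own convention, not a requirement): define a project variable in Octopus, scope its values per environment, and bind the Custom Installation Directory field to it with the #{...} binding syntax.

```powershell
# Hypothetical variable setup: define a project variable named SsisDeployPath,
# scoped per environment (e.g., Dev = D:\Deploy\SSIS\Dev), then set the
# Custom Installation Directory field to: #{SsisDeployPath}
# Inside any custom script step, the same variable is available like so:
$deployPath = $OctopusParameters["SsisDeployPath"]
Write-Host "Package will be unpacked to $deployPath"
```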

Edit the Deploy ISPAC SSIS project step. This step incorporates a community step template that I have used before in other SSIS implementations. Since we are sticking with the basics here, just fill out the appropriate fields.


Once we have filled in the appropriate values and selected the step that tells us where the .ispac file will be located, we can finish up with either one of the following options.

Note: The fields can be edited depending on what your particular requirements are. Coordinate with your DBAs to better understand how these different fields work within your particular implementation.
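For the curious, deploying an .ispac to the SSIS catalog boils down to a few calls against the SQL Server management assemblies. The community step handles this (plus options and error handling) for you, but here is a rough sketch of the idea, with server, folder, and project names invented for illustration:

```powershell
# A rough sketch of an .ispac deployment to the SSIS catalog (illustrative names).
# Load the Integration Services management assembly from the GAC
# (LoadWithPartialName is deprecated but convenient for a sketch).
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.IntegrationServices") | Out-Null

$conn    = New-Object System.Data.SqlClient.SqlConnection "Data Source=localhost;Initial Catalog=master;Integrated Security=SSPI;"
$ssis    = New-Object Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices $conn
$catalog = $ssis.Catalogs["SSISDB"]

# Create the target catalog folder if it does not already exist.
$folder = $catalog.Folders["Reporting"]
if ($null -eq $folder) {
    $folder = New-Object Microsoft.SqlServer.Management.IntegrationServices.CatalogFolder -ArgumentList $catalog, "Reporting", "SSIS packages deployed from Octopus"
    $folder.Create()
}

# Deploy the project from the unpacked .ispac file.
[byte[]]$projectFile = [System.IO.File]::ReadAllBytes("D:\Deploy\SSIS\MyProject.ispac")
$folder.DeployProject("MyProject", $projectFile) | Out-Null
```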

Option 1 (Automatic Release Creation):

Option 1 revolves around Automatic Release Creation. Octopus Deploy has an observer mechanism that watches for new packages being pushed into its internal repository.

First, select Triggers. This is where we set things up so that when a new build from VSTS completes successfully, Octopus sees the new package come in and automatically deploys it to our initial environment.


This section was moved here from a previous version of Octopus Deploy, but the mechanism is the same. First, check the Create a release checkbox, then select the package step that should trigger a release when a new package is pushed to the internal repository. Click Save and you are almost ready to go.

Warning: Automatic Release Creation will not be available if you do not have a Package Deploy step template in your deployment process. In our particular case this doesn’t matter, but it is worth noting.

Option 2 (Lifecycles):

Lifecycles are yet another way for you to:

  1. Define quality gates for your deployments
  2. Gain greater control over keeping your deployments from reaching Production prematurely

By default, when a project is created, the Default Lifecycle is pretty wide open. So what we are going to do is give ourselves a gated way to ensure that we automatically deploy to our lowest environment and then require manual or other types of approval for higher environments.

First, select Library and then Lifecycles from the left-hand menu.


Select the Default Lifecycle so that we can edit it. The purpose here is to put the gateways in place for our different environments. Before editing, the default will have no Phases defined. Phases allow deployers to choose the environments where a particular process starts and ends. We have just one project, and it will have to start in dev and then move upward from there after various checks.

Note: Once you have selected a Lifecycle for your project, each new release and deployment will use that Lifecycle. You cannot change the Lifecycle once you have created and deployed a release.

First, we are going to add our first phase; this establishes where our starting point in the deployment process is going to be. Click Add Phase.


Select your initial environment, select the automatically deploy radio button, click Add, give your phase a meaningful name, and then click Save. You can then go on to add other phases to this lifecycle, but for the sake of brevity, this is enough to get things done.

I won’t go into any in-depth discussion here, but there are a lot of options and other items you can use within these different phases to help promote your code from one environment to another.

Now that you have saved everything, you are ready to deploy your SSIS code from source control out to your servers, all the way to production. You may find that you have to do some tweaking with your servers or with the variables you use in your deployment process. By practicing your deployments in a lower, throwaway environment, you have a greater chance of successfully applying the same process to all of your downstream environments.

Note: If this doesn’t work right out of the gate, don’t get frustrated! The VSTS build logs and the Octopus Deploy task logs provide a wealth of troubleshooting information for your ultimate success.

Finally

This exercise has shown that you can get your SSIS package built, packaged, pushed, and deployed to your lowest environment for immediate testing. The steps shown here are pretty straightforward and can be adjusted to suit your needs. To be honest, it took me longer to write and screen-capture this post than it took to build out the pipeline.

Happy SSIS Deployments!


Dev*Ops and You

Something hit me recently about the DevOps ecosystem. What finally tipped me over the proverbial cliff was a client meeting where an internal client was saying that what we were providing – DevOps as a Service (DaaS) – was not going to help them in their DevOps Transformation Program. Internally, I was incredulous.

They mentioned DevSecOps, because they searched and found it on the web, as a possible direction for their needs. To be frank, I was flabbergasted. I wanted to ask: “So normal DevOps is different from DevSecOps or another Dev<insert IT tech acronym here>Ops?” Really?!?!

I continued to listen quietly for the rest of the meeting while my mind raced over the consequences of isolating all the other departments that hold a stake under the umbrella of DevOps. Because of a single acronym that changed only a few letters, things could get ugly and isolation could set in.

Egad! I realized that the overuse of the term DevOps has had unintended consequences: imposed isolation, and increased skepticism about whether DevOps can transform your current state of ineffectual development and deployment strategies into smooth development/deployment pipelines.

As a tech lead and “multi-hat” contributor, I have to be aware of all the interactions and integration points my software has. As an architect, I have to understand the inner workings of the many different systems in my ecosystem so that each performs its function meaningfully. In this way, I can confidently deploy my application quickly and operate it without roadblocks or impediments. Without that vision and foresight, and by being so focused on just getting a work item off my backlog, I can easily fail to see:

  1. What will it take to maintain the application; short term, long term?
  2. How performant is it?
  3. How secure is it?
  4. Can it scale?
  5. etc.

So now, after all of that, I would like to help define a new term: Dev*Ops.

Dev*Ops (DevStarOps)

  1. An inclusive, multi-discipline team that performs the actions of developing, securing, maintaining, and operating an application for its entire lifecycle through the use of process and products for the continuous enablement of added value to the customer.
  2. The concept of involving various disciplines within an organization to add value from start to finish, ensuring a more robust interaction between people, process, and product.

Let me explain. In my interactions with current clients, would-be/should-be clients, and colleagues, I find that the term DevOps is a mangled, misused, and truly misunderstood acronym. People can’t seem to think beyond the concatenated, trimmed words Development and Operations. To them – don’t get me wrong – there are only two teams that exist in the creation, care, and feeding of an application. What they fail to realize is that other teams and entities should belong to the same ecosystem and have the same stakeholder access. Narrowly focused teams such as InfoSec, Security, and Infrastructure are, in most organizations, very siloed – complete fiefdoms with their own rules, processes, and governance. Each one of these little kingdoms plays a role in the overall care and feeding of an application during its lifecycle – sometimes to the chagrin of the project lead, who is most likely trying to meet an impossible deadline handed down from leadership. Involving these other important stakeholders is always in the application’s best interest and leads to success. More often than not, though, their input is pushed to the side as a “we’ll get to it if we have time”.

What first comes to mind, with this lack of involvement, is security. Given the recent fallout from Equifax’s security breach, security’s leap from obscurity to OMG-hair-on-fire urgency just goes to show how little involvement there is when architecting and developing an application for consumption. The typical response is “we will fix it if it becomes a problem”. That response leads to low morale, long hours, and missed time off when the problem exhibits itself.

So how do we combat this type of created isolationism, whether it is self-imposed or company culture (“this is how we have always done it – why change now?”)? How do we involve the teams that are crucial to our application’s success without alienating them, when involving them is seen as an inconvenience that adds to the list of things to check before deploying to production?

From what I have seen in the past, developers don’t really like security – or much else (happy path, anyone?) – because it hampers their “creativity”, their timelines, their tasks, and the “just get it done…POC-type” attitude. Project and development leads and administrators can see only the budget and the timeline. Project managers can see only tasks, burndown charts, over-capacity, and deadlines. Not one of them is asking the questions about capacity, security, longevity, and the host of other concerns required to operate, maintain, and secure the application.

The very short answer is that for each project that gets started, the PM/architect/lead needs to involve the wide range of other teams within the IT kingdom. In some cases, for “brownfield” projects, this technique can act as a disruption that leads to automation, innovation, and morale boosts.

Then, after you identify your team – collected from all disciplines in your org – you get them all in the same room! And you keep them all in the same room until everyone is in agreement on the approach to architecture, security, performance, and deployment for that particular project/application. You develop the timeline for when each discipline will become more solidly involved with the core team, and you execute that plan. Otherwise, you will end up with a lot of finger-pointing and a development/deployment mess that equates to lost revenue and increased overtime from bad deployments, late infrastructure creation, and bad pipeline management.

An additional point to the above: everyone needs to stop being snowflakes. You are a team; you all have to work together, no matter what, for the sake of that great idea called an application. Business politics aside, involve as quickly as possible everyone who is going to have either a point of interest to pursue or a concern to raise about the project. You can’t, in the last hour of the last day, realize that your application is going to fail epically due to closed-mindedness. Inclusion of other disciplines in your design, development, and execution is absolutely critical!

As an engineer, I understand the need – nay, the intense desire – to produce a solution from the beginning, from my own isolated viewpoint. That mentality is hindered by a lack of perspective, and you know what you have to do: stop, think, and involve.

Otherwise, you end up in the minutiae of trying to solve a problem by yourself in your isolated bubble. When that happens, timelines hit you, budgets overrun, and morale drops. Then there is a rush to just get something out, no matter what. As a lead, you MUST stop, think, and involve! As a worker bee, you MUST stop, think, and involve! Otherwise, you could push aside the very people who could save your timelines and your sanity.

Tech lead, dev lead, sec lead… whether you are a lead or a worker bee, become involved, and think about how your team and your project are going to exist in your environment/ecosystem! Don’t think only about the solution; think about the people and processes that can enable your solution. Don’t treat the timeline as an absolute! And try not to just “pitch your pull request over the fence” and forget about it! I have seen a lot of teams do this and expect you, as the “DevOps” engineer, to do the rest of the work for them – primarily because they just want to develop and not take ownership of the project and process that would make them successful.

To this end, I would like all teams (present and future) to think about:

  1. Who the application affects. Who are the stakeholders? Customers? Internal teams? Involve them throughout your process.
  2. What the app provides as a service or otherwise. Who consumes it? Is it secure? Involve those who are SMEs in the field.
  3. See the pipeline – it doesn’t have to be pretty in the beginning, but it has to work. The pipeline starts with the idea and ends with deployment into production.
  4. Communicate, communicate, communicate, and involve! This is the key point: it’s not about you! It’s about everyone and what they can provide to your success!

I will end with this: stop, think, and involve!

Managing DevOps as a Service (Part 1)

DevOps Challenges

One of the bigger challenges I see in the DevOps space comes when you attempt to initiate DevOps as a Service (DaaS).

My Definition: DevOps as a Service (DaaS)

  1. Performing the actions that allow one or more teams to deploy one or more codebases to multiple environments (one or more servers) and maintain those servers within specification.
  2. Maintaining multiple environments across globally distributed teams with a follow-the-sun approach.
  3. Allowing code to pass straight through to deployment while maintaining the infrastructure that enables those teams.

Typically, what you see with small and medium-sized teams is that one or two members of the development team are involved in the operations space as well. This works for smaller teams that have intimate knowledge of their environments, code base, and configurations. But what if you had to support hundreds of teams? Different code bases? Different time zones? Different environments? Hundreds, if not thousands, of servers spanning on-premises, AWS, and Azure? What do you do, and what can you do?

A friend of mine, Damian Brady, wrote about DevOps as a culture. It truly is a culture: you have to have ownership of the work that you are doing. All too often, I see teams develop and test locally and assume that their codebase is going to work in the various environments they deploy to – a “throw it over the fence” style of development and deployment. This becomes problematic when the developers assume that the operations team knows and understands the nuances of their codebase.

Categories of Development and Operations


Let’s define, in simplistic terms, what I mean by the categories of Development and Operations. Development is really the design and coding of an application or API to be consumed by internal or external parties. Operations is the continuous maintenance of that application or API once the development portion of the job is complete and the team that originally developed the code has rolled off onto other projects or clients. In a nutshell, this “old school” approach has left many a maintainer performing development work that really isn’t their forte. Developers are so focused on their timeline and their code that they sometimes don’t understand, or don’t want to understand, the underlying infrastructure that serves up their application. This leads to the dreaded WOMM effect and its consequences.

My definition of WOMM is “Works on My Machine”: the most basic build on the local machine that “just works”. Press F5 and it runs. It is also, from my point of view, a bacterial or viral disease that most, if not all, development teams contract at some point in their effort to quickly get code developed for consumption.

Many consulting teams, in my experience, fall into the Development Only category for a few reasons:

  • Contract is Fixed Price or has limited scope
  • Operations, like Documentation, is the first to be cut from a contract to make the cost of work attractive
  • Business developers sometimes fail to grasp the long-term effects of a short-term project and scope the deal too narrowly, in the hope that the contract, once signed, will lead to further work. Sometimes this bet pays off; more often than not, it does not.

Operations teams, on the other hand, sometimes have unfounded aversions toward developers, for a few reasons of their own:

  • Developers inherently create bad code that breaks functioning applications and infrastructure – emphasis on the infrastructure. 
  • Developers don’t really understand infrastructure or how their codebase can work on one set of servers and yet not work at all in Production

Merging of Development and Operations

Now that we have the basics, we can look at the merging of development and operations. In my experience, I have seen a few larger organizations where there is such a large divide between developers and operations that members of the operations team end up becoming the de facto hidden developer, bug fixer, and tester for the development teams. I have fallen into that category myself in the past, when I started branching out from development into the operations space.

In my opinion, it is very good for developers and operations teams to have a rudimentary understanding of each other’s space. Yes, there are purists out there who would contend otherwise, but in order to be an effective team with a strong application for consumption, this is a critical piece of DevOps. Not only should developers understand the infrastructure that will host their application, they should be developing on it as well. There are edge cases to this argument, but in general it is a good practice. Likewise, it is good for the operations team to understand the developer space, and to be given a crash course when the code becomes unconfigurable and broken.

DevOps is about continuous ownership: from planning the application, to IDE development, to source control, to build, and finally to deployment. It ensures that developers understand that putting their code into production is not the responsibility of a select few Wizards of Oz; rather, the developers and the operations team go along the journey together to ensure a smooth and proper deployment.