SSIS Package Execution Failure – When Google fails you :(

We have all done it: we get some sort of cryptic error, and the first place we go to search is Google, hoping the sea of message boards and blog posts will save the day. Many times this is a saving grace and we find the solution; other times it sends us down a rabbit hole with no results. Last night my team experienced exactly that case, so let me tell you what happened.

It all started with an error

As we started our Disaster Recovery testing someone executed one of the SSIS packages and got the following error.

Could not load package “xxxxxxxxxx.dtsx” because of error 0x80131534.

Description: The package failed to load due to error 0x80131534 “(null)”. This occurs when CPackage::LoadFromXML fails.

Source: {9934C764-F98C-49E4-85E6-7062B6589435}

Now go ahead and Google this error six ways from Sunday and you will get all types of information regarding the wrong version of dtexec, a failed upgrade of a project in VSTS, and issues with .NET versions and script tasks. This sent my team off and running: looking into all of those areas, calling other teams to start looking into server configuration issues, and asking development teams if they had a bad upgrade.

Let me pause here for a second, because while Google searches can be great, in so many cases they have caused us to forget basic troubleshooting skills; we all too often take Google as gospel and forget to do anything on our own. Why do I say this? Because that is exactly what happened here. When this issue was escalated to me, I took a step back and looked to see whether all packages were failing or just some. The answer, in this case, was just some. Troubleshooting 101: when you have partial failures like this, look at what is different. In this case there was one difference between how the successful packages and the failed packages were running: the Windows user was different. So I changed the user, ran the package, and bye-bye error.

It turns out that, outside of all the wonderful Google advice that is available, there are two other reasons why you may see this error.

  1. The account running dtexec may not have read permission to the SSIS package/.dtsx file
  2. The account running dtexec may not have execute permission on the dtexec.exe folder. YES, I said folder, because if you just grant it on the file then you’ll run into more problems with the underlying dependencies.
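To rule both of these out quickly, a check along these lines can confirm the permissions before you go back to Google. This is a minimal cross-platform sketch in Python (the function name and paths are hypothetical; on an actual Windows server you would more likely inspect the ACLs with icacls or Get-Acl while logged in as the failing account):

```python
import os

def check_dtexec_permissions(package_path, dtexec_dir):
    """Return a list of permission problems that could trigger error 0x80131534."""
    problems = []
    # The executing account needs read access to the .dtsx file itself.
    if not os.access(package_path, os.R_OK):
        problems.append("no read permission on package: " + package_path)
    # It also needs execute/traverse rights on the whole dtexec folder,
    # not just dtexec.exe, because of the DLLs dtexec loads from there.
    if not os.access(dtexec_dir, os.X_OK):
        problems.append("no execute permission on folder: " + dtexec_dir)
    return problems
```

An empty list means both checks passed for the account running the script; anything else points you straight at the permission to fix.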

The moral of the story is: use your troubleshooting skills first before running off to Google for the answer.

MS Ignite Day #2 Recap

Day 2 was my opportunity to be focused and dive into a few technologies that were really piquing my interest, namely Azure Synapse Analytics and SQL Server 2019. Since I heard the Synapse announcement I have wanted to seek out every session on it and understand everything I could possibly learn, and today certainly went a long way toward helping me understand this new technology. I also realized there is so much more to learn, but there are still 3 more days left.

Everything I keep learning about Azure Synapse is certainly impressive, and I will stand by the statement that Snowflake, Google BigQuery and Amazon Redshift had better be paying attention, because the complete integrated story, performance benchmarks and cost model have closed some big gaps that previously existed with Azure Data Lake and Azure SQL DW. One of the sessions I had a chance to attend was “Democratizing the Data Lake with On-Demand Capabilities in Azure Synapse Analytics” with Josep Aguilar Saborit and Charles Feddersen, and I came away with two very important takeaways. First was the ease of querying and importing data; in a nutshell, get the data into Azure Data Lake Storage and it’s available. With that said, it seems that Microsoft and most of the industry are standardizing on Parquet as the file format of choice, which, if you are not familiar with it, is an extremely easy format to get almost any file into (Azure Data Factory can really help here). Parquet files are first-class citizens within Azure Synapse, and querying or importing this data is almost effortless.

Now let’s talk a little bit about the architecture behind Synapse. While, yes, Azure Synapse can be thought of as Azure SQL DW 2.0, you need to take everything you may already know about Azure SQL DW architecture and throw it out the window. The most important thing to remember is that the architecture behind this new service has been reinvented from the ground up with a cloud-first design. If you remember, Azure SQL DW was not a cloud-first design; it was basically SQL Server PDW ported into the cloud with very little architectural change.

Now let me caveat the next few statements: I still have a lot to learn about the Azure Synapse architecture, but here are a few items that I have picked up on and been very impressed by. Both the failure detection and the hotspot detection that are baked into the solution are quite impressive. First, understand that queries are natively treated as distributed and aligned with both hash and user partitions across the execution nodes; see the picture below (sorry, these are photos taken during presentations).

With failure detection being handled at the node level, any failure at a node can be automatically recovered from and its work redistributed to the other active nodes. That’s right: unless for some reason you lose all nodes, your queries will not fail. If you are familiar with more legacy architectures, you know that this was not always the case.

Now for hotspot detection: imagine a scenario where a query task was partitioned in a way that left one node doing a skewed amount of the work. Synapse will recognize this skewed load and automatically redistribute it to other nodes that are currently underutilized. Oh wait, I am not done with the amazing special sauce: if multiple nodes become overloaded and redistribution can’t be done, then additional nodes can simply be added to achieve the best performance.

Now let’s move on to SQL Server 2019 and the new features in Azure SQL, which I am just going to lump together. This is not a comprehensive list of all the new features, just some things I wanted to call out. Let’s start with Azure SQL Serverless. If you are looking for a way to use Azure SQL without incurring a crazy cost, then take a look at Azure SQL Serverless for scenarios such as Dev/Test environments (I’m sure there are other use cases, but this is what I am focusing on for now). Typically, Dev/Test environments don’t have the requirement to be up 100% of the time, so you don’t want the cost of running a database 24/7; this is where serverless comes into play, as you are only charged for the time you are actually running queries. I will go out on a limb and say this could easily be a 40-50% cost savings for existing Dev/Test environments that are running on current Azure SQL Database and incurring 24/7 costs.
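As a rough back-of-the-envelope sketch of where an estimate like that comes from (illustrative rates only, not real Azure pricing; actual serverless billing is per vCore-second plus storage, and rates vary by region and tier):

```python
# Hypothetical Dev/Test workload: the database is only busy during work hours.
HOURS_PER_MONTH = 730
provisioned_cost = HOURS_PER_MONTH * 1.00   # always-on tier at a made-up $1/hour
active_hours = 8 * 22                       # ~8 busy hours across 22 weekdays
serverless_cost = active_hours * 2.00       # made-up $2/active hour (compute premium)

savings = 1 - serverless_cost / provisioned_cost
print(f"estimated savings: {savings:.0%}")  # ~52% with these assumptions
```

Plug in your own workload’s active hours and rates; the point is simply that a database idle most of the day pays for very little under serverless.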

One more feature set I want to quickly hit on is Intelligent Query Processing. This new feature covers quite a number of areas of improvement, so let’s just hit a quick few. The first is memory grant feedback: if on the initial execution the memory grant was too small and caused a spill to disk, subsequent executions will be adjusted to properly size the grant, reducing spills and improving query execution time. Table variable deferred compilation now compiles a table variable the way a temp table has always been compiled, so its cardinality is no longer assumed to be a row count of 1. While I am not a huge table variable fan, if you do have a workload that is very table-variable heavy, then just upgrading to SQL Server 2019 should result in performance gains.

One last note is on Azure Data Studio, as I had a chance to see Alan Yu’s session on Azure Data Studio focused on notebooks. If you are not using Azure Data Studio and notebooks, hurry up and start learning; you will thank me later. Plus, Azure Data Studio now has PowerShell notebooks and is a one-stop shop for almost any database developer. Azure Data Studio November release

MS Ignite Day#1 Recap


What an amazing first day at Microsoft Ignite, which all started at about 6:30 this morning when I got in line to be one of the first 3,000 who got the opportunity to see the keynote from Satya Nadella. There were so many new announcements made during the keynote that I am sure I am going to miss a few in my round-up, but I will try to cover all the ones I can remember along with my own take on their impact. So here we go!!


First, it was obvious that the theme of the keynote was that this is the year of the evolution of the hybrid cloud, where we will start seeing the growth, adoption and seamless integration of cloud services beginning to be deployed to edge devices or even your own datacenter hardware, all while still being managed through the Azure portal. Sticking to this theme, let’s first talk about all the announcements that are directly related to it:

I am going to focus here on what I think is the most significant game changer, and that is Azure Arc. The ability to now deploy Azure-based services to any infrastructure with a Kubernetes endpoint, including AWS and Google Cloud, will transform how people think about cloud architecture and deployments, all while still managing those deployments from a single interface using the Azure portal or automation through the Azure API. For me this enables developing solutions that are truly hybrid, bringing applications and data sources into a hybrid model where previously I was stuck on-premises due to many different factors. From a database perspective, organizations have sensitive data for which, due to security concerns, regulatory requirements or latency SLAs, migrating to the cloud is a near impossibility; now with Azure Arc we can still have those databases on premises in our data center while taking advantage of features such as Hyperscale, updates and security monitoring. Since I could keep going on and on about this feature, I will stop here for now and just end with: it’s a game-changing announcement.


Next let’s quickly review some of the Azure Data Services announcements:


While all these new announcements are amazing in their own right, I want to take a few minutes to call out Azure Synapse Analytics; think of this as Azure DW rearchitected. It has many similarities and traits that started with Azure SQL DW and its predecessor, SQL Server PDW; however, the similarities stop there, as Azure Synapse Analytics was re-architected from the ground up to be a true cloud-based platform. If you used Azure DW, one of the issues was getting the performance you desired at a reasonable cost, which is why competitors such as Snowflake and Redshift were taking a majority of the market share. While demos always highlight just how fast a system can be (who would put a demo together to show how slow a platform is?), the performance and cost differences now with Azure Synapse Analytics change the entire conversation around what the future of your data warehouse should look like, with capabilities such as automatic data ingestion that didn’t even exist before. One more quick note: Azure Databricks also already has connectors for Azure Synapse Analytics, opening up so many more possibilities for data ingestion and machine learning.


Lastly, I will wrap up with the last area I wanted to highlight, which is Power Apps. There were several announcements here, such as Power Virtual Agents, integration with Microsoft Teams and increased data sizes for Power BI. This is an area I have not paid much attention to over the last year or so, and most of the new capabilities were released a couple of weeks ago with Release Wave 2. What really opened my eyes here was all the capabilities that Power Apps now has; if you have not looked at it recently, you really should. The integrated security and controls really allow you to extend capability to business users without creating a culture of shadow IT, as it’s still a centrally managed platform where you can prevent data sprawl and unauthorized data usage. I’m seriously looking at this as a way to eliminate custom SharePoint development going forward.


In a nutshell Day 1 at Ignite was a great day and boy did my brain hurt at the end of the day as I was thinking of all the new possibilities of a true hybrid cloud architecture.



Employees don’t work for you; you work for them

Alex Yates (b | t) is hosting this month’s T-SQL Tuesday (the 119th edition), asking us to write about something in our IT careers that we have changed our minds about. I was going to write about database design techniques but instead decided to go with a non-technical topic: employees don’t work for you, you work for them.


Where I went wrong


In one of my first management roles, almost 15 years ago, I was required to attend what was called “Management Training”, and the theme was very much focused on you being the boss and your staff working for you. This made sense to me at the time, as my experience to that date was that your manager tells you what to do and you do it, and if you don’t, expect to be chewed out. Everything just seemed to fit for me, and I took every skill that was presented to me, ran with it and became the “Boss”. At first my teams were successful because my team members had the same attitude I used to have, which was “I don’t want to lose my job”; I was creating a culture of fear, not one of growth.

Over time I started to notice that some of my team members were starting to resent me and other coworkers, and the team dynamics were becoming ugly. I can distinctly remember another mistake I made with one team member who was not producing at the same level as the others. My training told me to start “managing out” this employee; if you are not familiar with this technique, it is where you create a performance improvement plan and a culture that makes the person want to leave, rather than firing them and having to pay unemployment and severance packages. While putting this plan into action, training was provided for most employees except this person, with the hope it would make them want to leave. They did want to leave, and so did some of my other team members, who looked at me as the jerk who was singling a person out. I felt like a complete failure and knew that there had to be a better way.

I could go on and on about the mistakes I made with the attitude of being the “boss”, but the takeaway I want everyone to understand is: don’t run your team with an iron fist.


When I started to change


Ten years ago, after many mistakes and failing to build successful long-term teams, I woke up and realized that something had to change, but I still wasn’t sure exactly what to do. I sought out a mentor that I could look up to, so I could build successful teams and transition from manager to leader. One thing I certainly did recognize is that being a manager didn’t feel right to me; in fact, it never felt natural, but I was always told “it’s just because you’re young and new to this”. What it really turned out to be is that I knew deep down inside that it was time for a CHANGE, and I needed to understand what it really means to lead a team and not dictate to a team.

I started listening to podcasts and YouTube videos from various people such as @garyvee and @simonsinek, where I started to learn about things such as empathy, compassion and the idea that you can’t succeed until you learn to stop caring what everyone else thinks. Do what you feel is right and follow some simple principles, and people will want to work with you. With all this in mind, I started to change, and I changed over to working for my team and not expecting them to work for me.


What does it mean to work for your team?


During this time of transformation I changed how I ran things: I stopped dictating and started to trust my employees. Rather than giving out tasks and micromanaging, I started to give goals and allow them to figure out how we get there, with realistic timelines that we would work out together. Another key element is keeping your team informed and engaged on why we are doing things and what our goals are, and, when things change, updating them immediately.

Furthermore, you need to lead by example and show them what you expect by doing it yourself first, and stop thinking that just because you work 60 hours a week it means they will work 60 hours a week. Some of them may, because that’s what they want to do, and others may not, and that is OK. Remember, they don’t get paid what you get paid, and they don’t have the same investment in the work or the company that you have either. Get to know your team members as well: what drives them and what their goals are, and then help them understand what it takes to achieve them. When learning what their goals are, don’t forget that not everyone is going to have the same drive as you; some may be happy with exactly where they are, and you should applaud them for that, because happiness always comes first.


In summary, change is good; always strive to change what you truly know is wrong.






Back with a new focus


First let me say: wow, I can’t believe that it has been almost 3 years since I last blogged.

For the last few years my career has been focused on transitioning from being a pure hands-on technologist to management and leadership roles. This transition has had its ups and downs, with many mistakes made along the way. Those mistakes, however, have been what opened doors and led to great success. One of the most important lessons I learned along this journey is: don’t be afraid to admit and share your mistakes with others. In sharing and asking for advice, you will quickly find out that you’re not the first one to make these mistakes, and there is a world of advice out there to help you along the way.

So why, after three years, do I pick up the proverbial pen again? First was a conversation I had recently with Erik Darling (@ErikDarlingData), where I said I have all these blog posts sitting in draft that I just haven’t finished. His response was “Who cares, just post them, it’s just a blog”. I laughed and then thought: wow, he’s right, it is just a blog and perfection is not required.

Second, I’ve started mentoring a few different people and one of the topics we have been discussing is self-marketing. This led me to realize that I should lead by example to show the power of enlightenment and engagement with your peers.

So stay tuned for my new series on Leadership: Lessons learned.


Leadership Part 1: Team building

Leadership Part 2: Talent acquisition and Employee Retention

Leadership Part 2: Talent acquisition and Employee Retention

The two most costly human capital expenses an organization will have are the onboarding and offboarding of staff. Just take a moment to think about that, and if you’re scratching your head or think I’m crazy, let’s take a moment to explore both of these costs.

Loss of staff costs: (Loss of intrinsic knowledge) + (Loss of productivity) + (Irreplaceable skills) + (Loss of leadership)

Talent acquisition: (Staff time to interview) + (Search firm fees) + (Employee referral fees) + (Onboarding time)


Example: backfilling a Database Administrator in NYC.

In this example we will say we phone-screened 20 candidates, brought 5 in for in-person interviews, and had 2 do a final meeting with the CTO.

Blended rate for staff = $150/hr

Staff time to interview = (20 phone screens * 1 hour * $150) + (5 onsite * 2 hours * $150) + (2 final * 2 hours * $150) = $5,100

Search firm fees = $180,000 * 25% = $45,000

Onboarding time = 30 days to become productive = $180,000 * (30/365) = $14,794

Talent acquisition cost = $5,100 + $45,000 + $14,794 = $64,894
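The arithmetic above fits in a few lines; a quick sketch (Python here just for the math, using the same illustrative numbers) makes it easy to plug in your own rates:

```python
BLENDED_RATE = 150   # $/hour for interviewing staff
SALARY = 180_000     # illustrative NYC DBA salary

# (20 phone screens * 1h) + (5 onsite * 2h) + (2 finals * 2h), all at the blended rate
staff_interview_time = (20 * 1 + 5 * 2 + 2 * 2) * BLENDED_RATE
search_firm_fee = SALARY // 4            # typical 25% placement fee
onboarding = int(SALARY * 30 / 365)      # ~30 days before the hire is productive

total = staff_interview_time + search_firm_fee + onboarding
print(total)  # 5100 + 45000 + 14794 = 64894
```

Swap in your own salary, fee percentage and ramp-up time and the total changes quickly, which is exactly the point.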

This isn’t even factoring in the loss-of-staff calculation or the projects that couldn’t be completed. As you can see, the cost goes up very fast.

Why do I bring this up? Well, that brings up the next point, which is investment in your current staff.

Investment in current staff

We so easily get lost in budgets and bottom lines, asking the in-the-moment question of “Do we really have to spend that now?” I’d ask back: “Do you really want to avoid a $100 lunch bill for the team and instead pay to replace them?” If you’re working for an organization that can’t see the value of investing in staff, then you should be fighting to change that mentality on a daily basis. This is the difference between being a leader and being a follower. There should never be a moment that you aren’t fighting for your team, as this not only sets an example for the rest of leadership, it also shows your team that you have their back and believe in them.

Now investment of staff can come in many ways and here are just a few ideas:

  • Team building events
  • Monthly lunches
  • Quarterly happy hour
  • Training
  • Conferences
  • Flexible work schedules

Out of this list I just want to take a little bit of time to focus on a couple of the areas above. The first is team building, which covers the first three items in the list. Team building is an investment in not only the team but the individuals. Studies have shown that employees who have a “friend” at work are more likely to stay and have higher productivity. See the Gallup Productivity Poll or CNBC’s “Why work friendships are critical”. These are relationships you should foster and support, and not get in the way of. They will develop naturally with your overall support of the team; there is no need to create crazy get-to-know-you events to force them.

Next, let’s talk training. This is a simple one. Budget for training and encourage your team members to attend training, conferences or your local user groups. If there is a user group that meets during the day, encourage them to take the time to attend. This will have significant long-term benefits; for example, in the technical space it will result in better innovation and less technical debt. I’ll save technical debt for another post, as that is a topic all its own. I can’t stress enough: reinvest in your employees and provide training for them so they always feel they are growing and not becoming stagnant.

Lastly, flexible work schedules. I know this is a hot-button topic at a number of different organizations, and it wraps up into the much broader topic of work/life balance. The first piece of advice I will give here is that if you don’t already have a policy in place to handle things such as remote work, sick time and vacation rules, create one now. Having clearly defined rules around what is expected when someone is working remotely makes sure both employer and employee are on the same page. For example, here are a few areas I make sure are covered in a remote policy:

  • Working hours
  • Response times
  • Work Phone forwarding
  • Messaging (Skype, Teams etc…)

Remote work is only one part of work/life balance; you also want to consider flexible work hours and vacation policy. Flexible work hours are rather straightforward: learn what your employees’ needs are and how you can combine them with business needs to make sure that you have both business coverage and work/life balance. Let’s say Employee A has to drop the kids off at school before coming to work; allowing them to come in at 9:30 rather than 9:00 could mean they don’t have to find childcare and incur extra cost. This results in reduced costs for the employee and usually a higher satisfaction level with their employer. Vacation policy is not always so obvious, but encouraging employees not only to use their Paid Time Off but also to fully unplug is key to overall satisfaction. There should never be pressure to check emails or take phone calls during PTO, and in some cases I have actually taken this as far as creating a policy that states that if you check or reply to an email while on vacation, your account will be locked out.

In conclusion don’t be the horrible boss, value your employees and realize that without them and their dedication you can’t effectively lead.

Leadership Part 1: Team building

One of the most challenging aspects of being a good leader is building a team that can support the initiatives you set forth. Not having a team that will stand behind you, and one that you will fight for, can be the difference between success and failure. I’m sure most of this sounds like a no-brainer, and you’re reading this going “Duh, he’s stating the obvious.” While it may seem obvious, it’s still important to reiterate.

I have read numerous books, see below, on how to build strong teams, and they have all helped mold my approach. One of the most influential experiences for me was having the opportunity to spend some time around the military and learning how to build a team when you don’t necessarily get to choose your team members. Learning how to find the strengths and weaknesses and put everyone in the best position to succeed is an art form that takes practice, and you will undoubtedly, like myself, make mistakes along the way.

Team Building Books

Get to know your team members

I learned the hard way how important it is to get to know each team member. Depending on your team size it may be difficult to get to know everyone, as time and logistics will be a hurdle, but I can promise that the more you get to know your team, the stronger your team will be. So how do you go about getting to know your team? For me, the following structure has worked.

  • Direct Reports
    • Weekly one on one sessions
    • Required to also have weekly one on one sessions with their direct reports
  • One Level down
    • Monthly touch points
  • Two levels down and below
    • Quarterly full team meetings with small group break outs

The one-on-ones are really the key to getting to know your team. During this time, make sure not to just focus on work-related projects but also get to know your team members personally. Keep them light; laughing is OK and allowed at work, despite what HR may say. Also make sure that your direct reports report back to you on their own staff and what’s going on. This helps you gauge how things are going with the team as a whole and keeps your team accountable for making sure that your process is being followed.

Quarterly full team meetings can be an opportunity to get out of the office and plan a team-building event. These don’t have to be those silly get-to-know-you events we all went through when starting school. Find activities that challenge everyone to work as a team in a fun way. One of the best events I ever did was relay go-kart racing, which required strategy, planning and learning the strengths of all the different team members.

Don’t put all the rock stars on the same team

Now that you know the different personalities on your teams and who the rock stars or potential rock stars are, it brings up the question: how do I group my teams? My rule, and yes this is my rule, you don’t have to listen to me, is to never put all my rock stars on the same team. Why, you may ask? Well, it’s a lesson learned from doing exactly that in the past. What it left me with was one team that got all the accolades, did a majority of the work and worked on all the cool projects. As for my other teams, they became resentful and felt like they were in a dead-end position.

This made me take a step back and change my approach to team structure, focusing more on having strong leads on each team: leads who were not only experts in their field but also knew how to mentor and share their knowledge with the rest of their team to build them up. Team leads on any of my teams are required to have three key attributes:

  • Expert level understanding of their domain
  • Ability to mentor and train other team members
  • Ability to say “I don’t know”

The last one to me is key, as I would much rather get the right answer than have them feel they need to answer on the spot, make something up or assume they might know the answer. “I don’t know” to me means you don’t know the answer now, but you will get the answer, and get it right, in a short time frame.

For the actual organization and team structure, make sure to put individuals together who don’t have conflicts or personality clashes. It’s a careful blend, with the goal of getting the optimal output out of each team. Don’t be afraid of change either; the expectation that you’re going to get the perfect teams on your first attempt is unreasonable. Conflicts will arise, and some can be worked through and some can’t.

With that said, don’t make it too easy either by putting a team together just because they are already friends or have worked extensively together in the past. You want to make sure that everyone is slightly challenged and not too deep in a comfort zone.

There is nothing wrong with keeping people on their toes. I compare this to a new pair of shoes: sure, they are uncomfortable at first and need to be broken in, but they don’t prevent you from walking; they just remind you that walking isn’t as easy as you remember in that old broken-in pair.

Set a defined career path

This may be the most important thing you can do as a leader. People who don’t know what their career path looks like tend to become detached and don’t put in the investment needed to be the best team member possible. It’s also one of the primary reasons someone will start to look for another job, as they start to feel like they don’t matter and look for a place that will appreciate them. Not everyone on your team may want to climb that proverbial corporate ladder, so career paths are not always a way to move up; sometimes they are just a way to make sure people keep their skills current and are recognized for their achievements.

What is a career path? First, let me start with what it’s not: a dictatorship or an ultimatum. It should be a collaborative effort, making sure to align not only the company’s and team’s goals but also the individual’s goals. Employee A may have aspirations to be a CEO one day, and Employee B may be perfectly content just doing their current job and knowing every day what time they are going home. So, with that said, here’s my definition:

  • Collaborative exercise that is reviewed at least yearly (I prefer quarterly)
  • Geared towards company and personal goals
  • Measurable goals
  • Defined outcomes for meeting goals

I want to take an extra few moments to focus on the third point, measurable goals. This is one of the easiest mistakes to make: a goal that can be left up to interpretation will lead to very uncomfortable and possibly contentious conversations. Don’t let yourself or any of your leaders go down this path, as no good can come from it.

Example of bad goals:

  • Become an expert in area X
  • Lead a conversation in area X
  • Increase revenue

Example of good goals:

  • Pass exam X before 4/1/2019
  • Achieve certification X before 12/31/2019
  • Serve as technical lead for 1 project completed with no schedule delays before 12/31/2019
  • Raise top line revenue by 25% by 7/1/2019

As you can see, the second list is well defined, leaves no room for interpretation and leads to clear-cut conversations.

SQL 2016 SSIS new feature AutoAdjustBufferSize

If you have not already seen it in the What’s New in Integration Services list, there is a new property for Data Flows called AutoAdjustBufferSize.  Why is this important and what does it actually do?

First, let’s remember that in SQL 2014 we had a max DefaultBufferSize of 100MB, and to try to optimize this we would use calculations to figure out our row size and how to set DefaultBufferMaxRows.  I’ve covered these techniques in the past in this video, or you can read about it via Ginger Grant’s blog.

Now in SQL 2016 we don’t have to worry as much about performing these calculations, as setting AutoAdjustBufferSize to true will take care of this for us.  So what happens when I set this to true?  SSIS will now ignore the DefaultBufferSize property and set the buffer size based on the calculation DefaultBufferMaxRows * row size.  In this calculation the row size is actually computed at runtime, while DefaultBufferMaxRows is a property value you specify either when designing the package or through an expression, and it can be tied to a parameter.

I do have a big caveat: after doing some extensive testing, you need to be aware that if you have a wide file and try to match SQL Server’s max rowgroup size of 1,048,576 rows, you can get a “Buffer failed while allocating” error due to not enough free memory on your system to auto-adjust the buffer size (see below).  So, as with anything, I recommend you test, test and then test again, as I have seen some significant performance increases from changing this setting, but also the dangers of not having enough memory.
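To see why wide rows are the danger, a quick estimate helps. This is a sketch in Python (the row sizes are made up; the real row size is computed by SSIS at runtime from your column widths):

```python
def estimated_buffer_bytes(max_rows, row_size_bytes):
    # With AutoAdjustBufferSize = True, SSIS sizes the buffer as roughly
    # DefaultBufferMaxRows * runtime row size, ignoring DefaultBufferSize.
    return max_rows * row_size_bytes

# Columnstore max rowgroup size, a common DefaultBufferMaxRows target:
ROWGROUP = 1_048_576

narrow = estimated_buffer_bytes(ROWGROUP, 50)    # 50-byte rows  -> 50 MiB, usually fine
wide = estimated_buffer_bytes(ROWGROUP, 2_000)   # 2 KB rows     -> ~2 GB per buffer,
                                                 # prime "Buffer failed while allocating" territory
print(narrow // 2**20, "MiB vs", wide // 2**20, "MiB")
```

The same DefaultBufferMaxRows that is harmless for a narrow feed can demand gigabytes per buffer on a wide file, which is exactly the failure described above.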

Buffer error:


Use PowerShell to automate your Azure DB creation

Why create your databases manually if you don’t have to?  If there is one thing that I hate, it is doing extra work when it’s not needed.  With the magic of PowerShell and the Azure PowerShell modules, you can easily automate the creation of your Azure database.  You can download the scripts I put together from here, and below is a little walk-through of the scripts.

There are three files contained in the zip folder that will be needed.

  • AzureConfig.xml (Contains the username and encrypted password to login to Azure)
  • PasswordEncrypt.ps1 (utility to set the encrypted password for the AzureConfig.xml)
  • CreateAzureDBv12.ps1 (Script to create your Azure Database)

So why did I create my automation this way?  Well, like I said before, I hate doing extra work, and that includes having to manually log in, but I am also security conscious, so I wanted a configuration file that would hold my username and password and store that password as an encrypted value.

To use this, all you have to do is enter your username into the username tag in AzureConfig.xml, then launch PasswordEncrypt.ps1 and enter the password to encrypt when prompted; it will update the AzureConfig.xml file.  There is a default key used in PasswordEncrypt.ps1 that you are free to change, but if you do, you must change it both in PasswordEncrypt.ps1 on line 2 and in CreateAzureDBv12.ps1 on line 36.



And yes, as I am writing this I realize I should have put the key into the config file as well, but I will save that for the next update.

Now you are ready to use the CreateAzureDBv12 script which has the following parameters.

  • SQLDatabaseName (The name for your database)
  • Location (The Azure location you wish to create your database in)
  • SQLServerName (The name of your SQL Server.  This can be an existing or new server.)
  • StartIPAddress (the start IP Address to create the Azure Firewall rule for your server to allow connections)
  • EndIPAddress (The End IP Address to create the Azure Firewall rule for your server to allow connections)

There is also a default rule created to allow Azure IP addresses.  You can comment this out if you don’t want to create it.  I leave it in because almost all the databases I create are used by an Azure Web Application or Power BI.

I hope you enjoy these, and I would love to hear your feedback.